
Volume 7, Issue 5, May – 2022 International Journal of Innovative Science and Research Technology

ISSN No:-2456-2165

Biometric Authentication - Person Identification using Iris Recognition
Ankur Jyoti Sarmah1, Gaurab Narayan Dutta, Dakshee Lahkar, Neelotpal Talukdar, Bimrisha Lahkar
1 Assistant Professor, Electronics and Telecommunication Engineering Department, Assam Engineering College,
Guwahati, Assam, India

Abstract:- Biometrics are used to offer person identification by measuring and analyzing people's unique physiological and behavioral characteristics. As demands on this authentication technique increase, a number of biometric modalities have evolved and come into use, such as fingerprint readers, face identifiers and iris scanners. The human eye also has features and patterns that are distinctive enough for recognition, and the low cost of the equipment needed to apply the technique has made it one of the most preferred frameworks for security purposes. As a reliable biometric authentication method, it is considered the most explicit.

In this paper, a standard database, namely CASIA v3, is used as the source of iris images, and different algorithms are used to recognize the targeted areas and perform the recognition process. The algorithms we use are morphology binary to remove the unwanted background; the Circular Hough Transform to detect the iris; Gabor's filter to extract features of the human eye; and the KNN classifier to finally authenticate the query image. The images from the database are acquired and pre-processed before being run through these algorithms.

Keywords:- IRIS, Gabor's filter, KNN classifier, CASIA v3, Circular Hough Transform.

I. INTRODUCTION

Biometrics is the measurement of the physiological and behavioral characteristics of a person that can be used to identify them digitally and grant them access to devices, data or systems. Every individual can be pinpointed by physiological or behavioral characteristics, which is the basic postulate of biometric authentication. A biological characteristic is a set of physiological characteristics determined by a biometric system, and it consists of both physiological and biological aspects; this collection includes DNA, hand, face, earlobe and iris. Biometrics concerning behavioral characteristics involves the assessment of features which are not biological or physiological but which are dominated by a biometric system; it comprises four categories, namely signature, voice, gait and keystroke recognition. The components of a biometric device include a scanning device or reader for recording the biometric data, software for transforming the scanned data into a standardized digital format, a database for storing and comparing the recorded data, and a storage device for securing the biometric data. Passwords, ID cards and preset codes are common methods of identifying individuals, but they can be lost, forgotten or stolen. Therefore there is a need for effective and definitive methods of personal identification. The iris recognition model is one of the most natural methods used to identify individuals, as the iris is fixed and permanent and does not change throughout life. Moreover, it is impossible for two people, even twins, to have the same iris features [1].

Using the unique characteristics of an individual's iris, iris recognition enables people to be identified. Other methods such as fingerprint readers may not be sufficient to handle the large variation across populations, and they can also be copied, so iris recognition is considered the most secure biometric authentication method available. A typical iris recognition system is made up of four stages, namely image acquisition, segmentation, feature extraction and pattern matching [2]. Additionally, pre-processing can be done, in which the image is resized, converted to grayscale and enhanced with the histogram equalization technique. In segmentation, the iris and pupil are segmented and the pixel values are binarized. Feature extraction collects the appropriate features and pixel points from the image, and the classification result shows whether the query image is authenticated or not.

II. IRIS STRUCTURE

Fig. 1: Human Eye Anatomy

Iris characterization or patterns are unique to every individual; even twins have significantly different iris features or patterns, yet these features remain the same throughout a person's lifetime. Thus, this method is now recognized as providing efficacious identification of a person without contact and with a high degree of confidence.

There are two portions to the front of the eye: the sclera, or "white" of the eye, and the cornea. The sclera consists of closely interwoven fibers, with a small segment in the front and center known as the cornea. The cornea consists of fibers arranged in a regular fashion; conveniently, this makes the cornea transparent, allowing light to filter in. Behind the cornea is the anterior chamber, which is filled with a fluid known as the aqueous humor.



A spongy tissue, the ciliary body, arranged around the edge of the cornea, constantly produces the aqueous humor. Immersed in the aqueous humor is a ring of muscles commonly referred to as the iris [3]. Apparently, the term was first used in the sixteenth century to refer to this multicolored portion of the eye [4]. In front of the lens, the iris extends outward in a circular pattern, with a variable opening in the center otherwise known as the pupil [5]. The iris is composed of two bands of muscles: the dilator, which contracts to enlarge the pupil, and the sphincter, which contracts to reduce the pupil size; together these control the pupil [6].

III. METHODOLOGY

Gabor's technique is used for filtering purposes in this study, and the KNN classifier is used for image classification. The CASIA Iris V3 database is used to test these techniques [7]. The following flow diagram illustrates the steps involved in iris recognition:

Fig. 2: Steps involved in the iris recognition technique

A. Image Acquisition or Image Capturing:
Iris recognition starts with the step of image acquisition or image capturing. Different people's irises have different sizes and colors, so this step is quite complicated. In this project, iris pictures are obtained from the dataset.

B. Pre-Processing of Input Images:
This process is done in three steps, namely image resizing, grayscale conversion and histogram equalization. Each step is described in detail below:

a) Image Resizing:
To eliminate the problem of different iris sizes and resolutions within a single database, the images are resized. Obtaining the same features on all images is facilitated by this method [6]. In this step, the image is resized to 256x256 pixels. The figures below show the primary image of the CASIA database (320x280 pixels) and the resized image (256x256 pixels).

Fig. 3: Original image to resized image

b) Gray-Scale Conversion:
Grayscale refers to shades of gray that do not have apparent color; as a rule, black is the darkest shade and white is the lightest. A grayscale image has only one color channel. The colored image is converted to grayscale, and if the image is already gray it remains the same. Red, green and blue (RGB) are the three color components of a pixel in an image. In order to convert a color image to grayscale, its RGB values (24 bits) are converted to grayscale values (8 bits).

Fig. 4: Original image to grayscale image

c) Histogram Equalization:
Histogram equalization is an example of a histogram modelling technique. By modifying the intensity distribution of the histogram, it allows one to adjust the dynamics and the contrast of a picture. Using this technique, the cumulative probability function associated with the image is given a linear trend. Therefore, this technique enhances the contrast and quality of the image.

Fig. 5: Original image to grayscale image
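
To make the pre-processing pipeline concrete, a minimal MATLAB sketch of the three steps above is given below. The file name is a placeholder, the functions assume the Image Processing Toolbox is available, and only the 256x256 target size is taken from the paper:

% Pre-processing sketch: resize, grayscale conversion and histogram equalization.
img = imread('casia_eye.bmp');       % placeholder file name for one CASIA v3 image
img = imresize(img, [256 256]);      % a) resize to 256x256 pixels
if size(img, 3) == 3
    img = rgb2gray(img);             % b) convert 24-bit RGB to 8-bit grayscale
end
img = histeq(img);                   % c) histogram equalization to enhance contrast
imshow(img);                         % display the pre-processed iris image
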
C. Segmentation
An iris picture is segmented by automatically detecting the boundary region of the iris and the pupil in order to exclude the surrounding areas. The main purpose of segmentation is to remove non-useful regions, such as the parts outside the iris, and then convert the remaining parts to a suitable template during normalization [6]. The segmentation process is done with two techniques: morphology binary and the Circular Hough Transform.

a) Morphology Binary:
Using a small template, a structuring element is placed at different locations in the image and compared with the neighboring pixels.
The operation determines whether the element overlaps the neighboring pixels or intersects (strikes) the neighborhood. Thus, a binary image is created with non-zero pixel values where the element overlaps or strikes pixels, and zero values where it neither overlaps nor strikes any pixel. Dilation and erosion operations then take place, which respectively add and remove pixels at object boundaries. In the morphological binary operation, the image (iris and pupil) is segmented by morphology binary and the remaining background is eliminated. Here a threshold value is calculated and, using this threshold, the image is converted into a binary image [8].

Fig. 6: Original image to binarized image
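
A hedged MATLAB sketch of this thresholding-and-morphology step is shown below; the Otsu threshold, the disk-shaped structuring element and its radius are illustrative assumptions rather than values reported in the paper:

% Morphology binary sketch: global threshold, then erosion and dilation to clean the mask.
level = graythresh(img);            % threshold computed from the grayscale image (Otsu's method)
bw = im2bw(img, level);             % convert the grayscale image to a binary image
se = strel('disk', 3);              % structuring element (assumed: disk of radius 3 pixels)
bw = imerode(bw, se);               % erosion removes pixels from object boundaries
bw = imdilate(bw, se);              % dilation adds pixels back to object boundaries
imshow(bw);                         % binarized iris/pupil regions with background removed
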
b) Circular Hough Transformation:
Circles in images can be found using the Circular Hough Transform (CHT). This transform is used to determine the parameters of simple geometric objects. Using the CHT, the radius and center coordinates of the pupil and iris boundaries can be determined. In the iris segmentation step there are three outputs: the iris circle, the pupil circle, and the image with noise. After obtaining the coordinates of the iris region, the coordinates of the pupil region are extracted. Mathematically, a circle in the 2D plane can be described by:

(x − a)² + (y − b)² = r²

where (a, b) is the center of the circle and r is the radius, with x = a + r cos θ and y = b + r sin θ.

The edges are detected by an edge-detection technique and the boundary of the circle is obtained. For a point (x, y) on the boundary of the circle, we draw numerous circles of increasing radius, so that together they form the shape of an inverted right-angled cone whose apex is at (x, y, 0). This step is iterated for every point on the edge of the circle, and the circle parameters (a, b, r) are determined by the intersection of the inverted right-angled cones. The common intersection point of the conic surfaces gives the center of the circle, after which the radius is found from the boundary and the center of the circle.

Fig. 7: Original image to Circular Hough Transform
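
The Image Processing Toolbox exposes the CHT through imfindcircles; a possible sketch of locating the pupil and iris boundary circles is given below. The radius ranges (in pixels) are assumptions that depend on the resized image scale and are not values taken from the paper:

% Circular Hough Transform sketch: locate the pupil and iris boundary circles.
% Assumes at least one circle is found in each radius range.
[pupilCenters, pupilRadii] = imfindcircles(img, [20 60], 'ObjectPolarity', 'dark');   % pupil: small dark circle
[irisCenters, irisRadii]   = imfindcircles(img, [60 120], 'ObjectPolarity', 'dark');  % iris: larger outer boundary
imshow(img); hold on;
viscircles(pupilCenters(1,:), pupilRadii(1), 'Color', 'r');   % draw the strongest pupil circle
viscircles(irisCenters(1,:),  irisRadii(1),  'Color', 'b');   % draw the strongest iris circle
hold off;
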

D. Feature Extraction
Extracting features from the iris picture is the principal stage in an iris recognition system; in particular, this system depends on the features drawn out from the iris pattern. The features are extracted in the form of pixel values. Gabor's filtering method is used for this step. It checks whether there is any particular frequency component in the image in a definite direction in the spatial domain; it is a filtering method that treats features as textures. It extracts 12 feature values corresponding to the different filtering levels. Each is summarized by two statistics, the mean and the standard deviation, so there are 24 feature values in total.

2D Gabor filters are applied to the image data to gather its phase information. The breakdown of a signal is done using a quadrature pair of Gabor filters, with the real part given by a cosine modulated by a Gaussian and the imaginary part given by a sine modulated by a Gaussian. The real and imaginary elements of the filter are also known as the even- and odd-symmetric components. The mid-frequency of the filter is given by the frequency of the sine or cosine wave, and the bandwidth is given by the breadth of the Gaussian. The 2D Gabor filter over the domain (x, y) is given by:

G(x, y) = exp(−π[((x − x0)/α)² + ((y − y0)/β)²]) · exp(−2πi[u0(x − x0) + v0(y − y0)])

where (x0, y0) is the position, (α, β) are the breadth and length, and (u0, v0) gives the modulation, whose frequency is:

ω0 = √(u0² + v0²)

The filter consists of a real part and an imaginary part, which provide orthogonal directions in the image. These two sections can be combined into a complex number or used separately.

Complex:
g(x, y; λ, θ, ψ, σ, γ) = exp(−(x′² + γ²y′²) / (2σ²)) · exp(i(2π x′/λ + ψ))

Real:
g(x, y; λ, θ, ψ, σ, γ) = exp(−(x′² + γ²y′²) / (2σ²)) · cos(2π x′/λ + ψ)

Imaginary:
g(x, y; λ, θ, ψ, σ, γ) = exp(−(x′² + γ²y′²) / (2σ²)) · sin(2π x′/λ + ψ)

where
x′ = x cos θ + y sin θ
y′ = −x sin θ + y cos θ



Let x = [x1 x2]^T be the image coordinates. The impulse response of a filter g(x) is given by:

g_mn(x) = (1 / (2π a_n b_n)) · exp(−(1/2) x^T A_mn x) · exp(j k_0mn^T x)

Here, the matrix A_mn gives the bandwidth and the orientation (direction) selectivity of the filter:

A_mn = [cos φ_m  −sin φ_m; sin φ_m  cos φ_m] · [a_n^−2  0; 0  b_n^−2] · [cos φ_m  sin φ_m; −sin φ_m  cos φ_m]

The real and imaginary sections of the impulse response of the filter are shown in the figure below:

Fig. 8: Real and imaginary sections of a Gabor filter

If G(k) is the transfer function of the Gabor filter, it is given by:

G_mn(k) = exp(−(1/2) (k − k_m)^T A_mn^−1 (k − k_m))

where k = [k1 k2]^T is the spatial frequency.

2D Gabor filters offer a very rich approach to image processing, particularly for extracting features for texture analysis and for segmenting an iris picture. By varying the parameters we can inspect the consistency and features in a particular direction, and we can also change the size of the image region being examined around the region of interest. The 2D Gabor filters in the discrete domain are given by:

G_e[i, j] = B · exp(−(i² + j²) / (2σ²)) · cos(2πf(i cos θ + j sin θ))
G_s[i, j] = C · exp(−(i² + j²) / (2σ²)) · sin(2πf(i cos θ + j sin θ))

where B and C are normalizing factors to be determined [10].

Fig. 9: Gabor filter extraction
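
As a rough MATLAB sketch of this feature-extraction stage, a bank of 12 Gabor filters can be built with the Image Processing Toolbox functions gabor and imgaborfilt, and each response reduced to its mean and standard deviation. The specific wavelengths and orientations below are assumptions, since the paper only states that 12 filter responses yield 24 values:

% Gabor feature-extraction sketch: 12 filters, each summarized by mean and standard deviation.
wavelengths  = [4 8 16];                        % assumed filter scales (pixels per cycle)
orientations = [0 45 90 135];                   % assumed orientations in degrees
gaborBank = gabor(wavelengths, orientations);   % 3 x 4 = 12 Gabor filters
mag = imgaborfilt(img, gaborBank);              % magnitude responses, size 256x256x12
features = zeros(1, 2 * numel(gaborBank));
for k = 1:numel(gaborBank)
    response = mag(:, :, k);
    features(2*k - 1) = mean(response(:));      % mean of the k-th filter response
    features(2*k)     = std(response(:));       % standard deviation of the k-th response
end
% 'features' is the 24-element vector passed on to the classifier.
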

E. Classification
The classification result shows whether the query image is authenticated or not. The K-Nearest Neighbors (KNN) classifier is used for this purpose. In this step, three inputs are given: the test feature, the train feature and the label feature.
• Test feature - represents the features of the input image.
• Train feature - represents the features of the dataset images.
• Labels feature - determines whether the image is authenticated or not.

This classifier is one of the simplest algorithms for categorizing objects, and it operates under supervision. The algorithm stores all accessible cases and assigns classifications to new cases based on a similarity measure, typically a distance function. As a non-parametric approach, KNN has long been used in statistical estimation and pattern recognition. Several machine-learning applications use KNN, including regression and pattern recognition. It is also very easy to implement and highly effective in a variety of applications that use classification techniques. In general, it takes into account the values closest to the test data in a feature space. It is non-parametric because the variables used are not assumed to follow particular probability distributions. The KNN algorithm classifies objects in three steps:
• It measures the distance between each training vector and the test vector.
• It chooses the K closest vectors.
• It computes the average of the closest vectors' distances.

In other words, in KNN the output is a class membership. Objects are classified according to the number of votes they receive from their neighbors; with K nearest neighbors, K = 1 means that the object is assigned to the class of its single nearest neighbor. KNN does not prescribe a particular way to choose K, so the best value has to be selected empirically. In contrast to other algorithms, KNN does not require a separate training phase; rather, it uses the existing examples directly, classifying inputs for a given K once the inputs and the training set are provided [6].
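
A minimal MATLAB sketch of this step, assuming the Statistics and Machine Learning Toolbox and that trainFeatures, trainLabels and testFeature are hypothetical variables holding the 24-value Gabor vectors of the database images, their subject labels, and the query-image vector respectively:

% KNN classification sketch: train on the dataset feature vectors, then classify the query image.
% trainFeatures: N x 24 matrix of Gabor features from the database images (assumed variable).
% trainLabels:   N x 1 vector of subject labels (assumed variable).
% testFeature:   1 x 24 Gabor feature vector of the query image (assumed variable).
mdl = fitcknn(trainFeatures, trainLabels, 'NumNeighbors', 1, 'Distance', 'euclidean');
predictedLabel = predict(mdl, testFeature);   % K = 1: the query takes the class of its nearest neighbor
% The predicted label is then compared with the claimed identity to decide authentication.
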

Using the K-Nearest Neighbor classifier, the iris image is checked for its performance rate in terms of ACCURACY, SPECIFICITY and SENSITIVITY.

Accuracy: correctly classified samples out of all classified samples, i.e. (TP + TN) / (TP + TN + FP + FN).

Sensitivity: correctly classified positive samples, i.e. the true positive rate, TP / (TP + FN).

Specificity: correctly classified negative samples, i.e. the true negative rate, TN / (FP + TN).

where TN means true negative, TP means true positive, FN means false negative and FP means false positive [11].
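
For reference, these three rates can be computed directly from the confusion-matrix counts; a small MATLAB sketch, assuming tp, tn, fp and fn have already been counted from the classification results:

% Performance-rate sketch from confusion-matrix counts (tp, tn, fp, fn assumed already counted).
accuracy    = (tp + tn) / (tp + tn + fp + fn);   % correctly classified samples / all samples
sensitivity = tp / (tp + fn);                    % true positive rate
specificity = tn / (fp + tn);                    % true negative rate
fprintf('Accuracy %.4f, Sensitivity %.4f, Specificity %.4f\n', accuracy, sensitivity, specificity);
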
Further, the query image is checked for authentication.

IV. APPLICATIONS



• There is a demand for accurate, secure and cost-efficient alternatives to personal identification numbers (PINs) and passwords in e-security systems for securely accessing financial transactions, bank accounts or credit cards.
• Biometrics provides the platform to fulfil these demands, as an individual's biometric data is distinctive and non-transferable.
• Using biometrics as a method of network authentication adds a unique identifier that is extremely difficult to reproduce, making it the most reliable authentication method.
• It is being used to replace passports, and for aviation security, database access and computer login, premises control, and birth certificates.

V. CONCLUSION

In this paper, we examine how the system behaves when an input image is acquired, passed through all the algorithms, and finally tested to determine whether the query image is authenticated or not. The system was tested on a number of eye images, and the performance rates given by accuracy, sensitivity and specificity were calculated. For a particular image, the accuracy, sensitivity and specificity found were 92.8571%, 85.7143% and 100% respectively. In this project an iris image is taken and pre-processed, unwanted background is eliminated, the iris and pupil are segmented, iris features are extracted, and the image is finally classified for matching. The results of these calculations have been analyzed using different mathematical functions. MATLAB R2016a was used for development, and the emphasis was placed on the software side rather than on hardware for capturing the eye image and performing the recognition.

REFERENCES

[1.] Dhamala, P., Multibiometric Systems, Ph.D. thesis, Norwegian University of Science and Technology, 2012.
[2.] Li, Y. H. and Savvides, M., Iris Recognition Overview, in: Li, S. Z. and Jain, A. (eds.), Encyclopedia of Biometrics, Springer, Boston, MA, 2009.
[3.] Kalyani R. Rawate and P. A. Tijare, Human Identification Using IRIS Recognition, International Journal of Scientific Research in Science, Engineering and Technology, vol. 3, issue 2, 2017, pp. 578-584.
[4.] Wildes, R. P., Iris Recognition: An Emerging Biometric Technology, Proceedings of the IEEE, vol. 85, no. 9, 1997, pp. 1348-1363.
[5.] Daugman, J. G., High Confidence Visual Recognition of Persons by a Test of Statistical Independence, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 11, 1993, pp. 1148-1161.
[6.] Mohamed Alhamrouni, Iris Recognition by Image Processing Techniques, 2017.
[7.] Minhas and M. Y. Javed, Iris Feature Extraction Using Gabor Filter, 2009 International Conference on Emerging Technologies, 2009, pp. 252-255.
[8.] Nick Efford, Digital Image Processing: A Practical Introduction, Pearson Education, 2000.
[9.] Das, A., Recognition of Human Iris Patterns, 2012.
[10.] Muhammad Younus Javed, Iris Feature Extraction Using Gabor Filter, 2009.
[11.] https://towardsdatascience.com/confusion-matrix-for-your-multi-class-machine-learning-model-ff9aa3bf7826

