Iris Scanning Seminar Report
Chapter 1: INTRODUCTION
1.1 Biometric Technology
1.2 Classification of Biometrics
Disadvantages
Even though conventional methods of identification are indeed inadequate, biometric
technology is not as pervasive and widespread as many of us expect it to be. One of the
primary reasons is performance. Issues affecting performance include accuracy, cost, and
integrity.
Accuracy
Even if a legitimate biometric characteristic is presented to a biometric system,
correct authentication cannot be guaranteed. This could be because of sensor noise,
limitations of processing methods, and the variability in both the biometric characteristic
and its presentation.
Cost
Cost is tied to accuracy; many applications, such as logging on to a PC, are sensitive to
the additional cost of including biometric technology.
Figure 2: Comparison between cost and accuracy of biometric modalities (iris, retina, hand, finger, face, signature, voice).
Chapter 2: IRIS RECOGNITION
2.1 Anatomy of the Human Iris
The iris is a thin circular diaphragm, which lies between the cornea and the lens of the human
eye. A front-on view of the iris is shown in Figure 1.1. The iris is perforated close to its
center by a circular aperture known as the pupil. The function of the iris is to control the
amount of light entering through the pupil, and this is done by the sphincter and the dilator
muscles, which adjust the size of the pupil. The average diameter of the iris is 12 mm, and the
pupil size can vary from 10% to 80% of the iris diameter [2]. The iris consists of a number of
layers; the lowest is the epithelium layer, which contains dense pigmentation cells. The
stromal layer lies above the epithelium layer, and contains blood vessels, pigment cells and
the two iris muscles. The density of stromal pigmentation determines the colour of the iris.
The externally visible surface of the multi-layered iris contains two zones, which often differ
in colour [3]: an outer ciliary zone and an inner pupillary zone, divided by the collarette,
which appears as a zigzag pattern.
2.3.1 Segmentation
The first stage of iris recognition is to isolate the actual iris region in a digital eye image.
The iris region, shown in Figure 1.1, can be approximated by two circles, one for the
iris/sclera boundary and another, interior to the first, for the iris/pupil boundary. The eyelids
and eyelashes normally occlude the upper and lower parts of the iris region. Also, specular
reflections can occur within the iris region, corrupting the iris pattern. A technique is
required to isolate and exclude these artifacts as well as to locate the circular iris region.
This can be done using the following techniques:
Hough Transform
Daugman's Integro-Differential Operator
Hough Transform:
The Hough transform is a standard computer vision algorithm that can be used to
determine the parameters of simple geometric objects, such as lines and circles, present in an
image. The circular Hough transform can be employed to deduce the radius and centre
coordinates of the pupil and iris regions. An automatic segmentation algorithm based on the
circular Hough transform is employed by Wildes et al. First, an edge map is generated by
calculating the first derivatives of intensity values in an eye image and then thresholding the
result. From the edge map, votes are cast in Hough space for the parameters of circles passing
through each edge point. These parameters are the centre coordinates xc and yc, and the
radius r, which are able to define any circle according to the equation
(x - xc)^2 + (y - yc)^2 = r^2
A maximum point in the Hough space will correspond to the radius and centre coordinates of
the circle best defined by the edge points. Wildes et al. make use of the parabolic Hough
transform to detect the eyelids, approximating the upper and lower eyelids with parabolic
arcs, which are represented as
(-(x - hj) sin(θj) + (y - kj) cos(θj))^2 = aj ((x - hj) cos(θj) + (y - kj) sin(θj))
where aj controls the curvature, (hj, kj) is the peak of the parabola, and θj is the angle of
rotation relative to the x-axis.
In performing the preceding edge detection step, Wildes et al. bias the derivatives in the
horizontal direction for detecting the eyelids, and in the vertical direction for detecting the
outer circular boundary of the iris, as illustrated in the figure below. The motivation
for this is that the eyelids are usually horizontally aligned, and also the eyelid edge map will
corrupt the circular iris boundary edge map if all gradient data is used. Taking only the vertical
gradients for locating the iris boundary will reduce the influence of the eyelids when performing
the circular Hough transform, and not all of the edge pixels defining the circle are required for
successful localisation. Not only does this make circle localisation more accurate, it also
makes it more efficient, since there are fewer edge points to cast votes in the Hough space.
Figure 2.1: a) an eye image; b) corresponding edge map; c) edge map with only horizontal
gradients; d) edge map with only vertical gradients.
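The voting scheme described above can be sketched in a few lines of NumPy. This is an illustrative toy, not the implementation of Wildes et al.; the function name, the theta sampling density, and the accumulator layout are all choices of this sketch:

```python
import numpy as np

def circular_hough(edge_map, r_min, r_max):
    """Accumulate votes in (xc, yc, r) space for every edge pixel and
    return the circle best supported by the edge points."""
    h, w = edge_map.shape
    radii = np.arange(r_min, r_max + 1)
    acc = np.zeros((w, h, len(radii)), dtype=np.int32)
    ys, xs = np.nonzero(edge_map)
    thetas = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    for x, y in zip(xs, ys):
        for ri, r in enumerate(radii):
            # Each edge point votes for all centres at distance r from it.
            xc = np.round(x - r * np.cos(thetas)).astype(int)
            yc = np.round(y - r * np.sin(thetas)).astype(int)
            ok = (xc >= 0) & (xc < w) & (yc >= 0) & (yc < h)
            np.add.at(acc, (xc[ok], yc[ok], ri), 1)
    # The maximum point in Hough space gives the best circle parameters.
    xc, yc, ri = np.unravel_index(acc.argmax(), acc.shape)
    return xc, yc, radii[ri]
```

In practice the edge map would come from a thresholded gradient image, with the directional biasing described above applied before voting.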
Daugman’s Integro-differential Operator
Daugman makes use of an integro-differential operator for locating the circular iris and pupil
regions, and also the arcs of the upper and lower eyelids. The integro-differential operator is
defined as
max(r, x0, y0) | Gσ(r) * ∂/∂r ∮(r, x0, y0) I(x,y) / (2πr) ds |
where I(x,y) is the eye image, r is the radius to search for, Gσ(r) is a Gaussian smoothing
function, and s is the contour of the circle given by r, x0, y0. The operator searches for the
circular path where there is maximum change in pixel values, by varying the radius and
centre x and y position of the circular contour. The operator is applied iteratively with the
amount of smoothing progressively reduced in order to attain precise localisation. Eyelids are
localised in a similar manner, with the path of contour integration changed from circular to an
arc.
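As a rough illustration, the sketch below evaluates the smoothed radial derivative of circular intensity means over a small set of candidate centres. The discretisation, the candidate list, and the smoothing width are assumptions of this sketch, not Daugman's implementation:

```python
import numpy as np

def integro_differential(img, r_min, r_max, centres, sigma=2.0):
    """For each candidate centre, compute the mean intensity on circles
    of increasing radius, differentiate with respect to r, blur with a
    Gaussian, and keep the (x0, y0, r) with the largest response."""
    thetas = np.linspace(0, 2 * np.pi, 128, endpoint=False)
    radii = np.arange(r_min, r_max + 1)
    # Discrete Gaussian kernel for smoothing along r.
    k = np.arange(-3 * sigma, 3 * sigma + 1)
    g = np.exp(-k ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    h, w = img.shape
    best_val, best = 0.0, None
    for (x0, y0) in centres:
        means = []
        for r in radii:
            xs = np.clip(np.round(x0 + r * np.cos(thetas)).astype(int), 0, w - 1)
            ys = np.clip(np.round(y0 + r * np.sin(thetas)).astype(int), 0, h - 1)
            means.append(img[ys, xs].mean())  # contour integral / (2*pi*r)
        d = np.convolve(np.gradient(np.array(means)), g, mode='same')
        i = np.abs(d).argmax()
        if abs(d[i]) > best_val:
            best_val, best = abs(d[i]), (x0, y0, radii[i])
    return best
```

The real operator also reduces the smoothing progressively over iterations to refine the localisation; the sketch uses a single fixed sigma.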
Figure: Stages of segmentation. Top left) original eye image; Top right) two circles overlaid
for the iris and pupil boundaries, and two lines for the top and bottom eyelids; Bottom left)
horizontal lines drawn for each eyelid from the lowest/highest point of the fitted line; Bottom
right) probable eyelid and specular reflection areas isolated (black areas).
2.3.2 Image Normalisation
Once the iris region is successfully segmented from an eye image, the next stage is to
transform the iris region so that it has fixed dimensions in order to allow comparisons. The
dimensional inconsistencies between eye images are mainly due to the stretching of the iris
caused by pupil dilation from varying levels of illumination. Other sources of inconsistency
include varying imaging distance, rotation of the camera, head tilt, and rotation of the eye
within the eye socket. The normalisation process will produce iris regions which have the
same constant dimensions, so that two photographs of the same iris under different conditions
will have characteristic features at the same spatial location.
This is done using the following technique.
The homogenous rubber sheet model devised by Daugman [1] remaps each point within the
iris region to a pair of polar coordinates (r,θ), where r is on the interval [0,1] and θ is an
angle in [0,2π]:
I(x(r,θ), y(r,θ)) → I(r,θ)
with
x(r,θ) = (1 - r) Xp(θ) + r Xl(θ)
y(r,θ) = (1 - r) Yp(θ) + r Yl(θ)
where I(x,y) is the iris region image, (x,y) are the original Cartesian coordinates, (r,θ) are the
corresponding normalised polar coordinates, and Xp,Yp and Xl,Yl are the coordinates of the
pupil and iris boundaries along the θ direction. The rubber sheet model takes into account
pupil dilation and size inconsistencies in order to produce a normalised representation with
constant dimensions. In this way the iris region is modelled as a flexible rubber sheet
anchored at the iris boundary with the pupil centre as the reference point.
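A minimal sketch of the rubber sheet remapping follows. For simplicity only, it assumes the pupil and iris circles are given as (x, y, r) triples and uses nearest-neighbour sampling; the resolutions are illustrative:

```python
import numpy as np

def rubber_sheet(img, pupil, iris, radial_res=16, angular_res=128):
    """Sample the iris region along radial lines from the pupil boundary
    to the iris boundary, producing a fixed radial_res x angular_res
    rectangle regardless of pupil dilation."""
    xp, yp, rp = pupil
    xi, yi, ri = iris
    thetas = np.linspace(0, 2 * np.pi, angular_res, endpoint=False)
    rs = np.linspace(0, 1, radial_res)
    out = np.zeros((radial_res, angular_res))
    h, w = img.shape
    for j, t in enumerate(thetas):
        # Boundary points along direction theta.
        x_p, y_p = xp + rp * np.cos(t), yp + rp * np.sin(t)
        x_i, y_i = xi + ri * np.cos(t), yi + ri * np.sin(t)
        for i, r in enumerate(rs):
            # x(r,theta) = (1-r)*Xp(theta) + r*Xl(theta), same for y.
            x = (1 - r) * x_p + r * x_i
            y = (1 - r) * y_p + r * y_i
            out[i, j] = img[int(np.clip(np.round(y), 0, h - 1)),
                            int(np.clip(np.round(x), 0, w - 1))]
    return out
```

Because r runs over [0,1] between the two boundaries, a dilated and a constricted pupil map to rectangles of identical size, which is the point of the model.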
Fig : Illustration of the normalisation process for two images of the same iris taken under
varying conditions
Normalisation of two eye images of the same iris is shown in the figure above. The pupil is
smaller in the bottom image; however, the normalisation process is able to rescale the iris
region so that it has constant dimensions. In this example, the rectangular representation is
constructed from
10,000 data points in each iris region. Note that rotational inconsistencies have not been
accounted for by the normalisation process, and the two normalised patterns are slightly
misaligned in the horizontal (angular) direction. Rotational inconsistencies will be accounted
for in the matching stage.
Wavelet Encoding
Wavelets can be used to decompose the data in the iris region into
components that appear at different resolutions. Wavelets have the advantage over the
traditional
Fourier transform in that the frequency data is localised, allowing features which occur at the
same position and resolution to be matched up. A number of wavelet filters, also called a
bank of wavelets, is applied to the 2D iris region, one for each resolution with each wavelet a
scaled version of some basis function. The output of applying the wavelets is then encoded in
order to provide a compact and discriminating representation of the iris pattern.
Gabor Filters
Gabor filters are able to provide optimum conjoint representation of a signal in
space and spatial frequency. A Gabor filter is constructed by modulating a sine/cosine wave
with a Gaussian. This is able to provide the optimum conjoint localisation in both space and
frequency, since a sine wave is perfectly localised in frequency, but not localised in space.
Modulation of the sine with a Gaussian provides localisation in space, though with loss of
localisation in frequency. Decomposition of a signal is accomplished using a quadrature pair
of Gabor filters, with a real part specified by a cosine modulated by a Gaussian, and an
imaginary part specified by a sine modulated by a Gaussian. The real and imaginary filters
are also known as the even symmetric and odd symmetric components respectively. The
centre frequency of the filter is specified by the frequency of the sine/cosine wave, and the
bandwidth of the filter is specified by the width of the Gaussian. Daugman makes use of a
2D version of Gabor filters [1] in order to encode iris pattern data. A 2D Gabor filter over an
image domain (x,y) is represented as
G(x,y) = exp(-π[(x - x0)²/α² + (y - y0)²/β²]) exp(-2πi[u0(x - x0) + v0(y - y0)])
where (x0,y0) specify position in the image, (α,β) specify the effective width and length, and
(u0,v0) specify modulation, which has spatial frequency ω0 = √(u0² + v0²). The odd
symmetric and even symmetric 2D Gabor filters are shown in the figure below.
Fig: Quadrature pair of 2D Gabor filters. Left) real component, or even symmetric filter,
characterised by a cosine modulated by a Gaussian; right) imaginary component, or odd
symmetric filter, characterised by a sine modulated by a Gaussian.
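The quadrature pair can be generated directly from this definition. The grid size and parameter values below are illustrative only:

```python
import numpy as np

def gabor_2d(size, x0, y0, alpha, beta, u0, v0):
    """Complex 2D Gabor filter: a Gaussian envelope (widths alpha, beta)
    modulating a complex exponential with spatial frequencies (u0, v0).
    The real part is the even-symmetric (cosine) filter; the imaginary
    part is the odd-symmetric (sine) filter."""
    ys, xs = np.mgrid[0:size, 0:size]
    envelope = np.exp(-np.pi * (((xs - x0) ** 2) / alpha ** 2 +
                                ((ys - y0) ** 2) / beta ** 2))
    carrier = np.exp(-2j * np.pi * (u0 * (xs - x0) + v0 * (ys - y0)))
    return envelope * carrier

g = gabor_2d(size=33, x0=16, y0=16, alpha=8, beta=8, u0=0.1, v0=0.0)
even, odd = g.real, g.imag  # even- and odd-symmetric components
```

Convolving the normalised iris pattern with such a filter and quantising the phase of the complex response to two bits per pixel is how the Iris Code is formed.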
Matching:
For matching, the Hamming distance was chosen as a metric for recognition, since bit-wise
comparisons were necessary. The Hamming distance algorithm employed also incorporates
noise masking, so that only significant bits are used in calculating the Hamming distance
between two iris templates. When taking the Hamming distance, only those bits in the
iris patterns that correspond to '0' bits in the noise masks of both iris patterns are used in the
calculation. The Hamming distance is thus calculated using only the bits generated from the
true iris region, and this modified Hamming distance formula is given as
HD = || (Xj XOR Yj) AND Xnj' AND Ynj' || / (N - || Xnj OR Ynj ||)
where Xj and Yj are the two bit-wise templates to compare, Xnj and Ynj are the
corresponding noise masks for Xj and Yj, and N is the number of bits represented by each
template. Although, in theory, two iris templates generated from the same iris will have a
Hamming distance of 0.0, in practice this will not occur. Normalisation is not perfect, and
also there will be some noise that goes undetected, so some variation will be present when
comparing two intra-class iris templates. In order to account for rotational inconsistencies,
when the Hamming distance of two templates is calculated, one template is shifted left and
right bit-wise and a number of Hamming distance values are calculated from successive
shifts. This bit-wise shifting in the horizontal direction corresponds to rotation of the original
iris region by an angle given by the angular resolution used. If an angular resolution of 180 is
used, each shift will correspond to a rotation of 2 degrees in the iris region. This method is
suggested by Daugman , and corrects for misalignments in the normalised iris pattern caused
by rotational differences during imaging. From the calculated Hamming distance values, only
the lowest is taken, since this corresponds to the best match between two templates. The
number of bits moved during each shift is given by two times the number of filters used,
since each filter will generate two bits of information from one pixel of the normalised
region. The actual number of shifts required to normalise rotational
inconsistencies will be determined by the maximum angle difference between two images of
the same eye, and one shift is defined as one shift to the left, followed by one shift to the
right. The shifting process for one shift is illustrated in Figure below.
Fig: An illustration of the shifting process. One shift is defined as one shift left, and one shift
right of a reference template. In this example one filter is used to encode the templates, so
only two bits are moved during a shift. The lowest Hamming distance, in this case zero, is
then used since this corresponds to the best match between the two templates.
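The masked Hamming distance and the shifting search can be sketched as follows. The template length, shift range, and bits-per-shift values are illustrative, and the mask convention (0 = valid iris bit) follows the text above:

```python
import numpy as np

def hamming_distance(X, Y, Xmask, Ymask):
    """Masked Hamming distance: compare only bits whose mask is 0
    (i.e. bits generated from the true iris region) in BOTH templates."""
    valid = (Xmask == 0) & (Ymask == 0)
    n = valid.sum()
    return np.count_nonzero((X != Y) & valid) / n if n else 1.0

def best_match(X, Y, Xmask, Ymask, shifts=8, bits_per_shift=2):
    """Shift one template left and right bit-wise and keep the lowest
    distance, compensating for rotational misalignment between the two
    iris images. With two bits per filter, one shift of bits_per_shift
    bits corresponds to one angular step of the normalised pattern."""
    best = 1.0
    for s in range(-shifts, shifts + 1):
        k = s * bits_per_shift
        best = min(best, hamming_distance(np.roll(X, k), Y,
                                          np.roll(Xmask, k), Ymask))
    return best
```

Taking the minimum over shifts implements the "only the lowest is taken" rule: the best-aligned comparison stands in for the rotation-corrected distance.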
2.2.4 Accuracy
The Iris Code constructed from these complex measurements provides such a tremendous
wealth of data that iris recognition offers a level of accuracy orders of magnitude higher than
other biometrics. Some statistical representations of the accuracy follow:
The odds of two different irises returning a 75% match (i.e. having a Hamming
distance of 0.25): 1 in 10^16.
Equal Error Rate (the point at which the likelihood of a false accept and false reject
are the same): 1 in 12 million.
The odds of two different irises returning identical Iris Codes: 1 in 10^52.
Iris recognition can also account for those ongoing changes to the eye and iris which
are defining aspects of living tissue. The pupil's expansion and contraction, a constant
process separate from its response to light, skews and stretches the iris. The algorithms
account for such alteration after having located the boundaries of the iris. Dr. Daugman
draws the analogy to a "homogenous rubber sheet" which, despite its distortion, retains
certain consistent qualities. Regardless of the size of the iris at any given time, the algorithm
draws on the same amount of data, and its resultant Iris Code is stored as a 512-byte
template. A question asked of all biometrics is their ability to detect fraudulent samples. Iris
recognition can account for this in several ways: the detection of pupillary changes,
reflections from the cornea, detection of contact lenses atop the cornea, and the use of
infrared illumination to determine the state of the sample eye tissue.
The decidability index measures the degree to which any decrease in one error rate must be
paid for by an increase in the other error rate when decision thresholds are varied. It reflects
the intrinsic separability of the two distributions.
Decidability metrics such as d' can be applied regardless of what measure of
similarity a biometric uses. In the particular case of iris recognition, the similarity measure is
a Hamming distance: the fraction of bits in two Iris Codes that disagree. The distribution on
the left in the graph shows the result when different images of the same eye are compared:
typically about 10% of the bits may differ. But when Iris Codes from different eyes are
compared, with rotations to look for and retain the best match, the distribution on the right is
the result. The fraction of disagreeing bits is very tightly packed around 45%. Because of the
narrowness of this right-hand distribution, which belongs to the family of binomial extreme-
value distributions, it is possible to make identification decisions with astronomical levels of
confidence. For example, the odds of two different irises agreeing just by chance in more
than 75% of their Iris Code bits is only one in 10^14. These extremely low
probabilities of getting a false match enable the iris recognition algorithms to search through
extremely large databases, even of a national or planetary scale, without confusing one Iris
Code for another despite so many error opportunities.
The solid curve fitted to the histogram is a plot of such a binomial probability distribution. It
gives an extremely exact fit to the observed distribution, as may be seen by comparing the
solid curve to the data histogram.
The above two aspects show that Hamming Distance comparisons between different
Iris Codes are binomially distributed, with 244 degrees of freedom. The important corollary
of this conclusion is that the tails of such distributions are dominated by factorial
combinatorial factors, which attenuate at astronomic rates. This property makes it extremely
improbable that two different Iris Codes might happen to agree just by chance in, say, more
than 2/3rds of their bits (making a Hamming Distance below 0.33 in the above plot). The
confidence levels against such an occurrence are the reason why iris recognition can afford to
search extremely large databases, even on a national scale, with negligible probability of
making even a single false match.
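The attenuation of the binomial tail can be checked numerically. This sketch uses the 244 degrees of freedom cited above with p = 0.5 per bit; the report's quoted odds come from Daugman's specific parameters, so the exact figure here differs, but it shows how quickly factorial combinatorics crush the tail:

```python
from math import comb

def binomial_tail(n, p, k_max):
    """P[X <= k_max] for X ~ Binomial(n, p): the chance that two
    unrelated Iris Codes disagree in at most k_max of n independent bits."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(k_max + 1))

# Probability of a Hamming distance below 0.33 between different eyes,
# i.e. fewer than ~81 disagreeing bits out of 244:
prob = binomial_tail(244, 0.5, int(0.33 * 244))
```

Even with only 244 effective bits, this probability is already below one in a million, which is why a threshold near 0.33 supports exhaustive search of very large databases.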
5.1 Advantages
Highly protected, internal organ of the eye
Externally visible; patterns imaged from a distance
Iris patterns possess a high degree of randomness
o Variability: 244 degrees-of-freedom
o Entropy: 3.2 bits per square-millimeter
o Uniqueness: set by combinatorial complexity
Changing pupil size confirms natural physiology
Pre-natal morphogenesis (7th month of gestation)
Limited genetic penetrance of iris patterns
Patterns apparently stable throughout life
Encoding and decision-making are tractable
Image analysis and encoding time: 1 second
Decidability index (d-prime): d' = 7.3 to 11.4
Search speed: 100,000 Iris Codes per second
Chapter 6: Applications
Iris-based identification and verification technology has gained acceptance in a
number of different areas. Applications of iris recognition technology are limited only by
imagination. The important applications are the following:
ATMs and iris recognition: In the U.S., many banks have incorporated iris recognition
technology into ATMs for the purpose of controlling access to one's bank
account. After enrolling once (a "30 second" process), the customer need only
approach the ATM, follow the instruction to look at the camera, and be recognised
within 2-4 seconds. The benefit of such a system is that a customer who
chooses to use a bank's ATM with iris recognition will have a quicker, more secure
transaction.
Applications of this type are well suited to iris recognition technology. First, being
fairly large, iris recognition physical security devices are easily integrated into the mountable,
sturdy apparatuses needed for access control. The technology's phenomenal accuracy can be
relied upon to prevent unauthorized release or transfer and to identify repeat offenders re-
entering prison under a different identity.
Computer login: The iris as a living password.
National border controls: the iris as a living passport.
Telephone call charging without cash, cards or PINs.
Ticketless air travel.
Premises access control (home, office, laboratory etc.).
Driving licenses and other personal certificates.
Entitlements and benefits authentication.
Forensics, birth certificates, tracking missing or wanted person
Credit-card authentication.
Automobile ignition and unlocking; anti-theft devices.
Anti-terrorism (e.g. suspect screening at airports).
Secure financial transaction (e-commerce, banking).
Internet security, control of access to privileged information.
“Biometric—key Cryptography “for encrypting/decrypting messages.
CONCLUSION
The technical performance capability of the iris recognition process far surpasses that
of any biometric technology now available. The Iridian process is designed for rapid
exhaustive search of very large databases: a distinctive capability required for authentication
today. The
extremely low probabilities of getting a false match enable the iris recognition algorithms to
search through extremely large databases, even of a national or planetary scale. As iris
technology grows less expensive, it could very likely unseat a large portion of the biometric
industry, e-commerce included; its technological superiority has already allowed it to make
significant inroads into identification and security venues which had been dominated by other
biometrics. Iris-based biometric technology has always been an exceptionally accurate one,
and it may soon grow much more prominent.
REFERENCES
1. Daugman J (1999) "Wavelet demodulation codes, statistical independence, and
pattern recognition." Institute of Mathematics and its Applications, Proc.2nd IMA-
IP. London: Albion, pp 244 - 260.
2. Daugman J (1999) "Biometric decision landscapes." Technical Report No TR482,
University of Cambridge Computer Laboratory.
3. Daugman J and Downing C J (1995) "Demodulation, predictive coding, and spatial
vision." Journal of the Optical Society of America A, vol. 12, no. 4, pp 641 - 660.
4. Daugman J (1993) "High confidence visual recognition of persons by a test of
statistical independence." IEEE Transactions on Pattern Analysis and Machine
Intelligence, vol. 15, no. 11, pp 1148 - 1160.
5. Daugman J (1985) "Uncertainty relation for resolution in space, spatial frequency, and
orientation optimized by two-dimensional visual cortical filters." Journal of the
Optical Society of America A, vol. 2, no. 7, pp 1160 - 1169.
6. Sanderson, J. Erbetta. Authentication for secure environments based on iris scanning
technology. IEE Colloquium on Visual Biometrics, 2000.
7. J. Daugman. How iris recognition works. Proceedings of 2002 International
Conference on Image Processing, Vol. 1, 2002
8. R. Wildes, J. Asmuth, G. Green, S. Hsu, R. Kolczynski, J. Matey, S. McBride. A
system for automated iris recognition. Proceedings IEEE Workshop on Applications
of Computer Vision, Sarasota, FL, pp. 121-128, 1994.
9. Y. Zhu, T. Tan, Y. Wang. Biometric personal identification based on iris patterns.
Proceedings of the 15th International Conference on Pattern Recognition, Spain, Vol.
2, 2000.
SOURCE LINKS
www.irisscan.com
www.cl.cam.ac.uk/users/jgd1000
www.sensar.com