Non-Orthogonal View Iris Recognition System
Abstract—This paper proposes a non-orthogonal view iris recognition system comprising a new iris imaging module, an iris segmentation module, an iris feature extraction module, and a classification module. A dual-charge-coupled device camera was developed to capture four-spectral (red, green, blue, and near-infrared) iris images which contain useful information for simplifying the iris segmentation task. An intelligent random sample consensus iris segmentation method is proposed to robustly detect iris boundaries in a four-spectral iris image. In order to match iris images acquired at different off-axis angles, we propose a circle rectification method to reduce the off-axis iris distortion. The rectification parameters are estimated using the detected elliptical pupillary boundary. Furthermore, we propose a novel iris descriptor which characterizes an iris pattern with multiscale step/ridge edge-type maps. The edge-type maps are extracted with the derivative of Gaussian and the Laplacian of Gaussian filters. The iris pattern classification is accomplished by edge-type matching, which can be understood intuitively with the concept of classifier ensembles. Experimental results show that the equal error rate of our approach is only 0.04% when recognizing iris images acquired at different off-axis angles within ±30°.

Index Terms—Iris feature extraction, iris imaging system, iris recognition, iris segmentation, non-ideal iris recognition, non-orthogonal view iris recognition.

Manuscript received September 26, 2008; revised April 24, 2009 and August 20, 2009. First version published November 3, 2009; current version published March 5, 2010. This work was supported in part by the National Science Council, Taiwan, under Grant NSC 94-2213-E-260-004, and by the Ministry of Economic Affairs, Taiwan, under Grant 97-EC-17-A-02-S1-032. This paper was recommended by Associate Editor S. Pankanti.

C.-T. Chou, S.-W. Shih, and V. W. Cheng are with the Department of Computer Science and Information Engineering, National Chi Nan University, Puli, Nantou 54561, Taiwan (e-mail: [email protected]; [email protected]; [email protected]).

W.-S. Chen is with the Department of Electrical Engineering, National Chi Nan University, Puli, Nantou 54561, Taiwan (e-mail: [email protected]).

D.-Y. Chen is with the Department of Electrical Engineering, Yuan Ze University, Chungli 320, Taiwan (e-mail: [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TCSVT.2009.2035849

I. Introduction

IDENTIFYING or verifying one's identity using biometrics has attracted considerable attention in recent years. Biometric authentication uses information specific to a person, such as a fingerprint, face, utterance, or iris pattern. Therefore, it is more convenient and more secure than traditional authentication methods. Among all biometric authentication methods, iris recognition appears to be a very attractive one because of its high recognition rate.

The first automatic iris recognition system was developed by Daugman [1]. His algorithm is accurate and efficient. Large-scale tests have been conducted, and the results show that his system can achieve an almost zero error rate [2]–[5]. Wildes et al. also proposed another iris recognition system at almost the same time [6]. The accuracy of their system is also very high. However, its computation cost is very high and, hence, it is suitable mainly for identity verification. Traditionally, iris recognition was developed based on the orthogonal-view assumption (OVA), i.e., the optical axis of the camera is approximately aligned with the normal vector of the pupillary circle. Traditional iris recognition methods usually comprise the following five main steps.

1) Iris Image Acquisition: Iris images are usually acquired using a near-infrared (NIR) camera, because the reflectance of the human iris is high in the NIR band. However, if the iris color is not too dark, its texture can also be revealed using a visible light imaging system [7], [8]. Existing iris image acquisition systems can be separated into two types, depending on the degree of user cooperation they require. The first type of system requires intensive user cooperation. Users are asked to adjust their head pose for the system to acquire an iris image [1], [7], [9], [10]. Kalka et al. have proposed a method to assess the quality of the acquired iris image [11]. The image acquisition process should be repeated until the image is qualified. Since the size of a human iris is very small, an iris camera is usually equipped with a high-magnification lens. Unfortunately, the fact that the depth of field of such a lens is small makes the acquired iris image sometimes not clear enough. Narayanswamy et al. have proposed a wavefront-coded imaging technique that uses an aspheric optical element and a digital decoder to increase the depth of field [12]. Also, image restoration techniques have been utilized to alleviate the influence of image blur [13]. The second type of system is able to control the orientation and/or the zoom and focus settings of the camera to acquire a satisfactory iris image [14]–[21]. Therefore, systems of the second type require less user cooperation and are more convenient than systems of the first type.

2) Iris Region Segmentation: The goal of iris region segmentation is to detect the boundaries of the iris pattern in an input image. The iris boundaries include the pupil (inner) boundary, the limbus (outer) boundary, and the boundaries of the upper and the lower eyelids. Based on the OVA, the inner and the outer boundaries of an iris pattern can be approximately modeled as non-concentric circles [1]. The upper and the lower eyelid boundaries are usually modeled as parabolas [4]. Popular methods for detecting the circular and the parabolic boundaries can be classified into two categories. One is the
projective transformations of iris images degrade the image quality, which will eventually reduce the system performance. Therefore, they proposed to enroll multiple iris images acquired at different off-axis angles. Experimental results showed that the multiview-enrollment approach achieved high accuracy. In fact, multiview enrollment can be avoided because the off-axis angle can be estimated. Dorairaj et al. proposed two methods using the Hamming distance and Daugman's IDO, respectively, as objective functions for estimating the off-axis angle [46]. However, both of their methods involve an image transformation in the optimization process. Hence, the computation cost is very high. Besides, both of the objective functions proposed in [46] are non-differentiable, which complicates the optimization problem. Zuo et al. also modified the IDO method to incorporate an elliptical iris boundary model for segmenting non-frontal irises [47]. However, their method is not very efficient due to the elliptical IDO computation. Also, because no extra RGB information is available, the accuracy of their method is subject to the influence of non-iris noise. Yahya et al. proposed to model the pupil boundary as an elliptical curve, but their purpose was not to compensate for the off-axis distortion, because the outer iris boundary model is assumed circular in the same work [48].

Daugman has recently proposed a Fourier-based method, developed under the weak-perspective assumption, for correcting the NOVA iris image distortion [49]. Consider an elliptical curve {(A cos(t), B sin(t)) | t ∈ [0, 2π]} centered at the origin and lying along the x-axis, where A and B are the semimajor and the semiminor axes, respectively. When A/B is close to unity, the parameter t is approximately equal to the polar angle θ = tan⁻¹(B tan(t)/A), and the coordinates of the elliptic curve (A cos(t), B sin(t)) can be approximately represented as (A cos(θ), B sin(θ)). Since the x- and y-coordinates are sinusoidal functions of θ, the parameters A and B can be estimated by computing the amplitudes of the sinusoidal functions using the discrete Fourier transform of the x- and y-coordinates, respectively (see the code sketch at the end of this section). This method is efficient and can be applied even when the pupil is not circular in shape. However, when the aspect ratio A/B deviates from unity, the estimated aspect ratio becomes less accurate. Ross and Shah proposed a level-set method for segmenting the iris area [50]. The level-set method can effectively deal with problems of local minima in finding the iris boundary, but its computation cost is very high. Pundlik et al. proposed a non-ideal iris segmentation method based on the graph cut algorithm [51]. Since the iris area segmented using the graph cut algorithm may have an irregular shape, they use a fixed-orientation elliptical iris model to estimate the iris boundary. Thus, their method can deal with the NOVA iris images in the ICE database, but cannot handle general NOVA images. Vatsa et al. proposed a two-stage iris segmentation method combining an IDO-like elliptical boundary detection stage with a level-set fine-tuning stage which is able to detect non-circular iris boundaries [52]. They also proposed a method to improve the iris image quality and the recognition performance. However, in their method, the segmented iris image is directly converted into a polar coordinate system without considering the off-axis image distortion. In Section III-B, we will show how to use the detected elliptical boundary to make the iris segmentation easier and more precise.

Notably, iris segmentation is a critical problem in NOVA iris recognition. Since the traditional iris boundary detection rules are no longer valid with NOVA, it is necessary to incorporate more information, e.g., multispectral iris images, to achieve robust iris segmentation results. Multispectral iris images can be acquired using either the sequential acquisition technique [53] or three-charge-coupled-device (3CCD) cameras [54], [55], and the latter is preferred since the images can be acquired simultaneously. Park et al. proposed to use multispectral infrared iris images and a gradient-based image fusion algorithm to detect counterfeit attacks [53]. Boyce et al. have studied the four-spectral iris analysis problem using a 3CCD camera [54], [55]. Their goal is to explore how multispectral iris images can be used to improve the iris recognition accuracy. They proposed an OVA iris segmentation method to roughly locate the iris boundaries using a four-spectral iris image. The four-spectral vectors of an iris pixel and a non-iris pixel are modeled as two Gaussian distributions, respectively, so the iris area can be classified using the Bayes classifier [55]. Post-processing of the resulting iris map using morphological operations and the median filter is required to remove misclassified pixels. The outer iris boundary is determined using only two points selected from the border of the filtered iris area. Since each pixel of the entire image has to be classified, the efficiency of their method is low. Furthermore, according to their results, the main advantage of using the RGB iris image is to improve the robustness of the iris segmentation results rather than the recognition accuracy. This result is to be expected, since it is very difficult to improve the near-100% accuracy of state-of-the-art NIR iris recognition systems. However, if the main purpose of using additional RGB channels is to improve the segmentation accuracy, then it is not necessary to obtain high-definition images in any of the RGB channels. Therefore, the number of CCD chips can be reduced and they do not have to be precisely aligned, which implies that a four-spectral iris imaging device that is simpler and more cost-effective than the 3CCD solution can be developed.

B. Contributions of This Paper

This paper proposes a NOVA iris recognition method. The contributions of this paper are summarized in the following.

1) A dual-CCD (DCCD) camera is developed which captures four-spectral iris images to simplify the NOVA iris segmentation problem.
2) A simple and efficient method to estimate the circle rectification parameters is proposed for rectifying a NOVA iris image into an OVA one using the detected pupil boundary.
3) A new iris segmentation method using the four-spectral iris images is proposed. Experimental results show that the new segmentation method is more robust and more accurate than both the IDO approach [1] and the HT approach [9], and improving the iris segmentation accuracy is crucial to NOVA iris recognition.
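To make the Fourier-based estimation described above concrete, the following minimal Python sketch recovers A and B from the fundamental DFT coefficients of the boundary coordinates. It assumes a roughly axis-aligned ellipse whose boundary is sampled uniformly in the angular parameter; the function name, sampling scheme, and test values are our own illustration, not code from [49].

```python
import numpy as np

def estimate_axes(x, y):
    """Estimate the semi-axes (A, B) of a roughly axis-aligned ellipse
    from boundary points sampled uniformly in the angular parameter."""
    n = len(x)
    # Remove the DC term (the ellipse centre) before transforming.
    x1 = np.fft.rfft(x - x.mean())[1]  # fundamental of the x-signal
    y1 = np.fft.rfft(y - y.mean())[1]  # fundamental of the y-signal
    # For x = xc + A*cos(theta), |X[1]| = A*n/2; similarly |Y[1]| = B*n/2.
    return 2 * abs(x1) / n, 2 * abs(y1) / n

# Synthetic check: a 40-by-30 ellipse sampled at 256 uniform angles.
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
A, B = estimate_axes(100 + 40 * np.cos(t), 80 + 30 * np.sin(t))
# A is approximately 40 and B approximately 30.
```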
Fig. 6. The left two columns show the original iris images. The right two columns show the rectified four-spectral iris images.

III. Iris Segmentation With Four-Spectral Iris Images

A. Pupil Boundary Detection

In general, the distance between the iris and the camera is much greater than the radius of the iris, and thus the weak-perspective projection is a reasonable camera model. Since the inner boundary is approximately circular, either NOVA or non-square pixels will make the acquired pupil approximately elliptic. Therefore, an inner iris boundary can be modeled as an elliptic curve. Notably, the difficulty of pupil detection is determined by the placement of the light sources, because glints in the cornea area will usually complicate the pupil detection problem. This paper uses an off-axis NIR light source (SONY HVL-IRH2) placed below the camera to produce the dark pupil effect, as shown in the first column of Fig. 6. The light source placement suppresses the NIR glint (refer to Fig. 6) and greatly simplifies the pupil detection problem. Our method for determining the pupil boundary is summarized in the following.

1) Determine a threshold value for segmenting the dark pupil using a method modified from [56]. A moving average operation is applied to the NIR image, and the lowest intensity value of the filtered result, denoted as µ_s, is recorded; it represents the average intensity of the darkest sub-image block. The threshold value is determined as t_p = (µ_s + µ_w)/2, where µ_w is the average intensity of the whole image. A binary image can be computed from the NIR image using the threshold t_p.

2) Pupil area candidates are computed using the connected component analysis technique. Dark areas which are too large or too small are discarded. Of the remaining dark areas, only the one closest to the image center is selected as the final pupil candidate.

3) The ellipse fitting algorithm proposed by Fitzgibbon et al. is used to estimate the coefficients of the following quadratic equation using the boundary of the final pupil candidate [57]:

    ax^2 + 2bxy + cy^2 + 2dx + 2ey + f = 0.    (1)

If the ellipse fitting error is less than a threshold, then the dark area is presumed to be the pupil area.

B. Circle Rectification

From the coefficients of (1), the following geometrical parameters of the pupil ellipse can be determined:

1) the ellipse center (x_e, y_e);
2) the major axis, defined by its length r_a and a unit orientation vector (a_x, a_y);
3) the minor axis, defined by its length r_b and a unit orientation vector (b_x, b_y).

Based on the weak-perspective assumption, an affine transformation determined by the above elliptic parameters can be applied to the iris image such that the rectified inner boundary approaches a circle. We call this image mapping the circle rectification process. The circle rectification process is useful for both OVA and NOVA iris images. For OVA iris images, the circle rectification process can unify the pixel aspect ratio, so iris images acquired from different cameras can be correctly processed. For NOVA iris images, determining the outer iris boundary in a circle-rectified image can be simplified to a circle detection problem with fewer parameters than the original ellipse detection problem. One way to perform circle rectification is to rotate the input image such that the major and the minor axes align with the vertical and the horizontal axes, respectively. The rotated image is then stretched to make the minor axis as long as the major one. Finally, the image is rotated back to its original orientation. The above three-step circle rectification process can be represented as an affine transformation matrix given by

    T = T_r T_s T_r^{-1}    (2)

where

    T_r = \begin{bmatrix} a_x & b_x & x_e \\ a_y & b_y & y_e \\ 0 & 0 & 1 \end{bmatrix}    (3)

and

    T_s = \begin{bmatrix} 1 & 0 & 0 \\ 0 & r_a/r_b & 0 \\ 0 & 0 & 1 \end{bmatrix}.    (4)

The third and the fourth columns of Fig. 6 show the results of the circle rectification process, while the first two columns are the original images. The rectified results verify the adequacy of the weak-perspective assumption. A short implementation sketch of this rectification is given below.

C. Intelligent Random Sample Consensus (RANSAC) Iris Segmentation Method

Most traditional iris segmentation methods use edge orientations to classify an image edge pixel as limbus boundary or eyelid boundary and, thus, they are not suitable for processing NOVA iris images. In this paper, we propose to use four-spectral measurements rather than edge orientations to classify different types of iris boundaries. In general, human skin, iris, and sclera possess very different spectral-reflectance properties. For example, human skin (independent of race) has relatively high reflectance for red light when compared with its reflectance for blue and green light [58]. The human iris has high reflectance for near infrared
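The pupil thresholding of Section III-A (step 1) and the rectification matrix of (2)–(4) can be sketched as follows. This is a minimal illustration assuming NumPy/OpenCV, a grayscale NIR image, and pupil-ellipse parameters already recovered from (1), e.g., via cv2.fitEllipse on the pupil blob contour; the moving-average block size and all names are our own choices, not the paper's tuned values.

```python
import cv2
import numpy as np

def pupil_threshold(nir, block=15):
    """Step 1 of Section III-A: t_p = (mu_s + mu_w) / 2, where mu_s is
    the lowest local mean of the NIR image and mu_w its global mean.
    The block size is an assumption, not a value from the paper."""
    mu_s = cv2.blur(nir, (block, block)).min()  # darkest sub-block mean
    mu_w = nir.mean()                           # whole-image mean
    return 0.5 * (mu_s + mu_w)

def rectification_matrix(xe, ye, theta, ra, rb):
    """Circle rectification of (2)-(4): T = Tr * Ts * Tr^{-1}, with
    (xe, ye) the ellipse centre, theta the major-axis orientation in
    radians, and ra >= rb the semi-axis lengths."""
    ax, ay = np.cos(theta), np.sin(theta)       # unit major-axis vector
    bx, by = -np.sin(theta), np.cos(theta)      # unit minor-axis vector
    Tr = np.array([[ax, bx, xe],
                   [ay, by, ye],
                   [0., 0., 1.]])
    Ts = np.diag([1., ra / rb, 1.])             # stretch minor axis to r_a
    return Tr @ Ts @ np.linalg.inv(Tr)

# Usage sketch: binarize, fit the pupil ellipse, then warp the image so
# the pupil becomes (approximately) circular.
# mask = (nir < pupil_threshold(nir)).astype(np.uint8)
# (xe, ye), (d1, d2), ang_deg = cv2.fitEllipse(contour)  # pupil contour
# theta = ...  # major-axis orientation in radians, derived from ang_deg
# T = rectification_matrix(xe, ye, theta, max(d1, d2) / 2, min(d1, d2) / 2)
# rectified = cv2.warpAffine(nir, T[:2], nir.shape[::-1])
```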
edge or a ridge edge. Therefore, this paper proposes to characterize an iris pattern using an edge-type descriptor [63]. Since the uncertainty of the estimated outer iris boundary is large due to noise and eyelid occlusion, the vertical location of a horizontal edge detected in a normalized iris image is not stable. Therefore, only vertical edges are considered in this paper. The edge-type flag of each site on an iris can be determined with the DoG and the LoG filters given by

    DoG_\ell(\rho, \phi) = \frac{\partial}{\partial \phi} e^{-\pi\left[\frac{\phi^2}{2\alpha_\ell^2} + \frac{\rho^2}{2\beta_\ell^2}\right]}    (5)

and

    LoG_\ell(\rho, \phi) = -\frac{\partial^2}{\partial \phi^2} e^{-\pi\left[\frac{\phi^2}{2\alpha_\ell^2} + \frac{\rho^2}{2\beta_\ell^2}\right]}    (6)

where \alpha_\ell and \beta_\ell determine the width and the height of the filters, respectively. The negative sign on the right-hand side of (6) is introduced such that the filtered response of a positive ridge edge is also positive. To obtain multiscale edge information, \alpha_\ell is set to \alpha_\ell = 2^\ell \alpha_0, where \ell is an integer representing the scale level and \alpha_0 is a design parameter. Conversely, \beta_\ell controls the smoothness of the filtered output in the radial direction. This parameter can be set to 2^\ell \beta_0, where \beta_0 is another design parameter. The genetic algorithm is used to design \alpha_0 and \beta_0 using a set of training iris images [63]. Since the LoG filter is separable, the 2-D filtering can be decomposed into two 1-D filtering passes. In pass one, the input image is smoothed along the radial direction, and in pass two, the variation along the angular direction of the smoothed image is calculated with a 1-D LoG (Mexican hat) filter. Notably, the multiscale 1-D LoG filters form a non-orthogonal continuous wavelet basis [64], which has been widely used in many computer vision applications. However, due to memory and computation time limitations, only a few levels of the LoG filters can be used to extract iris features. Since a finite set of LoG filters cannot form a complete basis, the DoG filters are adopted to provide supplementary information.

The edge-type flag is calculated by convolving the input image with the LoG/DoG filters. Let D_\ell(\rho, \phi) = DoG_\ell(\rho, \phi) * N(\rho, \phi) and L_\ell(\rho, \phi) = LoG_\ell(\rho, \phi) * N(\rho, \phi), where N is the normalized iris image and the subscript \ell denotes the scale level of the filters. Because both DoG and LoG are band-pass filters, high-frequency components of an image are filtered out. Thus, by the sampling theorem, the processed image can be sub-sampled without sacrificing any information. If the dimensions of the sub-sampled first-level edge intensity maps, i.e., D_1 and L_1, are H_1 × W_1, then the dimensions of the sub-sampled \ell-th level edge intensity maps will be H_1/(\beta_\ell/\beta_1) × W_1/(\alpha_\ell/\alpha_1).

When D_\ell(\rho, \phi) > 0 (L_\ell(\rho, \phi) > 0), the possibility that an \ell-th level step (ridge) edge is located at (\rho, \phi) is high. Therefore, we use two binary variables, S_\ell(\rho, \phi) = u(D_\ell(\rho, \phi)) and R_\ell(\rho, \phi) = u(L_\ell(\rho, \phi)), to record the computed edge-type flags, where u(·) is the unit step function. When the image intensity around (\rho, \phi) is approximately constant, both D_\ell(\rho, \phi) and L_\ell(\rho, \phi) are close to zero. In this case, the computed edge-type flags will be sensitive to noise. Two approaches can be used to remedy this problem. One is to increase the number of quantization levels of D_\ell(\rho, \phi) and L_\ell(\rho, \phi) from two to three, and the other is to use a mask flag to ignore the computation results at this location. While the former approach is attractive because it can encode the flat regions of the iris pattern, increasing the quantization levels will make the edge-type detection results sensitive to the contrast of the input iris image, which is undesirable. Hence, the second solution is preferable, and a mask flag is introduced for each of the Boolean variables. Thus, the \ell-th level edge-type descriptor can be expressed as (S_\ell, M_{S_\ell}, R_\ell, M_{R_\ell}), where M_{S_\ell} and M_{R_\ell} are the validity maps of S_\ell and R_\ell, respectively. If too many invalid pixels are involved in the convolution computation at (\rho, \phi), then both M_{S_\ell}(\rho, \phi) and M_{R_\ell}(\rho, \phi) should also be cleared.

Let n_\ell denote the number of different scale levels. The complete edge-type descriptor of an iris pattern can be represented as {F, M}, where the edge-type flag F and the mask M are defined as follows:

    F = \{S_\ell, R_\ell \mid \ell = 1, \ldots, n_\ell\}    (7)

and

    M = \{M_{S_\ell}, M_{R_\ell} \mid \ell = 1, \ldots, n_\ell\}.    (8)

For convenience, the complete edge-type descriptor is referred to as the ETCode hereafter.

B. Iris Recognition Using ETCodes

Each bit of the ETCode, S_\ell(\rho, \phi)/R_\ell(\rho, \phi), can be used to design a weak classifier which categorizes the people in the world into two groups: one group in which the people have a step/ridge edge at (\rho, \phi) in their iris, while the people in the other group do not. Although the recognition rate of a single weak classifier is not high, the accuracy can be improved by incorporating more and more such weak classifiers. This is a very popular technique known as classifier ensembles [65]. The use of edge features for iris recognition was first proposed by Daugman [1]. His IrisCode uses the phase response of Gabor filters, which can encode the edge features. Also, Sun et al. proposed to use ordinal filters to encode the edge features [33]. However, both the Gabor filter and the ordinal filter contain more design parameters than the LoG/DoG filters proposed in this paper. Therefore, designing optimal parameters using either the Gabor filter or the ordinal filter is more difficult than designing optimal LoG/DoG filters. The benefit of our approach is that the edge-type feature is very simple and it can achieve a recognition rate comparable to that of the state-of-the-art methods.

Assume that E_1 is an ETCode enrolled to the system and that E_2 is an input ETCode for identification. The distance between E_1 and E_2 is defined as the following normalized Hamming distance:

    D(E_1, E_2) = \frac{|(F_1 \oplus F_2) \cdot (M_1 \cdot M_2)|}{|M_1 \cdot M_2|}    (9)

where \oplus and \cdot are the bitwise-XOR and the bitwise-AND operations, respectively. When D(E_1, E_2) is smaller than a threshold, E_1 and E_2 are classified as belonging to the same iris class. The normalized Hamming distance reflects the proportion of weak classifiers indicating that the two ETCodes do not belong to the same class. However, since (9) does not consider the effect of the in-plane rotation of the eye, the matching result may be incorrect. When the in-plane rotation angle is \psi, the normalized iris image and its corresponding mask image can be represented as N(\rho, \phi + \psi) and N_M(\rho, \phi + \psi), respectively. Let the resulting ETCode be denoted by \tilde{E}_1^\psi. Notably, \tilde{E}_1^\psi can be predicted by rotating the edge-type descriptors F_1 and M_1 of E_1 horizontally such that if the level-one descriptor,

TABLE I
EER (%) of the CASIA-V3-Interval and UBIRIS-V1

                        LoG/DoG    Gabor
CASIA-V3-Interval        0.031     0.087
UBIRIS-V1                0.258     0.244
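To ground the encoding of (5)–(8) and the matching rule of (9), here is a minimal sketch. It assumes a normalized iris image with the radial coordinate along rows and the angular coordinate along columns; the discrete filter construction, parameter handling, and the omission of the validity-mask bookkeeping are simplifications of ours, not the authors' genetically tuned implementation.

```python
import numpy as np
from scipy.ndimage import convolve1d

def gaussian_1d(width, half):
    """Sampled Gaussian window exp(-pi * t^2 / (2 * width^2))."""
    t = np.arange(-half, half + 1, dtype=float)
    g = np.exp(-np.pi * t ** 2 / (2.0 * width ** 2))
    return g / g.sum()

def encode_level(norm_iris, alpha, beta):
    """One scale level of the ETCode: step bits S and ridge bits R."""
    half = int(3 * max(alpha, beta)) + 1
    # Pass 1: smooth along the radial (row) direction with the beta Gaussian.
    smoothed = convolve1d(norm_iris.astype(float),
                          gaussian_1d(beta, half), axis=0)
    # Pass 2: 1-D DoG (first derivative) and negated 1-D LoG (second
    # derivative, i.e., Mexican hat) along the angular (column) direction;
    # mode='wrap' reflects the 2*pi periodicity of the angular coordinate.
    g = gaussian_1d(alpha, half)
    dog = np.gradient(g)
    log = -np.gradient(np.gradient(g))
    D = convolve1d(smoothed, dog, axis=1, mode='wrap')
    L = convolve1d(smoothed, log, axis=1, mode='wrap')
    return D > 0, L > 0          # S_l = u(D_l), R_l = u(L_l)

def etcode_distance(F1, M1, F2, M2):
    """Normalized Hamming distance of (9) over Boolean bit arrays."""
    valid = M1 & M2
    return np.count_nonzero((F1 ^ F2) & valid) / np.count_nonzero(valid)
```

In-plane rotation can then be handled, as described in the text, by circularly shifting the angular axis of one code (np.roll(F1, s, axis=1), together with its mask) over a small range of shifts s and keeping the minimum distance.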
TABLE II
EER (%) of the Ten Configurations Evaluated at θ_h = 0°

                         Segmentation Methods
Filter Banks    IRIS-4    IDO-4    IDO-N    HT-4    HT-N
LoG/DoG         0         0        0        0.00    0.00
Gabor           0         0        0.00     0.01    0
Fig. 17. FNMRs and FMRs with respect to different off-axis angles, computed using five iris segmentation methods and two filter banks. (a) FNMR and (c) FMR of the LoG/DoG filter bank. (b) FNMR and (d) FMR of the Gabor filter bank.
[4] J. Daugman, “Statistical richness of visual phase information: Update on recognizing persons by iris patterns,” Int. J. Comput. Vision, vol. 45, no. 1, pp. 25–38, Oct. 2001.
[5] J. Daugman, “How iris recognition works,” IEEE Trans. Circuits Syst. Video Technol., vol. 14, no. 1, pp. 21–30, 2004.
[6] R. P. Wildes, J. C. Asmuth, G. L. Green, S. C. Hsu, R. J. Kolczynski, J. R. Matey, and S. E. McBride, “A system for automated iris recognition,” in Proc. IEEE Workshop Applicat. Comput. Vision, 1994, pp. 121–128.
[7] M. Dobeš, L. Machala, P. Tichavský, and J. Pospíšil, “Human eye iris recognition using the mutual information,” Int. J. Light Electrons Optics, vol. 115, no. 9, pp. 399–405, 2004.
[8] H. Proença and L. A. Alexandre, “UBIRIS: A noisy iris image database,” in Proc. Int. Conf. Image Anal. Process., 2005, pp. 970–977.
[9] R. P. Wildes, “Iris recognition: An emerging biometric technology,” Proc. IEEE, vol. 85, no. 9, pp. 1348–1363, Sep. 1997.
[10] Y. He, Y. Wang, and T. Tan, “Iris image capture system design for personal identification,” in Proc. Sinobiometrics, LNCS 3338. 2004, pp. 539–545.
[11] N. D. Kalka, J. Zuo, N. A. Schmid, and B. Cukic, “Image quality assessment for iris biometric,” in Proc. 3rd Int. Soc. Optical Engineers Conf. Biometric Technol. Human Identification, vol. 6202. 2006, pp. 62020D1–62020D11.
[12] R. Narayanswamy, G. E. Johnson, P. E. X. Silveira, and H. B. Wach, “Extending the imaging volume for biometric iris recognition,” Appl. Optics, vol. 44, no. 5, pp. 701–712, 2005.
[13] B. J. Kang and K. R. Park, “Real-time image restoration for iris recognition systems,” IEEE Trans. Syst. Man Cybern. B, Cybern., vol. 37, no. 6, pp. 1555–1566, 2007.
[14] K. R. Park, “New automated iris image acquisition method,” Appl. Optics, vol. 44, no. 5, pp. 713–734, 2005.
[15] K. R. Park and J. Kim, “A real-time focusing algorithm for iris recognition camera,” IEEE Trans. Syst. Man Cybern. C, Appl. Rev., vol. 35, no. 3, pp. 441–444, Aug. 2005.
[16] X. Liu, K. W. Bowyer, and P. J. Flynn, “Experiments with an improved iris segmentation algorithm,” in Proc. IEEE Workshop Autom. Identification Advanced Technol., 2005, pp. 118–123.
[17] LG Iris: Iris Recognition Technology [Online]. Available: http://www.lgiris.com
[18] Panasonic Security Products: Biometrics [Online]. Available: http://www.panasonic.com/business/security/products/biometrics.asp
[19] S. Itoda, M. Kikuchi, and S. Wada, “Fully automated image capturing-type iris recognition device IRISPASS-M,” Oki Tech. Review, vol. 73, no. 205, pp. 48–51, 2006.
[20] J. R. Matey, O. Naroditsky, K. Hanna, R. Kolczynski, D. J. LoIacono, S. Mangru, M. Tinker, T. M. Zappia, and W. Y. Zhao, “Iris on the move: Acquisition of images for iris recognition in less constrained environments,” Proc. IEEE, vol. 94, no. 11, pp. 1936–1947, Nov. 2006.
[21] X. He, J. Yan, G. Chen, and P. Shi, “Contactless autofeedback iris capture design,” IEEE Trans. Instrum. Meas., vol. 57, no. 7, pp. 1369–1375, Jul. 2008.
[22] C. H. Morimoto, D. Koons, A. Amir, and M. Flickner, “Pupil detection and tracking using multiple light sources,” Image Vision Computing, vol. 18, no. 4, pp. 331–335, 2000.
[23] C. H. Morimoto, T. T. Santos, and A. S. Muniz, “Automatic iris segmentation using active near infra red lighting,” in Proc. Brazilian Symp. Comput. Graph. Image Process., 2005, pp. 37–43.
[24] J. Thornton, M. Savvides, and B. V. Kumar, “A Bayesian approach to deformed pattern matching of iris images,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 4, pp. 596–606, Apr. 2007.
[25] Y. Zhu, T. Tan, and Y. Wang, “Biometric personal identification based on iris pattern,” in Proc. Int. Assoc. Pattern Recognition Int. Conf. Pattern Recognition, vol. 12. 2000, pp. 805–808.
[26] L. Ma, Y. Wang, and T. Tan, “Iris recognition based on multichannel Gabor filtering,” in Proc. Asian Conf. Comput. Vision, vol. 1. 2002, pp. 279–283.
[27] L. Ma, Y. Wang, and T. Tan, “Iris recognition using circular symmetric filters,” in Proc. Int. Assoc. Pattern Recognition Int. Conf. Pattern Recognition, vol. 2. 2002, pp. 414–417.
[28] L. Masek and P. Kovesi, “Recognition of human iris patterns for biometric identification,” B.E. thesis, School Comput. Sci. Softw. Eng., Univ. Western Australia, Perth, Australia, 2003.
[29] J. Huang, L. Ma, Y. Wang, and T. Tan, “Iris model based on local orientation description,” in Proc. Asian Conf. Comput. Vision, 2004, pp. 954–959.
[30] W. W. Boles and B. Boashash, “A human identification technique using images of the iris and wavelet transform,” IEEE Trans. Signal Process., vol. 46, no. 4, pp. 1185–1188, Apr. 1998.
[31] S. Lim, K. Lee, O. Byeon, and T. Kim, “Efficient iris recognition through improvement of feature vector and classifier,” Electron. Telecommun. Res. Inst. J., vol. 23, no. 2, pp. 61–70, 2001.
[32] C. Tisse, L. Martin, L. Torres, and M. Robert, “Person identification technique using human iris recognition,” in Proc. Int. Conf. Vision Interface, 2002, pp. 294–299.
[33] Z. Sun, T. Tan, and Y. Wang, “Robust encoding of local ordinal measures: A general framework of iris recognition,” in Proc. Eur. Conf. Comput. Vision Workshop Biometric Authentication, 2004, pp. 270–282.
[34] Z. He, Z. Sun, T. Tan, X. Qiu, C. Zhong, and W. Dong, “Boosting ordinal features for accurate and fast iris recognition,” in Proc. IEEE Comput. Soc. Conf. Comput. Vision Pattern Recognition, 2008, pp. 1–8.
[35] L. Ma, T. Tan, Y. Wang, and D. Zhang, “Efficient iris recognition by characterizing key local variations,” IEEE Trans. Image Process., vol. 13, no. 6, pp. 739–750, 2004.
[36] L. Ma, T. Tan, D. Zhang, and Y. Wang, “Local intensity variation analysis for iris recognition,” Pattern Recognition, vol. 37, no. 6, pp. 1287–1298, 2004.
[37] Z. Sun, Y. Wang, T. Tan, and J. Cui, “Robust direction estimation of gradient vector field for iris recognition,” in Proc. Int. Assoc. Pattern Recognition Int. Conf. Pattern Recognition, vol. 2. 2004, pp. 783–786.
[38] Z. Sun, Y. Wang, T. Tan, and J. Cui, “Improving iris recognition accuracy via cascaded classifiers,” IEEE Trans. Syst. Man Cybern. C, Appl. Rev., vol. 35, no. 3, pp. 435–441, Aug. 2005.
[39] Y.-P. Huang, S.-W. Luo, and E.-Y. Chen, “An efficient iris recognition system,” in Proc. Int. Conf. Mach. Learning Cybern., vol. 1. 2002, pp. 450–454.
[40] S.-I. Noh, K. Bae, K. R. Park, and J. Kim, “A new iris recognition method using independent component analysis,” Inst. Electron. Inf. Commun. Engineers Trans. Inf. Syst., vol. E88-D, no. 11, pp. 2573–2581, 2005.
[41] D. M. Monro, S. Rakshit, and D. Zhang, “DCT-based iris recognition,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 4, pp. 586–595, Apr. 2007.
[42] Iris Challenge Evaluation [Online]. Available: http://iris.nist.gov/ice
[43] R. M. Gaunt, B. L. Bonney, R. W. Ives, D. M. Etter, and R. C. Schultz, “Collection and segmentation of non-orthogonal irises for iris recognition,” in Proc. Biometric Consortium Conf., 2005 [Online]. Available: http://www.biometrics.org/bc2005/Presentations/Conference/1%20Monday%20September%2019/Poster%20Session/Gaunt−1568964530.pdf
[44] B. Bonney, R. Ives, D. Etter, and Y. Du, “Iris pattern extraction using bit-plane analysis,” in Proc. IEEE Annu. Asilomar Conf. Signals Syst. Comput., vol. 1. 2004, pp. 582–586.
[45] A. Abhyankar, L. Hornak, and S. Schuckers, “Off-angle iris recognition using bi-orthogonal wavelet network system,” in Proc. IEEE Workshop Autom. Identification Advanced Technol., 2005, pp. 239–244.
[46] V. Dorairaj, N. A. Schmid, and G. Fahmy, “Performance evaluation of non-ideal iris based recognition system implementing global ICA encoding,” in Proc. IEEE Int. Conf. Image Process., vol. 3. 2005, pp. 285–288.
[47] J. Zuo, N. D. Kalka, and N. A. Schmid, “A robust iris segmentation procedure for unconstrained subject presentation,” in Proc. Biometrics Symp. Special Session Research Biometric Consortium Conf., 2006, pp. 1–6.
[48] A. E. Yahya and M. J. Nordin, “A new technique for iris localization in iris recognition systems,” Information Technol. J., vol. 7, no. 6, pp. 924–929, 2008.
[49] J. Daugman, “New methods in iris recognition,” IEEE Trans. Syst. Man Cybern. B, Cybern., vol. 37, no. 5, pp. 1167–1175, Oct. 2007.
[50] A. Ross and S. Shah, “Segmenting non-ideal irises using geodesic active contours,” in Proc. Biometrics Symp. Special Session Res. Biometric Consortium Conf., 2006, pp. 1–6.
[51] S. J. Pundlik, D. L. Woodard, and S. T. Birchfield, “Non-ideal iris segmentation using graph cuts,” in Proc. IEEE Comput. Soc. Conf. Comput. Vision Pattern Recognition Workshop, 2008, pp. 1–6.
[52] M. Vatsa, R. Singh, and A. Noore, “Improving iris recognition performance using segmentation, quality enhancement, match score fusion, and indexing,” IEEE Trans. Syst. Man Cybern. B, Cybern., vol. 38, no. 4, pp. 1021–1035, 2008.
[53] J. H. Park and M. G. Kang, “Iris recognition against counterfeit attack using gradient based fusion of multispectral images,” in Proc. Int. Workshop Biometric Recognition Syst., 2005, pp. 150–156.
[54] C. Boyce, A. Ross, M. Monaco, L. Hornak, and X. Li, “Multispectral iris analysis: A preliminary study,” in Proc. IEEE Comput. Soc. Conf. Comput. Vision Pattern Recognition Workshop, 2006, pp. 51–59.
[55] C. K. Boyce, “Multispectral iris recognition analysis: Techniques and evaluation,” Master's thesis, Dept. Comput. Sci. Electr. Eng., West Virginia Univ., Morgantown, WV, Dec. 2006.
[56] A. Basit and M. Y. Javed, “Localization of iris in gray scale images using intensity gradient,” Optics Lasers Eng., vol. 45, no. 12, pp. 1107–1114, Dec. 2007.
[57] A. W. Fitzgibbon, M. Pilu, and R. B. Fisher, “Direct least square fitting of ellipse,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 21, no. 5, pp. 476–480, May 1999.
[58] E. Angelopoulou, “The reflectance spectrum of human skin,” Dept. Comput. Information Sci., Univ. Pennsylvania, Philadelphia, PA, Tech. Rep. MS-CIS-99-29, 1999.
[59] M. Vilaseca, R. Mercadal, J. Pujol, M. Arjona, M. de Lasarte, R. Huertas, M. Melgosa, and F. H. Imai, “Characterization of the human iris spectral reflectance with a multispectral imaging system,” Appl. Optics, vol. 47, no. 30, pp. 5622–5630, 2008.
[60] A. Vogel, C. Dlugos, R. Nuffer, and R. Birngruber, “Optical properties of human sclera, and their consequences for transscleral laser applications,” Lasers Surgery Med., vol. 11, no. 4, pp. 313–394, 1991.
[61] M. A. Fischler and R. C. Bolles, “Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography,” Commun. ACM, vol. 24, no. 6, pp. 381–395, 1981.
[62] J. Canny, “A computational approach to edge detection,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 8, no. 6, pp. 679–698, Nov. 1986.
[63] C.-T. Chou, S.-W. Shih, W.-S. Chen, and V. W. Cheng, “Iris recognition with multiscale edge-type matching,” in Proc. Int. Assoc. Pattern Recognition Int. Conf. Pattern Recognition, vol. 4. 2006, pp. 545–548.
[64] I. Daubechies, “Discrete wavelet transforms: Frames,” in Ten Lectures on Wavelets. Philadelphia, PA: Society for Industrial and Applied Mathematics (SIAM), 1992, ch. 3, p. 75.
[65] D. Opitz and R. Maclin, “Popular ensemble methods: An empirical study,” J. Artif. Intell. Res., vol. 11, pp. 169–198, Aug. 1999.
[66] A. J. Mansfield and J. L. Wayman, “Best practices in testing and reporting performance of biometric devices,” Center Math. Sci. Computing, Nat. Phys. Lab., Teddington, Middlesex, U.K., Tech. Rep. NPL CMSC 14/02, 2002.
[67] CASIA Iris Image Database, Center for Biometrics and Security Research, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, China [Online]. Available: http://www.sinobiometrics.com
[68] UBIRIS Noisy Visible Wavelength Iris Image Databases, Soft Computing and Image Analysis Group, Department of Computer Science, University of Beira Interior, Portugal [Online]. Available: http://iris.di.ubi.pt
[69] C. Cortes and V. Vapnik, “Support-vector networks,” Mach. Learning, vol. 20, no. 3, pp. 273–297, 1995.
[70] R.-E. Fan, P.-H. Chen, and C.-J. Lin, “Working set selection using second order information for training SVM,” J. Mach. Learning Res., vol. 6, pp. 1889–1918, Dec. 2005.
[71] H. Proença, S. Filipe, R. Santos, J. Oliveira, and L. A. Alexandre, “The UBIRIS.v2: A database of visible wavelength iris images captured on-the-move and at-a-distance,” IEEE Trans. Pattern Anal. Mach. Intell., 2009, submitted for publication.

Chia-Te Chou received the B.S. and M.S. degrees in computer engineering from National Chi Nan University, Puli, Nantou, Taiwan, in 2003 and 2005, respectively, where he is currently pursuing the Ph.D. degree at the Department of Computer Science and Information Engineering. His research interests include biometrics and human–computer interaction.

Sheng-Wen Shih (M'97) received the M.S. and Ph.D. degrees in electrical engineering from National Taiwan University, Taipei City, Taiwan, in 1990 and 1996, respectively. From January to July 1997, he was a Postdoctoral Associate with the Image Formation and Processing Laboratory, Beckman Institute, University of Illinois at Urbana-Champaign. In August 1997, he joined the Department of Computer Science and Information Engineering, National Chi Nan University, Puli, Nantou, Taiwan, as an Assistant Professor. He became an Associate Professor in 2003. His research interests include computer vision, biometrics, and human–computer interaction.

Wen-Shiung Chen received the M.S. degree from National Taiwan University, Taipei City, Taiwan, in 1985, and the Ph.D. degree from the University of Southern California, Los Angeles, in 1994, both in electrical engineering. He was with the Department of Electrical Engineering, Feng Chia University, Taichung, Taiwan, from 1994 to 2000. In 2000, he joined the Department of Electrical Engineering, National Chi Nan University, Puli, Nantou, Taiwan, where he is currently a Professor. His research interests include image processing, pattern recognition, computer vision, biometrics, mobile computing, and networking.

Victor W. Cheng received the B.S. degree in electrical engineering from National Taiwan University, Taipei City, Taiwan, in 1989, and the M.S. and Ph.D. degrees, both in electrical engineering and computer science, from the University of Michigan, Ann Arbor, in 1995 and 2000, respectively. He is currently a Faculty Member with the Department of Computer Science and Information Engineering, National Chi Nan University, Puli, Nantou, Taiwan. His research interests include sensor networking, error-correcting codes, code-division multiple access, and 3-D surface scanning.

Duan-Yu Chen received the B.S. degree in computer science and information engineering from National Chiao-Tung University, Hsinchu, Taiwan, in 1996, the M.S. degree in computer science from National Sun Yat-Sen University, Kaohsiung, Taiwan, in 1998, and the Ph.D. degree in computer science and information engineering from National Chiao-Tung University in 2004. He was a Postdoctoral Research Fellow with Academia Sinica, Taipei, Taiwan, from 2004 to 2008. He is currently an Assistant Professor with the Department of Electrical Engineering, Yuan Ze University, Chungli, Taiwan. His research interests include computer vision, video signal processing, content-based video indexing and retrieval, and multimedia information systems.