
DeepIrisNet: DEEP IRIS REPRESENTATION WITH APPLICATIONS IN IRIS RECOGNITION AND CROSS-SENSOR IRIS RECOGNITION

Abhishek Gangwar, Akanksha Joshi

Centre for Development of Advanced Computing (CDAC), India

{abhishek, akanksha}@cdac.in

ABSTRACT

Despite significant advances in iris recognition (IR), efficient and robust IR at scale and in non-ideal conditions still presents serious performance issues and remains an ongoing research topic. Deep Convolutional Neural Networks (DCNNs) are powerful visual models that have reported state-of-the-art performance in several domains. In this paper, we propose a deep learning based method, termed DeepIrisNet, for iris representation. The proposed approach is based on a very deep architecture and various tricks from recent successful CNNs. Experimental analysis reveals that the proposed DeepIrisNet can model the micro-structures of the iris very effectively, and provides a robust, discriminative, compact, and very easy-to-implement iris representation that obtains state-of-the-art accuracy. Furthermore, we evaluate our iris representation for cross-sensor IR. The experimental results demonstrate that DeepIrisNet models obtain a significant improvement in cross-sensor recognition accuracy as well.

Index Terms— CNN, iris recognition, cross-sensor iris recognition, deep iris representation, deep learning

1. INTRODUCTION

The first complete and automated iris recognition system was presented by Daugman [1] in 1993. Over the past few years, with an increasing emphasis on security, iris recognition (IR) has gained a lot of prominence in person recognition. Currently, it is being used in various large-scale deployments such as Aadhaar in India, Amsterdam's Schiphol airport, and the US/Canadian borders. Although a lot of progress has been made in the development of highly improved IR, in practical applications IR still faces challenges, especially if image acquisition is not constrained [5,7]. Non-ideal iris images, captured in less constrained conditions, normally degrade IR performance because of improper iris segmentation or low quality of the iris texture [5,9]. Another challenging issue with large-scale IR deployments is that enrollment is often done with one sensor while authentication is performed using a different sensor. Interoperability is also desired in large-scale IR because the stored templates are used for a very long time and the system should support sensor upgrades. Several studies have observed that IR performance is degraded in the case of cross-sensor iris matching [25,28]. A few researchers have addressed this issue and proposed solutions [24,26,27]. Much ongoing research in IR aims to obtain effective feature representations that reduce intra-personal differences while maximizing inter-personal variations. Most previous works on iris representation are based on hand-crafted features [1,3,6,10]; some learning-based approaches [11-14] have also been explored, but they are based on shallow architectures.

Recent progress in deep learning, in particular Convolutional Neural Networks (CNNs) [8,17], has taken the computer vision community by storm, and in many applications the state-of-the-art accuracies have been significantly advanced by the emergence of deep learning. The key reasons behind the highly encouraging performance of convolutional neural networks are their superb learning capability, the availability of large data sets for training, improved algorithms, and improved network architectures. Motivated by the success of deep networks in other applications, in this paper we propose an efficient deep CNN architecture named DeepIrisNet which can handle large-scale iris data with complex distributions. Deep learning based approaches in iris biometrics have recently been explored in a few works [4,30,31], but their optimization objectives are not directly related to iris identity, and their nets are significantly less deep than ours. To the best of our knowledge, this is the first use of a DCNN-based approach for iris representation and recognition. DeepIrisNet is designed carefully for better utilization of computing resources and an optimal iris representation. It integrates the most popular components from recent successful CNNs, such as dropout learning [20], small filter sizes [19], very deep architectures [16,18,19], rectified linear non-linearity (ReLU) [21], and batch normalization [23]. As a result, we arrive at a significantly more accurate DCNN architecture, which not only achieves state-of-the-art IR accuracy but also generalizes well to different iris datasets. In addition, the experiments demonstrate that the deeply learned iris pattern (DLIP) produced by the proposed DeepIrisNet is robust to sensor interoperability and to small segmentation and transformation variations.

The rest of the paper is organized as follows. Section 2 explains the architecture and other details of the proposed DeepIrisNet. Details about the experiments and datasets used are given in Section 3, and the conclusion in Section 4.



2. DeepIrisNet ARCHITECTURE AND TRAINING

In this section, we provide the generic layout of our two best performing CNN architectures, DeepIrisNet-A and DeepIrisNet-B, their training, and the features used in the experimental analysis. DeepIrisNet-A is based on standard convolutional layers [17], while DeepIrisNet-B utilizes stacked inception layers [18]. Relative to the data available for training, very shallow or overly deep architectures may result in underfitting or overfitting, respectively; hence, designing a good CNN architecture is crucial to achieving an efficient and effective iris representation. Inspired by the very deep architectures proposed in [16,18,19], our DeepIrisNets are also very deep, comprising a large number of convolution/inception layers. The architectures of the two networks are given in Table 1 and Table 2, respectively. DeepIrisNet-A contains 8 convolutional layers (conv1 to conv8), each followed by batch normalization [23]. There are 4 pooling layers, and pooling is performed after every two convolutional layers (with batch normalization in between). DeepIrisNet-B is designed by first stacking convolutional layers (conv1 to conv5) and then two inception layers (inception6 and inception7). The first two pooling operations (pool1 and pool2) are applied after every two convolutional layers, whereas conv5 is directly followed by pooling (pool3).

TABLE 1: Architecture of DeepIrisNet-A
Name    Type/Kernel/Stride    Output size   #params
Conv1   Convolution /5×5/1    124x124x32    0.8k
BN1     Batch Normalization   124x124x32
Conv2   Convolution /3×3/1    122x122x64    18k
Pool1   Max-Pooling /2×2/2    61x61x64
BN2     Batch Normalization   61x61x64
Conv3   Convolution /3×3/1    59x59x128     73k
BN3     Batch Normalization   59x59x128
Conv4   Convolution /3×3/1    59x59x192     221k
Pool2   Max-Pooling /2×2/2    29x29x192
BN4     Batch Normalization   29x29x192
Conv5   Convolution /3×3/1    27x27x256     442k
BN5     Batch Normalization   27x27x256
Conv6   Convolution /3×3/1    25x25x320     737k
Pool3   Max-Pooling /2×2/2    12x12x320
BN6     Batch Normalization   12x12x320
Conv7   Convolution /3×3/1    10x10x480     1382k
BN7     Batch Normalization   10x10x480
Conv8   Convolution /3×3/1    8x8x512       2212k
Pool4   Max-Pooling /2×2/2    4x4x512
BN8     Batch Normalization   4x4x512
FC9     Fully Connected       4096          33558k
Drop10  Dropout (50%)
FC11    Fully Connected       4096          16781k
Drop12  Dropout (50%)
FC13    Fully Connected       #classes
Cost14  Softmax

TABLE 2: Architecture of DeepIrisNet-B
Name        Type/Kernel/Stride                        Output size  #params
Conv1-BN1-Conv2-Pool1-BN2-Conv3-BN3-Conv4-Pool2-BN4-Conv5
            (Conv1 to Conv5 same as in DeepIrisNet-A) 27x27x256    757k
Pool3       Max-Pooling /2×2/2                        13x13x256
BN5         Batch Normalization                       13x13x256
Inception6  #1×1-128, #3×3-192, #5×5-96,
            #3×3reduce-64, #5×5reduce-32,
            pool-avg, #proj-64                        13x13x480    380k
BN6         Batch Normalization                       13x13x480
Inception7  #1×1-192, #3×3-208, #5×5-48,
            #3×3reduce-96, #5×5reduce-48,
            pool-max(2), #proj-64                     6x6x512      364k
BN7         Batch Normalization                       6x6x512
Pool4       Max-Pooling /2×2/2                        3x3x512
FC8         Fully Connected                           4096         18878k
Drop9       Dropout (50%)
FC10        Fully Connected                           4096         16781k
Drop11      Dropout (50%)
FC12        Fully Connected                           #classes
Cost13      Softmax

In both configurations, we normally utilize very small convolution kernels of size 3×3 (with stride 1 and padding 0). We also use filters of size 5×5 in the first layer of DeepIrisNet-A and inside the inception layers of DeepIrisNet-B. Throughout both networks, max-pooling is performed over a 2×2 pixel window with stride 2. The top three layers are fully connected, i.e., each output neuron is connected to all inputs; the configuration of the fully connected (FC) layers is the same in both networks. The output of the last fully connected layer is fed to a C-way softmax (where C is the number of classes), which produces a distribution over the class labels. For regularization, weight decay is set to 0.0005, and dropout at a rate of 0.5 is used after the first and second fully connected layers. The maximum width of the convolutional layers (number of channels) is 512. The filter weights are initialized from a zero-mean Gaussian distribution with standard deviation 0.01, and biases are initialized to zero. The learning rate is set to 0.01 for all networks and then decreased by a factor of 10 whenever the validation error rate stops improving. We use the Rectified Linear Unit (ReLU) activation function in all hidden layers.

During training, optimization is performed using stochastic gradient descent (SGD) with momentum set to 0.9; the gradients are computed using back-propagation. Hyper-parameters are selected using the ND-0405 [2] database and are fixed in all experiments. The input to DeepIrisNet is a gray-scale iris image of size 128×128 (Fig. 1) without any preprocessing. The nets are trained using only a single scale, and no data augmentation is performed. After each training epoch, we observe the error on the validation dataset and select the model that provides the lowest validation error.

During testing, the softmax classifier layer is removed and the rest of DeepIrisNet is used as a fixed feature extractor. We use the real-valued output vector of the second fully connected layer as the iris representation feature vector (4096-D). The similarity score is computed using the Euclidean distance. A sketch of this configuration is given below.
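To make the configuration in Table 1 concrete, the following is a minimal PyTorch sketch of DeepIrisNet-A (our illustration, not the authors' released code). The exact BN/ReLU ordering and the padding on conv4 (required to reproduce the 59×59 output listed in Table 1) are our assumptions.

```python
import torch
import torch.nn as nn

class DeepIrisNetA(nn.Module):
    """Sketch of Table 1; input is a (N, 1, 128, 128) gray iris image."""
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 5), nn.ReLU(True), nn.BatchNorm2d(32),     # Conv1/BN1 -> 124x124x32
            nn.Conv2d(32, 64, 3), nn.ReLU(True),                        # Conv2     -> 122x122x64
            nn.MaxPool2d(2), nn.BatchNorm2d(64),                        # Pool1/BN2 -> 61x61x64
            nn.Conv2d(64, 128, 3), nn.ReLU(True), nn.BatchNorm2d(128),  # Conv3/BN3 -> 59x59x128
            nn.Conv2d(128, 192, 3, padding=1), nn.ReLU(True),           # Conv4     -> 59x59x192 (assumed pad=1)
            nn.MaxPool2d(2), nn.BatchNorm2d(192),                       # Pool2/BN4 -> 29x29x192
            nn.Conv2d(192, 256, 3), nn.ReLU(True), nn.BatchNorm2d(256), # Conv5/BN5 -> 27x27x256
            nn.Conv2d(256, 320, 3), nn.ReLU(True),                      # Conv6     -> 25x25x320
            nn.MaxPool2d(2), nn.BatchNorm2d(320),                       # Pool3/BN6 -> 12x12x320
            nn.Conv2d(320, 480, 3), nn.ReLU(True), nn.BatchNorm2d(480), # Conv7/BN7 -> 10x10x480
            nn.Conv2d(480, 512, 3), nn.ReLU(True),                      # Conv8     -> 8x8x512
            nn.MaxPool2d(2), nn.BatchNorm2d(512),                       # Pool4/BN8 -> 4x4x512
        )
        self.fc9 = nn.Linear(4 * 4 * 512, 4096)
        self.fc11 = nn.Linear(4096, 4096)         # output = 4096-D iris representation
        self.fc13 = nn.Linear(4096, num_classes)  # softmax head, removed at test time
        self.drop = nn.Dropout(0.5)               # Drop10 / Drop12
        for m in self.modules():                  # zero-mean Gaussian init (std 0.01), zero biases
            if isinstance(m, (nn.Conv2d, nn.Linear)):
                nn.init.normal_(m.weight, 0.0, 0.01)
                nn.init.zeros_(m.bias)

    def forward(self, x):
        x = self.features(x).flatten(1)
        x = self.drop(torch.relu(self.fc9(x)))
        feat = torch.relu(self.fc11(x))           # deep iris feature vector
        return self.fc13(self.drop(feat)), feat

model = DeepIrisNetA(num_classes=400)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=0.0005)

# At test time, similarity between two irises is the Euclidean distance
# between their FC11 features (a lower distance means a better match).
f1 = model(torch.randn(1, 1, 128, 128))[1]
f2 = model(torch.randn(1, 1, 128, 128))[1]
score = torch.dist(f1, f2)
```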
3. EXPERIMENTS AND RESULTS

A series of experiments was carried out to evaluate the performance and robustness of the DeepIrisNets using two publicly available iris databases: ND-iris-0405 [2] and ND-CrossSensor-Iris-2013 [2]. ND-iris-0405 contains 64,980 images from 356 subjects, captured using the LG2200 iris camera.


The ND-CrossSensor-Iris-2013 database contains 29,986 images from LG4000 and 116,564 images from LG2200 for 676 unique subjects. Most of the subjects in ND-0405 and in the LG2200 subset of the CrossSensor database are different. These are the largest publicly available iris databases and are used in our experiments due to their appropriateness to the research problem addressed in this paper.

TABLE 3: Details of Experiments and Evaluation Datasets

                 TRAINING AND VALIDATION                   TESTING
Exp.  Data set        Unique   Train     Val.     Target   Unique  Target  Query    Unique  Query  Match   Non-match
                      irises   size      size     set      irises  size    set      irises  size   scores  scores
1     ND-0405         400      49152     5856     ND-0405  192     2327    ND-0405  312     7400   138k    17081k
2     LG2200          891      92335     10910    LG2200   279     2790    LG2200   461     9589   115k    26637k
3     LG4000          573      19279     2012     LG4000   445     1678    LG4000   779     6742   33k     11279k
4     LG2200          891      92335     10910    LG2200   223     2686    LG4000   223     2220   217k    5740k
5     LG2200+LG4000   100+100  1000+1000 200+200  LG2200   223     2686    LG4000   223     2220   217k    5740k
6     LG2200+ND-0405  1269     142937    16924    ND-0405  192     2327    ND-0405  312     7400   138k    17081k
7     LG2200+ND-0405  1269     142937    16924    LG2200   279     2790    LG2200   461     9589   115k    26637k

Figure 1: Iris Image preparation for DeepIrisNet.
3.1. Dataset Splits (Forming Training and Testing Sets)

For recognition performance evaluation we designed Exp1, Exp2, and Exp3, as given in Table 3. To prepare the datasets for these experiments, the left and right iris images are first assigned different class labels in all databases. Next, each database is partitioned into two parts, Part1 and Part2, with disjoint class labels. Part1 is used to generate the train and validation sets, which are used for model selection, and Part2 is used to create the test sets, i.e., for performance reporting. The number of images per subject is not evenly distributed: some subjects have a large number of images and some have very few. We therefore sorted the identities by their number of images; identities with more images are assigned to Part1 and those with fewer images are assigned to Part2. Part2 is further divided into two parts, Query and Target. Images in the target set represent images known to the system, and images in the query set represent unknown images presented to the system for recognition. To support the unseen pair matching problem, around 50% of the identities in the query set do not appear in the target set. A sketch of this procedure follows below.
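The split procedure can be sketched as follows; the helper below is our illustration, and the validation fraction and the per-identity target/query allocation are assumptions (the paper does not specify them).

```python
import random

def split_dataset(images_by_class, part1_fraction=0.5, val_fraction=0.1,
                  unseen_fraction=0.5, seed=0):
    """images_by_class maps class label -> list of image paths; left and
    right irises already carry distinct class labels (Sec. 3.1)."""
    rng = random.Random(seed)
    # Identities sorted by image count: image-rich classes form Part1.
    ordered = sorted(images_by_class,
                     key=lambda c: len(images_by_class[c]), reverse=True)
    cut = int(len(ordered) * part1_fraction)
    part1, part2 = ordered[:cut], list(ordered[cut:])

    # Part1 -> train/validation sets (used only for model selection).
    train, val = [], []
    for c in part1:
        imgs = list(images_by_class[c])
        rng.shuffle(imgs)
        n_val = max(1, int(len(imgs) * val_fraction))
        val += [(p, c) for p in imgs[:n_val]]
        train += [(p, c) for p in imgs[n_val:]]

    # Part2 -> target (enrolled) and query sets; about half of the query
    # identities never appear in the target set (unseen pair matching).
    rng.shuffle(part2)
    n_unseen = int(len(part2) * unseen_fraction)
    unseen, seen = part2[:n_unseen], part2[n_unseen:]
    target, query = [], []
    for c in seen:
        imgs = images_by_class[c]
        half = len(imgs) // 2
        target += [(p, c) for p in imgs[:half]]
        query += [(p, c) for p in imgs[half:]]
    for c in unseen:  # query-only identities, unseen during enrollment
        query += [(p, c) for p in images_by_class[c]]
    return train, val, target, query
```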
Further, to evaluate cross-sensor matching, we designed Exp4 and Exp5, and to evaluate performance on enlarged training sets, we created Exp6 and Exp7.

For performance evaluation, accuracies are reported using ROC curves and the verification rate (VR). The accuracies are computed using the query and target sets created in the respective experiment.
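As an illustration of this reporting protocol, a verification rate at a fixed FAR can be computed from the genuine (match) and impostor (non-match) Euclidean-distance scores as in this sketch (ours, not from the paper):

```python
import numpy as np

def verification_rate(match_scores, nonmatch_scores, far=0.001):
    """VR at a fixed FAR for distance scores (lower = more similar): accept
    at the threshold that admits a fraction `far` of impostor pairs."""
    thr = np.quantile(nonmatch_scores, far)
    return float(np.mean(np.asarray(match_scores) <= thr))

# Example: VR at FAR = 0.1%, as reported in Tables 4-8.
# vr = verification_rate(genuine_distances, impostor_distances, far=0.001)
```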
3.2. Iris Segmentation and Normalization

For iris segmentation, we used the freely available Osiris v4.1 system [6]. The normalization of the iris region to polar coordinates (mapping to a rectangular region) is done using Daugman's rubber sheet model [1] (shown in Fig. 1).
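For reference, a simple nearest-neighbor sketch of the rubber sheet remapping is given below. The circle-parameter interface is our assumption; production implementations such as Osiris also handle occlusion masks and non-concentric boundaries.

```python
import numpy as np

def rubber_sheet(image, pupil_xy, pupil_r, iris_xy, iris_r,
                 out_h=64, out_w=256):
    """Remap the annular iris region to a fixed out_h x out_w rectangle by
    interpolating between the pupil and iris boundary circles."""
    out = np.zeros((out_h, out_w), dtype=image.dtype)
    radii = np.linspace(0.0, 1.0, out_h)                   # r in [0, 1]
    thetas = np.linspace(0.0, 2 * np.pi, out_w, endpoint=False)
    for i, r in enumerate(radii):
        for j, t in enumerate(thetas):
            # Point at fraction r between the two boundaries along angle t.
            x = (1 - r) * (pupil_xy[0] + pupil_r * np.cos(t)) \
                + r * (iris_xy[0] + iris_r * np.cos(t))
            y = (1 - r) * (pupil_xy[1] + pupil_r * np.sin(t)) \
                + r * (iris_xy[1] + iris_r * np.sin(t))
            yi = min(max(int(round(y)), 0), image.shape[0] - 1)
            xi = min(max(int(round(x)), 0), image.shape[1] - 1)
            out[i, j] = image[yi, xi]                      # nearest-neighbor sample
    return out
```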
3.3. Baseline Iris Recognition Algorithm

To perform a comparative performance analysis, we adopted a well-known Gabor-based IR pipeline as the baseline approach. In this approach, the iris texture is segmented using the Osiris system and normalization is performed with Daugman's rubber sheet model [1]; all irises are normalized into rectangles of 64x256 pixels. Features are extracted using a 1-D Log-Gabor filter, and the similarity score is computed using the Hamming distance between shifted (-8° to +8°) templates [1].
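A minimal sketch of the baseline's shifted Hamming-distance matching follows, assuming binary iris codes and validity masks as boolean arrays; the mask handling follows common IrisCode practice rather than a specification in this paper.

```python
import numpy as np

def shifted_hamming_distance(code1, mask1, code2, mask2, max_shift=8):
    """Minimum masked Hamming distance over circular angular shifts.
    codes/masks: 2-D boolean arrays (radial rows x angular bit columns)."""
    best = 1.0
    for s in range(-max_shift, max_shift + 1):
        c2 = np.roll(code2, s, axis=1)        # shift the template, not the image
        m2 = np.roll(mask2, s, axis=1)
        valid = mask1 & m2                    # bits valid in both templates
        n = valid.sum()
        if n == 0:
            continue
        hd = np.count_nonzero((code1 ^ c2) & valid) / n
        best = min(best, hd)
    return best
```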


3.4. Performance Evaluation

3.4.1. Single-Sensor Matching
We conducted three experiments to evaluate the single-sensor matching performance of the proposed DeepIrisNets: Exp1 (ND-0405 vs. ND-0405), Exp2 (LG2200 vs. LG2200), and Exp3 (LG4000 vs. LG4000). The breakups of the datasets used in these experiments are given in Table 3. A comparison with the baseline approach is also performed. The accuracies are reported in Fig. 2; a significant gain in accuracy is obtained with the proposed approach. DeepIrisNet-B and DeepIrisNet-A were found to obtain similar accuracy.

3.4.2. Cross-Sensor Matching
We evaluate the performance of DeepIrisNet-A for sensor interoperability. First, we performed Exp4 (LG2200 vs. LG4000). The accuracy of DeepIrisNet-A for Exp4 is much better than that of the baseline approach for cross-sensor matching. We assume that while upgrading the sensor from LG2200 (old) to LG4000 (new), images for a few subjects from LG4000 are available. Using this small set (Exp5), we fine-tuned the weights of the pre-trained (Exp4) network; a sketch of this step is given after Fig. 2. The accuracy obtained by the fine-tuned network is shown in Fig. 2, and a significant improvement in accuracy is obtained.

Figure 2: Performance Analysis using ROC curves (FRR vs. FAR). EER(%): Exp1-DeepIrisNet-A 2.23, Exp1-DeepIrisNet-B 2.19, Exp2-DeepIrisNet-A 2.40, Exp3-DeepIrisNet-A 1.82, Exp4-DeepIrisNet-A 3.12, Exp5-DeepIrisNet-A 1.91, Exp2-Baseline 7.12, Exp3-Baseline 5.30, Exp4-Baseline 8.04.
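The fine-tuning step of Exp5 can be sketched as follows, reusing the DeepIrisNetA class sketched in Section 2; the checkpoint path, class count handling, and reduced learning rate are illustrative assumptions.

```python
import torch

# Start from the network pre-trained on the old sensor (Exp4, LG2200).
model = DeepIrisNetA(num_classes=200)              # 100+100 irises in the Exp5 tuning set
state = torch.load("deepirisnet_a_lg2200.pth")     # hypothetical checkpoint path
state = {k: v for k, v in state.items()            # drop the old classifier head;
         if not k.startswith("fc13")}              # it is re-initialized for the new classes
model.load_state_dict(state, strict=False)

# Fine-tune all weights on the small LG4000 subset with a reduced
# learning rate (the exact schedule is our assumption).
optimizer = torch.optim.SGD(model.parameters(), lr=0.001,
                            momentum=0.9, weight_decay=0.0005)
```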
3.5. Robustness Analysis

Methods based on the IrisCode encode each iris pixel and arrange the codes in a sequence. The matching score between two IrisCodes depends heavily on iris segmentation, and even small segmentation variations may produce large differences in the similarity score [22]. In practical applications, while the iris is being captured, the head may not be in the same position with respect to the camera. This causes a rotated iris pattern as well as other issues such as translation and scale. Most IR systems solve the scaling issue by normalizing the iris pattern into a fixed-size rectangle, but that does not solve the problem completely; robust segmentation has therefore been a topic of great interest and is still an open challenge for non-ideal iris images. In-plane rotation is generally handled by rotating the test iris template in the left and right directions. This whole procedure of matching irises by shifting left and right is highly time consuming and may not be very effective in the case of non-linear transformations. The proposed DeepIrisNet shows invariance to such transformation variations; the invariance arises mainly from the max-pooling steps in the DeepIrisNet pipeline. To investigate this, we performed various experiments.

3.5.1. Invariance to Segmentation Variations
To evaluate the robustness of DeepIrisNet to small segmentation variations, we used four different well-known segmentation methods: CAHT [3], WAHET [15], Osiris [6], and IFFP [7]. The verification accuracies (at FAR=0.1%) are computed using Exp1 and are given in Table 4. DeepIrisNet-A is clearly more robust than the baseline when the segmentation approach changes.

TABLE 4: Effect of Segmentation (VR(%) at FAR=0.1%)
Approach       Osiris  CAHT   WAHET  IFFP
DeepIrisNet-A  97.31   96.05  97.02  93.34
Baseline       90.05   86.1   88.1   80.2

3.5.2. Invariance to Alignment/Rotation
To investigate the invariance of DeepIrisNet against rotation, we used the dataset of Exp1 with an image size of 128×128. Training is conducted using non-rotated images, but during testing, query images are rotated (before feature extraction) within a range of -ρ to +ρ pixels with a step size of 2 pixels; in the baseline approach, shifting by ρ pixels shifts 2ρ bits in the IrisCode. The verification accuracies (at FAR=0.1%) for different pixel shifts are given in Table 5, from which it can be deduced that DeepIrisNet compensates significantly for rotational variations within a practical range; a sketch of this test is given after Table 5.

TABLE 5: Image Rotation Analysis (VR(%) at FAR=0.1%)
Shift          -8     -6     -4     -2     0      +2     +4     +6     +8
DeepIrisNet-A  94.07  96.01  96.35  97.20  97.31  97.17  96.40  96.00  94.45
Baseline       58.12  68.02  76.09  81.21  87.35  81.01  75.31  68.07  58.78
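The rotation test can be sketched as follows; treating the in-plane rotation as a wrap-around horizontal pixel shift of the input image is our simplification.

```python
import torch

def rotation_sweep(model, img, gallery_feat, rho=8):
    """Distances between a gallery feature and a query image shifted by
    -rho..+rho pixels in steps of 2, as in Table 5. img: (1, 1, 128, 128)."""
    model.eval()
    scores = {}
    with torch.no_grad():
        for s in range(-rho, rho + 1, 2):
            shifted = torch.roll(img, shifts=s, dims=3)   # horizontal pixel shift
            _, feat = model(shifted)                      # feature of shifted query
            scores[s] = torch.dist(feat, gallery_feat).item()
    return scores
```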
3.6. Analysis of Other Network Parameters

3.6.1. Effect of Input Iris Size
Using our nets, we evaluated the impact of the input image size under three settings: i) 80x80, ii) 128x128, and iii) 160x160. The verification rates (at FAR=0.1%) are reported in Table 6 using the settings of Exp1. Compared to the 128x128 input, the 80x80 input causes a significant accuracy drop, whereas the 160x160 input changes accuracy very little.

TABLE 6: Input Size Analysis (VR(%) at FAR=0.1%)
               80x80  128x128  160x160
DeepIrisNet-A  93.79  97.31    97.10
DeepIrisNet-B  94.17  97.12    97.23

3.6.2. Effect of Training Size
The size of the training data has a significant impact on performance. To analyze it, we merged the LG2200 and ND-0405 datasets to create a larger dataset (2023 unique irises, 180,359 images of 128x128 size in total). We evaluated performance using the settings of Exp6 and Exp7, whose test sets are the same as those of Exp1 and Exp2, respectively. The results are given in Table 7; accuracy increases with the amount of training data.

TABLE 7: Training Size Analysis (VR(%) at FAR=0.1%)
               Exp1   Exp2   Exp6   Exp7
DeepIrisNet-A  97.31  96.97  97.87  96.68
DeepIrisNet-B  97.42  96.85  97.90  97.78

3.6.3. Effect of Network Size
To investigate the effect of the network size on accuracy, various adjustments and their effects are shown in Table 8. In each case, the model is trained from scratch with the revised architecture. Removing the intermediate (conv3, conv4) or higher (conv7, conv8) convolutional layers, or changing the size of the FC layers (FC9, FC11), decreases accuracy slightly. This confirms that both the depth and the width of the network are important for good performance.

TABLE 8: Network Size Analysis
                              Val. Accuracy (%)  VR(%) at FAR=0.1%
DeepIrisNet-A (Exp1)          98.70              97.31
Removal of Conv3 and Conv4    97.87              96.80
Removal of Conv7 and Conv8    98.04              96.16
FC9 and FC11 with 2048 units  98.18              96.24
FC9 and FC11 with 8192 units  98.78              97.02

4. CONCLUSION

We introduced a new deep network named DeepIrisNet for iris representation. To investigate the effectiveness of DeepIrisNet, we designed various experiments following the unseen pair matching paradigm, using large databases. Empirically, we demonstrated that DeepIrisNet significantly outperforms a strong descriptor-based baseline and generalizes well to new datasets. We also demonstrated that our models are robust to cross-sensor recognition and to common segmentation and transformation variations. Further, an improved model for cross-sensor matching was obtained by fine-tuning the pre-trained model (DeepIrisNet trained on the old sensor) using a subset of images from the new sensor. We then analyzed the impact of various parameters on the performance of our model, such as the size of the training data, the size of the input image, and architectural changes. As future work, we will explore the contribution of features extracted from other layers of DeepIrisNet.


5. REFERENCES

[1] J. Daugman, "High confidence visual recognition of persons by a test of statistical independence," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, pp. 1148-1161, 1993.

[2] https://sites.google.com/a/nd.edu/public-cvrl/data-sets

[3] C. Rathgeb, A. Uhl, and P. Wild, "Iris Recognition: From Segmentation to Template Security," Advances in Information Security, vol. 59, Springer Verlag, 2013.

[4] K. B. Raja, R. Raghavendra, V. K. Vemuri, and C. Busch, "Smartphone based visible iris recognition using deep sparse filtering," Pattern Recognition Letters, 2014.

[5] A. Al-Raisi and A. Al-Khouri, "Iris recognition and the challenge of homeland and border control security in UAE," Telematics and Informatics, 25(2), pp. 117-132, 2008. doi:10.1016/j.tele.2006.06.005

[6] D. Petrovska and A. Mayoue, "Description and documentation of the biosecure software library," Project No IST-2002-507634 - BioSecure, Deliverable, 2007.

[7] A. Uhl and P. Wild, "Multi-stage visible wavelength and near infrared iris segmentation framework," in Proceedings of the International Conference on Image Analysis and Recognition (ICIAR'12), ser. LNCS, 2012, pp. 1-10.

[8] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in NIPS, pp. 1106-1114, 2012.

[9] A. Gangwar, A. Joshi, and Z. Saquib, "Collarette Region Recognition Based on Wavelets and Direct Linear Discriminant Analysis," International Journal of Computer Applications, vol. 40, no. 9, pp. 35-39, February 2012.

[10] E. Krichen, L. Allano, S. Garcia-Salicetti, and B. Dorizzi, "Specific texture analysis for iris recognition," Lecture Notes in Computer Science, vol. 3546, Springer, Heidelberg, 2005.

[11] C. Liu and M. Xie, "Iris recognition based on DLDA," in International Conference on Pattern Recognition, 2006.

[12] K. Roy and P. Bhattacharya, "Iris recognition with support vector machines," in International Conference on Biometrics, 2006.

[13] J. Pillai, V. Patel, R. Chellappa, and N. Ratha, "Secure and robust iris recognition using random projections and sparse representations," IEEE TPAMI, vol. 33, pp. 1877-1893, 2011.

[14] J. Thornton, M. Savvides, and B. V. K. V. Kumar, "A Bayesian Approach to Deformed Pattern Matching of Iris Images," IEEE TPAMI, pp. 596-606, 2007.

[15] A. Uhl and P. Wild, "Weighted Adaptive Hough and Ellipsopolar transforms for real-time iris segmentation," in Proceedings of the 5th IAPR/IEEE International Conference on Biometrics (ICB'12), 2012.

[16] M. Lin, Q. Chen, and S. Yan, "Network in network," CoRR, abs/1312.4400, 2013.

[17] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel, "Backpropagation applied to handwritten zip code recognition," Neural Computation, 1(4):541-551, December 1989.

[18] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going deeper with convolutions," arXiv preprint arXiv:1409.4842, 2014.

[19] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.

[20] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Improving neural networks by preventing co-adaptation of feature detectors," CoRR, abs/1207.0580, 2012.

[21] V. Nair and G. E. Hinton, "Rectified linear units improve restricted Boltzmann machines," in Proc. ICML, 2010.

[22] H. Proença and L. A. Alexandre, "Iris recognition: Analysis of the error rates regarding the accuracy of the segmentation stage," Image and Vision Computing, 28(1), 2010.

[23] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," arXiv preprint arXiv:1502.03167, 2015.

[24] S. Arora, M. Vatsa, R. Singh, and A. Jain, "On iris camera interoperability," in Fifth International Conference on Biometrics: Theory, Applications and Systems, 2012, pp. 346-352.

[25] R. Connaughton, A. Sgroi, K. Bowyer, and P. Flynn, "A Multialgorithm Analysis of Three Iris Biometric Sensors," IEEE Transactions on Information Forensics and Security, 7(3), 2012.

[26] L. Xiao, Z. Sun, R. He, and T. Tan, "Coupled feature selection for cross-sensor iris recognition," in Sixth IEEE International Conference on Biometrics: Theory, Applications and Systems, 2013.

[27] J. K. Pillai, M. Puertas, and R. Chellappa, "Cross-sensor iris recognition through kernel learning," IEEE TPAMI, 2014.

[28] K. Bowyer, S. Baker, A. Hentz, K. Hollingsworth, T. Peters, and P. Flynn, "Factors That Degrade the Match Distribution in Iris Biometrics," Identity in the Information Society, 2009.

[29] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, "ImageNet large scale visual recognition challenge," IJCV, 2015.

[30] P. Silva, E. Luz, R. Baeta, D. Menotti, H. Pedrini, and A. X. Falcao, "An Approach to Iris Contact Lens Detection Based on Deep Image Representations," SIBGRAPI, 2015.

[31] D. Menotti, G. Chiachia, A. Pinto, W. Schwartz, H. Pedrini, A. Falcao, and A. Rocha, "Deep Representations for Iris, Face, and Fingerprint Spoofing Detection," IEEE TIFS, 2015.


