Deepirisnet: Deep Iris Representation With Applications in Iris Recognition and Cross-Sensor Iris Recognition
TABLE 3: Details of Experiments and Evaluation Datasets

Exp. No. | Training/validation set | Unique irises | Training set size | Validation set size | Target set | Unique target irises | Target set size | Query set | Unique query irises | Query set size | Match scores | Non-match scores
1 | ND-0405 | 400 | 49152 | 5856 | ND-0405 | 192 | 2327 | ND-0405 | 312 | 7400 | 138k | 17081k
2 | LG2200 | 891 | 92335 | 10910 | LG2200 | 279 | 2790 | LG2200 | 461 | 9589 | 115k | 26637k
3 | LG4000 | 573 | 19279 | 2012 | LG4000 | 445 | 1678 | LG4000 | 779 | 6742 | 33k | 11279k
4 | LG2200 | 891 | 92335 | 10910 | LG2200 | 223 | 2686 | LG4000 | 223 | 2220 | 217k | 5740k
5 | LG2200+LG4000 | 100+100 | 1000+1000 | 200+200 | LG2200 | 223 | 2686 | LG4000 | 223 | 2220 | 217k | 5740k
6 | LG2200+ND-0405 | 1269 | 142937 | 16924 | ND-0405 | 192 | 2327 | ND-0405 | 312 | 7400 | 138k | 17081k
7 | LG2200+ND-0405 | 1269 | 142937 | 16924 | LG2200 | 279 | 2790 | LG2200 | 461 | 9589 | 115k | 26637k
from LG4000 and 116,564 images from LG2200 for 676 unique subjects. Most of the subjects in ND-0405 and

the problem completely and hence, robust segmentation has been a topic of great interest and is still an open challenge for non-ideal iris images. In-plane rotation is generally handled by rotating the test iris template in the left and right directions. This whole procedure of matching irises by shifting left and right is highly time consuming and may not be very effective in the case of non-linear transformations. The proposed DeepIrisNet shows invariance to such transformation variations. This invariance arises mainly from the max-pooling steps in the DeepIrisNet pipeline. To investigate this, we performed various experiments.
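To make the max-pooling argument concrete, the following toy sketch (an illustration added here, not code from the paper) compares overlapping max-pooled responses of a smooth 1-D activation profile before and after a one-pixel shift; it assumes AlexNet-style overlapping pooling (window 3, stride 2) and simply reports how many pooled values survive the shift.

```python
import numpy as np

def max_pool_1d(x, window=3, stride=2):
    """Overlapping 1-D max pooling (window 3, stride 2)."""
    n_out = (len(x) - window) // stride + 1
    return np.array([x[i * stride : i * stride + window].max() for i in range(n_out)])

rng = np.random.default_rng(0)
# A smooth "activation profile": white noise blurred with a short averaging filter.
activation = np.convolve(rng.standard_normal(256), np.ones(9) / 9.0, mode="same")
shifted = np.roll(activation, 1)  # the same profile displaced by one pixel

pooled_ref, pooled_shift = max_pool_1d(activation), max_pool_1d(shifted)
# Fraction of pooled responses that are unchanged by the one-pixel shift.
print(f"unchanged pooled responses: {np.isclose(pooled_ref, pooled_shift).mean():.0%}")
```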
Figure 2: Performance Analysis using ROC (FRR vs. FAR). EER(%): Exp1-DeepIrisNet-A 2.23, Exp1-DeepIrisNet-B 2.19, Exp2-DeepIrisNet-A 2.40, Exp3-DeepIrisNet-A 1.82, Exp4-DeepIrisNet-A 3.12, Exp5-DeepIrisNet-A 1.91, Exp2-Baseline 7.12, Exp3-Baseline 5.30, Exp4-Baseline 8.04.
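For reference, the verification metrics reported in this section (VR at a fixed FAR, and EER as in Figure 2) can be computed from genuine (match) and impostor (non-match) score sets as in the short sketch below. This is a generic illustration, not the authors' evaluation code, and it assumes similarity scores where higher means a better match (distance scores would be negated first).

```python
import numpy as np

def verification_metrics(genuine, impostor, far_target=0.001):
    """VR at a target FAR and EER, from genuine (match) and impostor (non-match)
    similarity scores, where higher scores indicate a better match."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])  # false accept rate
    frr = np.array([(genuine < t).mean() for t in thresholds])    # false reject rate
    # VR at FAR = far_target: lowest threshold whose FAR stays within the target.
    ok = np.where(far <= far_target)[0]
    vr = 1.0 - frr[ok[0]] if ok.size else 0.0
    # EER: operating point where FAR and FRR are closest to each other.
    i = int(np.argmin(np.abs(far - frr)))
    return 100.0 * vr, 100.0 * (far[i] + frr[i]) / 2.0

# Toy usage with synthetic similarity scores.
rng = np.random.default_rng(1)
vr, eer = verification_metrics(rng.normal(0.8, 0.05, 1000), rng.normal(0.5, 0.08, 10000))
print(f"VR at FAR=0.1%: {vr:.2f}%   EER: {eer:.2f}%")
```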
3.5.1. Invariance to Segmentation Variations
To evaluate the robustness of DeepIrisNet to small segmentation variations, we used four different well-known segmentation methods: CAHT [3], WAHET [15], Osiris [6], and IFFP [7]. The verification accuracies (at FAR=0.1%) are computed using Exp1 and are given in Table 4. It is clearly seen that DeepIrisNet-A is comparatively more robust than the baseline when the segmentation approach changes.

TABLE 4: Effect of Segmentation (VR(%) at FAR=0.1%)
Approach | Osiris | CAHT | WAHET | IFFP
DeepIrisNet-A | 97.31 | 96.05 | 97.02 | 93.34
Baseline | 90.05 | 86.1 | 88.1 | 80.2
3.5.2. Invariance to Alignment/Rotation
To investigate the invariance of DeepIrisNet against rotation, we used the dataset of Exp1 with an image size of 128x128. Training is conducted using non-rotated images, but during testing the query images are rotated (before feature extraction) within a range of -ρ to +ρ pixels with a step size of 2 pixels. In the baseline approach, shifting by ρ pixels shifts 2ρ bits in the IrisCode. The verification accuracies (at FAR=0.1%) for the different pixel shifts are given in Table 5; it can be deduced that DeepIrisNet provides significant compensation for rotational variations within a reasonable range.

TABLE 5: Image Rotation Analysis (VR(%) at FAR=0.1%)
Shift (pixels) | -8 | -6 | -4 | -2 | 0 | +2 | +4 | +6 | +8
DeepIrisNet-A | 94.07 | 96.01 | 96.35 | 97.20 | 97.31 | 97.17 | 96.40 | 96.00 | 94.45
Baseline | 58.12 | 68.02 | 76.09 | 81.21 | 87.35 | 81.01 | 75.31 | 68.07 | 58.78
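As a concrete illustration of the baseline shift-and-match procedure described above, the following sketch (our illustration, not the paper's implementation) scores a pair of binary IrisCodes by taking the minimum fractional Hamming distance over circular bit shifts, where a rotation of ρ pixels in the normalized iris corresponds to 2ρ code bits. It treats the code as a single angular row, ignores occlusion masks for simplicity, and the 2048-bit code length is an assumption.

```python
import numpy as np

def shifted_hamming_distance(code_a, code_b, max_pixel_shift=8, bits_per_pixel=2):
    """Fractional Hamming distance between two binary iris codes, minimized over
    circular shifts; a rotation of rho pixels corresponds to 2*rho code bits."""
    best = 1.0
    for rho in range(-max_pixel_shift, max_pixel_shift + 1):
        shifted = np.roll(code_b, rho * bits_per_pixel)
        best = min(best, np.count_nonzero(code_a != shifted) / code_a.size)
    return best

# Toy usage: a code matched against a rotated copy of itself and against a random code.
rng = np.random.default_rng(2)
code = rng.integers(0, 2, size=2048, dtype=np.uint8)   # assumed code length
rotated = np.roll(code, 3 * 2)                          # simulate a 3-pixel rotation
print(shifted_hamming_distance(code, rotated))          # 0.0 once the right shift is tried
print(shifted_hamming_distance(code, rng.integers(0, 2, size=2048, dtype=np.uint8)))  # close to 0.5
```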
3.6. Analysis of Other Network Parameters

3.6.1 Effect of Input Iris Size
Using our nets, we evaluated the impact of the input image size under three different settings: i) 80x80, ii) 128x128, and iii) 160x160. The verification rates (at FAR=0.1%) are reported in Table 6 using the settings of Exp1. It can be seen from the results that, compared to the 128x128 input, the 80x80 image size causes a significant accuracy drop, whereas the 160x160 image size brings little change.

TABLE 6: Input Size Analysis (VR(%) at FAR=0.1%)
Model | 80x80 | 128x128 | 160x160
DeepIrisNet-A | 93.79 | 97.31 | 97.10
DeepIrisNet-B | 94.17 | 97.12 | 97.23

3.6.2 Effect of Training Size
The size of the training data has a significant impact on performance. To analyze it, we merged the LG2200 and ND-0405 datasets and created a larger dataset (2023 unique irises and 180,359 images in total, of size 128x128). We evaluated the performance of DeepIrisNet-A and DeepIrisNet-B using the settings of Exp6 and Exp7, in which the tests are the same as in Exp1 and Exp2, respectively. The results are given in Table 7, and a gain in accuracy is observed with the increase in training data.

TABLE 7: Training Size Analysis (VR(%) at FAR=0.1%)
Model | Exp1 | Exp2 | Exp6 | Exp7
DeepIrisNet-A | 97.31 | 96.97 | 97.87 | 96.68
DeepIrisNet-B | 97.42 | 96.85 | 97.90 | 97.78
3.6.3 Effect of Network Size
To investigate the effect of the network size on accuracy, various adjustments and their effects are shown in Table 8. In each case, the model is trained from scratch with the revised architecture. Removing the intermediate conv. layers (3, 4) or the higher conv. layers (7, 8), or changing the size of the FC layers (9, 11), decreases accuracy slightly. This confirms that the depth and width of the network are important for achieving good performance.

TABLE 8: Network Size Analysis (VR(%) at FAR=0.1%)
Configuration | Val. Accuracy (%) | VR(%) at FAR=0.1%
DeepIrisNet-A (Exp1) | 98.70 | 97.31
Removal of Conv3 and Conv4 | 97.87 | 96.80
Removal of Conv7 and Conv8 | 98.04 | 96.16
FC9 and FC11 with 2048 units | 98.18 | 96.24
FC9 and FC11 with 8192 units | 98.78 | 97.02
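To show how such architectural ablations can be organized, the sketch below (a simplified, hypothetical PyTorch layout, not the paper's actual network definition) builds one model per row of Table 8 by dropping conv pairs or resizing the hidden fully connected layers. The channel counts, kernel sizes, and class count are placeholders; the real DeepIrisNet configuration is specified in the architecture section of the paper.

```python
import torch.nn as nn

def build_variant(drop_convs=(), fc_units=4096, n_classes=1000):
    """Assemble a simplified DeepIrisNet-like variant for an ablation study.
    drop_convs: conv blocks to remove, e.g. ("conv3", "conv4").
    fc_units:   width of the hidden FC layers (2048 / 4096 / 8192 as in Table 8).
    NOTE: channel counts, kernel sizes, and n_classes are illustrative placeholders."""
    channels = {"conv1": 32, "conv2": 64, "conv3": 64, "conv4": 128,
                "conv5": 128, "conv6": 192, "conv7": 192, "conv8": 256}
    layers, prev = [], 1  # single-channel normalized iris input
    for name, ch in channels.items():
        if name in drop_convs:
            continue  # e.g. the "Removal of Conv3 and Conv4" row of Table 8
        layers += [nn.Conv2d(prev, ch, kernel_size=3, padding=1), nn.ReLU(inplace=True)]
        prev = ch
    layers.append(nn.AdaptiveAvgPool2d(4))  # fixed-size map regardless of remaining depth
    layers += [
        nn.Flatten(),
        nn.Linear(prev * 4 * 4, fc_units), nn.ReLU(inplace=True), nn.Dropout(0.5),
        nn.Linear(fc_units, fc_units), nn.ReLU(inplace=True), nn.Dropout(0.5),
        nn.Linear(fc_units, n_classes),  # classification head used during training
    ]
    return nn.Sequential(*layers)

# One model per row of Table 8; each would be trained from scratch.
variants = {
    "full": build_variant(),
    "no_conv3_conv4": build_variant(drop_convs=("conv3", "conv4")),
    "no_conv7_conv8": build_variant(drop_convs=("conv7", "conv8")),
    "fc_2048": build_variant(fc_units=2048),
    "fc_8192": build_variant(fc_units=8192),
}
```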
4. CONCLUSION
We introduced a new deep network, named DeepIrisNet, for iris representation. To investigate the effectiveness of DeepIrisNet, we designed various experiments following an unseen-pair matching paradigm, using large databases. Empirically, we demonstrated that DeepIrisNet significantly outperforms a strong descriptor-based baseline and generalizes well to new datasets. We also demonstrated that our models are robust in cross-sensor recognition and to common segmentation and transformation variations. Furthermore, an improved model for cross-sensor matching was presented by fine-tuning the pretrained model (DeepIrisNet trained on the old sensor) with a subset of images from the new sensor. We then analyzed the impact of various parameters on the performance of our model, such as the size of the training data, the size of the input image, and architectural changes. As future work, we will explore the contribution of features extracted from other layers of DeepIrisNet.
5. REFERENCES

[1] J. Daugman, "High confidence visual recognition of persons by a test of statistical independence," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, pp. 1148-1161, 1993.

[2] https://sites.google.com/a/nd.edu/public-cvrl/data-sets

[3] C. Rathgeb, A. Uhl, and P. Wild, "Iris Recognition: From Segmentation to Template Security," Advances in Information Security, vol. 59, Springer Verlag, 2013.

[4] K. B. Raja, R. Raghavendra, V. K. Vemuri, and C. Busch, "Smartphone based visible iris recognition using deep sparse filtering," Pattern Recognition Letters, 2014.

[5] A. Al-Raisi and A. Al-Khouri, "Iris recognition and the challenge of homeland and border control security in UAE," Telematics and Informatics, 25(2), pp. 117-132, 2008. doi: 10.1016/j.tele.2006.06.005

[6] D. Petrovska and A. Mayoue, "Description and documentation of the BioSecure software library," Project No IST-2002-507634 - BioSecure, Deliverable, 2007.

[7] A. Uhl and P. Wild, "Multi-stage visible wavelength and near infrared iris segmentation framework," in Proceedings of the International Conference on Image Analysis and Recognition (ICIAR'12), ser. LNCS, 2012, pp. 1-10.

[8] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in NIPS, pp. 1106-1114, 2012.

[9] A. Gangwar, A. Joshi, and Z. Saquib, "Collarette Region Recognition Based on Wavelets and Direct Linear Discriminant Analysis," International Journal of Computer Applications, vol. 40, no. 9, pp. 35-39, February 2012.

[10] E. Krichen, L. Allano, S. Garcia-Salicetti, and B. Dorizzi, "Specific texture analysis for iris recognition," Lecture Notes in Computer Science, vol. 3546, Springer, Heidelberg, 2005.

[11] C. Liu and M. Xie, "Iris recognition based on DLDA," in International Conference on Pattern Recognition, 2006.

[12] K. Roy and P. Bhattacharya, "Iris recognition with support vector machines," in International Conference on Biometrics, 2006.

[13] J. Pillai, V. Patel, R. Chellappa, and N. Ratha, "Secure and robust iris recognition using random projections and sparse representations," IEEE TPAMI, vol. 33, pp. 1877-1893, 2011.

[14] J. Thornton, M. Savvides, and B. V. K. V. Kumar, "A Bayesian Approach to Deformed Pattern Matching of Iris Images," IEEE TPAMI, pp. 596-606, 2007.

[15] A. Uhl and P. Wild, "Weighted Adaptive Hough and Ellipsopolar transforms for real-time iris segmentation," in Proceedings of the 5th IAPR/IEEE International Conference on Biometrics (ICB'12), 2012.

[17] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel, "Backpropagation applied to handwritten zip code recognition," Neural Computation, 1(4):541-551, December 1989.

[18] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going deeper with convolutions," arXiv preprint arXiv:1409.4842, 2014.

[19] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.

[20] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Improving neural networks by preventing co-adaptation of feature detectors," CoRR, abs/1207.0580, 2012.

[21] V. Nair and G. E. Hinton, "Rectified linear units improve restricted Boltzmann machines," in Proc. ICML, 2010.

[22] H. Proença and L. A. Alexandre, "Iris recognition: Analysis of the error rates regarding the accuracy of the segmentation stage," Image and Vision Computing, 28(1), 2010.

[23] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," arXiv preprint arXiv:1502.03167, 2015.

[24] S. Arora, M. Vatsa, R. Singh, and A. Jain, "On iris camera interoperability," in Fifth International Conference on Biometrics: Theory, Applications and Systems, 2012, pp. 346-352.

[25] R. Connaughton, A. Sgroi, K. Bowyer, and P. Flynn, "A Multialgorithm Analysis of Three Iris Biometric Sensors," IEEE Transactions on Information Forensics and Security, 7(3), 2012.

[26] L. Xiao, Z. Sun, R. He, and T. Tan, "Coupled feature selection for cross-sensor iris recognition," in Sixth IEEE International Conference on Biometrics: Theory, Applications and Systems, 2013.

[27] J. K. Pillai, M. Puertas, and R. Chellappa, "Cross-sensor iris recognition through kernel learning," IEEE TPAMI, 2014.

[28] K. Bowyer, S. Baker, A. Hentz, K. Hollingsworth, T. Peters, and P. Flynn, "Factors That Degrade the Match Distribution in Iris Biometrics," Identity in the Information Society, 2009.

[29] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, S. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and F. F. Li, "ImageNet large scale visual recognition challenge," IJCV, 2015.

[30] P. Silva, E. Luz, R. Baeta, D. Menotti, H. Pedrini, and A. X. Falcao, "An Approach to Iris Contact Lens Detection Based on Deep Image Representations," SIBGRAPI, 2015.

[31] D. Menotti, G. Chiachia, A. Pinto, W. Schwartz, H. Pedrini, A. Falcao, and A. Rocha, "Deep Representations for Iris, Face, and Fingerprint Spoofing Detection," IEEE TIFS, 2015.