Iris Segmentation Approach Based on Adaptive Threshold Value and Circular Hough Transform
Abstract—Researchers have proposed several approaches to provide processing methodologies for iris images captured in unconstrained environments, in order to raise the accuracy of iris recognition systems. Segmentation is the most critical stage and is considered a challenging area for researchers. In this paper, we propose an iris segmentation approach to handle the problem of low contrast iris images, in which the iris boundary is undetected. It uses the pupil boundary to define a search space for automatically finding an appropriate threshold value to extract the iris region, and then uses the thresholded image to create a binary edge map with a strong iris edge. The Circular Hough Transform (CHT) is adopted to localize the pupil/iris boundaries, and the Rubber Sheet Model (RSM) of the lower half of the iris is used in the normalization stage to eliminate the upper eyelashes and eyelid. The Contrast-Limited Adaptive Histogram Equalization (CLAHE) technique is adopted to overcome the low contrast problem of the iris image. Finally, a region of interest without the impact of the lower eyelashes and eyelid is selected to obtain a noise-free iris template. The proposed approach is tested on CASIA Iris Image Dataset Version 2.0.

Keywords—Iris, Segmentation, Thresholding, Circular Hough Transform, and Rubber Sheet Model.

I. INTRODUCTION

The human iris has exceptionally rich patterns. Iris patterns have the advantages of randomness, uniqueness, complexity, and stability over the human lifetime. These advantages make the iris an ideal candidate for biometric systems [1].

An iris recognition system is divided into four main modules: iris segmentation, iris normalization, feature extraction, and matching [2]. Segmentation is the most crucial stage, since it affects all subsequent stages. The iris region is bounded by two circles: the inner circle corresponds to the pupil boundary and the outer circle corresponds to the iris boundary. Fig. 1 shows the relevant parts of the human eye.

Fig. 1. Relevant parts of the human eye

In general, the variation across the iris boundary is low compared to the pupil boundary. This makes the iris boundary a weak edge, while the pupil boundary is considered a strong one [1]. The weakness of the iris boundary represents a challenging area for researchers. Also, normalization is an important stage to overcome the problem of size inconsistencies by transforming the segmented region from Cartesian coordinates to corresponding polar coordinates, thus providing a fixed rectangular region [3].

The main motivation of this approach is the problem of undetected iris boundaries in low contrast images, which can be an obstacle for unconstrained iris recognition systems.

The contributions of this approach can be summarized in the following points:

• Automatic thresholding of the iris region without any prior image adjustment or intensive analysis.
• The proposed method considers that the iris and pupil centers are not concentric.
• The proposed method considers that the iris region is not exactly circular.

The rest of this paper is organized as follows. Sec. II states the related work on iris segmentation. Sec. III describes theoretical aspects of the major algorithms implemented in this paper. The proposed approach is illustrated in Sec. IV. Experimental results are discussed in Sec. V. Finally, the conclusions are presented in Sec. VI.

II. RELATED WORK

Segmentation is the process of localizing the inner and outer boundaries of the iris region. Most of the existing approaches require either intensive analysis or prior image adjustment. In 2014, Saad et al. [4] presented a method to localize the iris region in images obtained under unconstrained conditions. They used a contrast stretching technique to enhance the contrast and illumination in an iris image. Then, they applied local integration to enhance the contrast level between the white and black regions of the image. In 2015, Dixit and Kazi [5] suggested an optimization of the Integro-Differential Operator (IDO) that ignores a circle if any pixel on it has an intensity higher than a determined threshold. In 2016, Abdullah et al. [6] proposed a method to detect the iris in visible spectrum and near infrared images. Their method detects the iris region by means of active contours for both shrunken and expanded images. In 2016, Liu et al. [7] suggested two different iris segmentation methods, where the first method is based on Hierarchical Convolutional Neural Networks (HCNN) and the second is based on a Multi-scale Fully Convolutional Network (MFCN). Their experiment showed that the MFCN-based method has better performance than the HCNN-based one.
In 2017, Badejo et al. [8] proposed a segmentation algorithm for low contrast images. Their algorithm consists of a texture-based k-Nearest Neighbors (K-NN) classifier trained to detect pupil/iris boundaries within an iris image. Their approach showed a segmentation accuracy of 92% on 1,898 low contrast images. In 2018, Sardar et al. [9] proposed an approach for segmenting iris images captured in non-ideal environments. Their approach is based on rough entropy and Circular Sector Analysis (CSA), where thresholding based on rough entropy is adopted to localize the pupil and CSA is used to localize the iris. In 2018, Okokpujie et al. [10] presented an experimental comparison between the IDO approach and the Circular Hough Transform (CHT) approach for segmentation. Their study showed that CHT gave an accuracy of 93.06%, while IDO gave an accuracy of 91.39%, which is less by 1.67% than CHT.

III. THEORETICAL BACKGROUND

A. Image Inpainting
The intensity of the image at a point p is approximated from its neighborhood B_ε(p) as follows [11]:

I(p) = \frac{\sum_{q \in B_\varepsilon(p)} w(p,q)\,[\,I(q) + \nabla I(q)\,(p - q)\,]}{\sum_{q \in B_\varepsilon(p)} w(p,q)}    (1)

where I(p) is the approximation of the image at point p, given the image values I(q), q represents all points ∈ B_ε(p), w(p, q) is the weighting function, and ∇I(q) denotes the gradient of the image at point q.
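As a practical illustration of Eq. (1), OpenCV ships a fast-marching inpainting implementation based on [11]; the minimal sketch below uses it to fill bright specular spots. The file name, mask construction and inpainting radius are assumptions made for the example (the threshold of 180 echoes the value reported later in the experiments).

import cv2
import numpy as np

# Load the eye image in grayscale (path is a placeholder).
img = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)

# Mark candidate specular-reflection pixels; 180 is the bright-pixel
# threshold quoted in the experimental section.
_, mask = cv2.threshold(img, 180, 255, cv2.THRESH_BINARY)

# Grow the mask slightly so each bright spot is fully covered
# (kernel size and iteration count are illustrative choices).
mask = cv2.dilate(mask, np.ones((3, 3), np.uint8), iterations=1)

# Fill the masked pixels with fast-marching (Telea) inpainting, the
# method behind the weighted estimate of Eq. (1).
restored = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)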
B. Gaussian Filter
The Gaussian filter is used for noise reduction and blurring as a preprocessing step to remove small details from an image, and also for bridging small gaps in lines or curves. The two-dimensional Gaussian filter is described as follows [12]:

f(x, y) = \frac{1}{2\pi\sigma^{2}} \exp\!\left[-\frac{x^{2} + y^{2}}{2\sigma^{2}}\right]    (2)

The degree of smoothing depends on the value of the standard deviation (σ) of the Gaussian distribution.

C. Canny Edge Detector (CED)
Applying edge detection reduces the search space while preserving the fundamental structure of an image. CED has become one of the standard edge detection algorithms. It runs in 5 steps [13]:
1) Smoothing: The image is smoothed using a Gaussian filter to remove noise.
2) Finding gradients: The gradient magnitudes of the image are obtained by applying the law of Pythagoras, as follows:

G = \sqrt{G_{x}^{2} + G_{y}^{2}}    (3)

where G_x and G_y are the gradients in the horizontal and vertical directions, respectively. The edges are then marked according to the gradients that have large magnitudes.
3) Non-maximum suppression: Only the pixels that are considered to be part of an edge should be marked as edges.
4) Double thresholding: Determining potential edges by thresholding.
5) Edge tracking by hysteresis: Determining the final edges by suppressing all edges that are not connected to a strong edge.

D. Circular Hough Transform (CHT)
CHT is designed to find circles of a known radius in a given image. The convenient form of the circle equation, for a circle of center (a, b) and radius r, is expressed as follows [14]:

(x - a)^{2} + (y - b)^{2} = r^{2}

As the algorithm runs, each point (x, y) on a circle with a known radius in the original image defines a circle centered at (x, y) in the parameter space. As described in Fig. 2, if all circles in the parameter space intersect at a single point, then the intersection point represents the center point of the original circle.

Fig. 2. Principle of CHT

E. Rubber Sheet Model (RSM)
RSM is a method proposed by Daugman to transform the iris region from Cartesian coordinates to corresponding polar coordinates. The transformation is modeled as follows [15]:

I(x(r, \theta), y(r, \theta)) \rightarrow I(r, \theta)    (7)

with

x(r, \theta) = (1 - r)\,x_{p}(\theta) + r\,x_{i}(\theta)
y(r, \theta) = (1 - r)\,y_{p}(\theta) + r\,y_{i}(\theta)    (8)

where I(x, y) represents the iris region, (x, y) represents the Cartesian coordinates, (r, θ) denotes the polar coordinates corresponding to (x, y), (x_p, y_p) are the coordinates of the pupil boundary and (x_i, y_i) are the coordinates of the iris boundary along the θ direction. Fig. 3 describes the model.
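A minimal NumPy sketch of the mapping in Eqs. (7)-(8) is given below. The function name, the sampling resolution and the nearest-neighbor sampling are choices made for the example (the default 420×60 grid mirrors the normalized sizes reported later in the experiments), and the two centers are kept separate since the pupil and iris need not be concentric.

import numpy as np

def rubber_sheet(image, pupil_xy, pupil_r, iris_xy, iris_r,
                 n_theta=420, n_r=60, theta_range=(0.0, 2.0 * np.pi)):
    """Map the iris annulus to an n_r x n_theta rectangle (Eqs. 7-8)."""
    thetas = np.linspace(theta_range[0], theta_range[1], n_theta, endpoint=False)
    rs = np.linspace(0.0, 1.0, n_r)                  # normalized radius r in [0, 1]
    out = np.zeros((n_r, n_theta), dtype=image.dtype)
    for j, theta in enumerate(thetas):
        # Boundary points on the pupil and iris circles at this angle.
        xp = pupil_xy[0] + pupil_r * np.cos(theta)
        yp = pupil_xy[1] + pupil_r * np.sin(theta)
        xi = iris_xy[0] + iris_r * np.cos(theta)
        yi = iris_xy[1] + iris_r * np.sin(theta)
        # Linear interpolation between the two boundaries (Eq. 8).
        xs = (1.0 - rs) * xp + rs * xi
        ys = (1.0 - rs) * yp + rs * yi
        # Nearest-neighbor sampling of the original image (Eq. 7).
        rows = np.clip(ys.round().astype(int), 0, image.shape[0] - 1)
        cols = np.clip(xs.round().astype(int), 0, image.shape[1] - 1)
        out[:, j] = image[rows, cols]
    return out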
y_{k} = \frac{1}{N} \sum_{j=0}^{k} n_{j}    (9)
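Assuming y_k in Eq. (9) is the normalized cumulative histogram used for histogram equalization (with n_j the number of pixels at intensity j and N the total number of pixels), the sketch below builds the standard 8-bit equalization mapping from it; the function and variable names are illustrative.

import numpy as np

def equalize_histogram(gray):
    """Histogram-equalize an 8-bit grayscale image using the mapping of Eq. (9)."""
    hist = np.bincount(gray.ravel(), minlength=256)   # n_j: pixel count at intensity j
    y = np.cumsum(hist) / gray.size                   # y_k = (1/N) * sum_{j<=k} n_j
    lut = np.round(255.0 * y).astype(np.uint8)        # scale back to the 8-bit range
    return lut[gray]                                  # remap every pixel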
B. Iris Segmentation
The next stage is to segment the iris. It includes the following sub-stages: image smoothing, pupil localization, and iris localization. Considering the pupil and iris as circles, CHT can be implemented to detect these circles.

1) Image Smoothing: Noise can affect the detection of the pupil/iris boundaries. The two-dimensional Gaussian filter is used to reduce the noise and connect small gaps in the image. Smoothing the image reduces unnecessary edges, thus improving the execution time of applying CHT by ignoring unnecessary details.

2) Pupil Localization: Although the pupil boundary is considered a strong edge, detecting the pupil can fail when dealing with low contrast iris images. After noise reduction, the iris image is thresholded to filter out the pupil region. Then, the thresholded image is flood-filled to fill the holes in the extracted region. Choosing a threshold value for the pupil region is easy because of its darkness. An edge map of the flood-filled image is then extracted by applying CED. Finally, CHT is implemented to find the pupil boundary in the given edge map. Using the binary edge map rather than the original image reduces the search space for the pupil boundary to only the points in the edge map, where each point votes to instantiate the parameters of a particular contour.
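A hedged OpenCV sketch of this pupil-localization chain (threshold, flood fill, Canny, CHT) is given below. The dark-pixel threshold, Canny limits, radius bounds and Hough parameters are illustrative assumptions, and the built-in cv2.HoughCircles stands in for a hand-written CHT.

import cv2
import numpy as np

def localize_pupil(gray, dark_thresh=60):
    """Return (cx, cy, r) of the pupil, or None; parameter values are illustrative."""
    # Gaussian smoothing to suppress noise before thresholding (Sec. III-B).
    blurred = cv2.GaussianBlur(gray, (5, 5), 2)

    # The pupil is the darkest region, so a fixed low threshold isolates it.
    _, binary = cv2.threshold(blurred, dark_thresh, 255, cv2.THRESH_BINARY_INV)

    # Flood-fill from a corner and combine with the original mask, which
    # fills holes (e.g. reflections) inside the extracted pupil blob.
    filled = binary.copy()
    mask = np.zeros((gray.shape[0] + 2, gray.shape[1] + 2), np.uint8)
    cv2.floodFill(filled, mask, (0, 0), 255)
    binary = binary | cv2.bitwise_not(filled)

    # Binary edge map of the filled region (CED, Sec. III-C).
    edges = cv2.Canny(binary, 50, 150)

    # Circular Hough Transform on the edge map (Sec. III-D).
    circles = cv2.HoughCircles(edges, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                               param1=150, param2=10, minRadius=15, maxRadius=80)
    if circles is None:
        return None
    cx, cy, r = circles[0, 0]
    return int(cx), int(cy), int(r)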
3) Iris Localization: The iris edge is considered a weak edge because of the low variation across the iris boundary. Choosing a threshold value for such a region is an obstacle. To overcome this problem, the pupil boundary detected earlier is used to find an appropriate threshold for the iris region. The proposed method for finding the threshold is illustrated in Algorithm I. Based on the localized pupil center (x_p, y_p), we propose two circles with known radii r_inner and r_outer. These circles are assumed to surround the estimated iris boundary. The summation of intensity values between the predefined circles is computed over the arcs of two sectors defined by the angular ranges θ = (195°, 210°) ∪ (330°, 345°), as shown in Fig. 7. These sectors are relatively occlusion free, thus providing correct threshold detection. The iris and pupil centers are not strictly concentric, but they are extremely close, so a search area of five pixels around the pupil center is used to estimate the iris center. Moreover, since the iris is not exactly a circle, the summation of intensities in each sector is computed independently, thus increasing the computation accuracy.

The maximum variation among the intensity summations indicates the iris boundary; then, the average of the intensities of the current summation is chosen as the threshold value for creating the thresholded image. Also, the arc corresponding to the maximum variation can be used to estimate the iris radius range when applying CHT. The intensity values over the arcs of the two sectors are obtained by the following formula:

\text{Intensity} = I\!\left(x_{i} - r_{i}\sin(\theta),\; y_{i} + r_{i}\cos(\theta)\right)    (10)

where I is the iris image, (x_i, y_i) represents all the points around the pupil center in the 11×11 search area, r_i denotes the range between the radii of the predefined circles, and θ represents the angular range of the non-occluded sectors.

Fig. 7. The proposed method to find a proper threshold

A binary image is created based on the detected threshold value, and then the iris boundary is localized with the radius range (r_est ± 10), following the same steps described in the pupil localization stage.

Algorithm I: Finding Threshold
Input:   I                   // iris image
         r_inner             // inner circle radius
         r_outer             // outer circle radius
         (x_p, y_p)          // pupil center point
         r_p                 // pupil radius
         Aε(x_p, y_p)        // iris center lookup area
         θ_right             // the range (13π/12, 7π/6)
         θ_left              // the range (11π/6, 23π/12)
Output:  i_thresh            // threshold for iris boundary
         r_est               // the estimated iris boundary
Begin
  diff_max ← 0
  for all (x_i, y_i) ∈ Aε(x_p, y_p)          // estimated iris center
    for r_i in range (r_inner, r_outer)
      for θ ∈ θ_right ∪ θ_left
        intensities[] ← I(x_i − r_i sin(θ), y_i + r_i cos(θ))
      end for
      Calculate the summation for the right half, sum_right
      Calculate the summation for the left half, sum_left
      Calculate the right difference, diff_right
      Calculate the left difference, diff_left
      if diff_right > diff_max then
        diff_max ← diff_right
        i_thresh ← Average(intensities[θ_right])
        r_est ← r_i                           // the value to define the radius range
      end if
      if diff_left > diff_max then
        diff_max ← diff_left
        i_thresh ← Average(intensities[θ_left])
        r_est ← r_i                           // the value to define the radius range
      end if
    end for
  end for
End
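A compact NumPy sketch of the threshold search in Algorithm I is given below: intensities are sampled on the two occlusion-free arcs per Eq. (10) for each candidate center and radius, and the ring whose arc-mean intensity changes the most from the previous ring is taken as the iris boundary. Treating the "difference" as the jump between consecutive radii, computed separately for each arc, is an assumption of this sketch, as are the sampling density and parameter values.

import numpy as np

# Occlusion-free arcs used by Algorithm I (in radians).
THETA_RIGHT = np.linspace(13 * np.pi / 12, 7 * np.pi / 6, 16)
THETA_LEFT = np.linspace(11 * np.pi / 6, 23 * np.pi / 12, 16)

def find_iris_threshold(image, pupil_xy, r_inner, r_outer, search=5):
    """Estimate (i_thresh, r_est) in the spirit of Algorithm I."""
    h, w = image.shape
    best_diff, i_thresh, r_est = 0.0, None, None
    px, py = pupil_xy

    for xc in range(px - search, px + search + 1):        # 11x11 center lookup area
        for yc in range(py - search, py + search + 1):
            prev = {"right": None, "left": None}
            for r in range(r_inner, r_outer + 1):
                for name, thetas in (("right", THETA_RIGHT), ("left", THETA_LEFT)):
                    # Sample the arc as in Eq. (10).
                    xs = np.clip((xc - r * np.sin(thetas)).round().astype(int), 0, w - 1)
                    ys = np.clip((yc + r * np.cos(thetas)).round().astype(int), 0, h - 1)
                    mean_i = image[ys, xs].mean()
                    if prev[name] is not None:
                        diff = abs(mean_i - prev[name])
                        if diff > best_diff:              # largest variation so far
                            best_diff, i_thresh, r_est = diff, mean_i, r
                    prev[name] = mean_i
    return i_thresh, r_est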
C. Lower Half Normalization
In this approach, the RSM of the lower half of the iris is implemented to eliminate the noise due to the upper eyelashes and eyelid. As explained in Section III, the implementation steps are the same as those of the basic RSM, but the range of θ is (π, 2π) instead of (0, 2π). This method is highly effective when most of the upper half of the iris region is occluded and has no patterns to be extracted. Therefore, choosing a non-occluded portion instead of the full iris region becomes unavoidable. Choosing half of the iris region for matching would not affect the recognition accuracy, since the highly randomized patterns make it almost impossible for any portion of the irises of two persons to match. Algorithm II briefly describes the implementation steps of the lower half RSM.

Algorithm II: Lower Half RSM
Input:   I                        // iris image
         (x_pupil, y_pupil)       // pupil center point
         r_pupil                  // pupil radius
         r_iris                   // iris radius
Output:  I_norm                   // normalized iris image
Begin
  for θ in range (π, 2π)                              // the angular range of the lower portion
    x_inner ← x_pupil + r_pupil * cos(θ)              // (x, y) coordinates of the inner boundary
    y_inner ← y_pupil + r_pupil * sin(θ)
    x_outer ← x_pupil + r_iris * cos(θ)               // (x, y) coordinates of the outer boundary
    y_outer ← y_pupil + r_iris * sin(θ)
    for r in range (r_pupil, r_iris)
      r_norm ← (r − r_pupil) / (r_iris − r_pupil)     // normalized radius
      x_c ← (1 − r_norm) * x_inner + r_norm * x_outer // Cartesian coordinates
      y_c ← (1 − r_norm) * y_inner + r_norm * y_outer
      I_norm(θ, r) ← I(x_c, y_c)                      // polar mapping
    end for
  end for
End

V. EXPERIMENTAL RESULTS

Fig. 8 shows how, with a threshold value of 180, specular reflection spots are removed from the image without affecting the iris region.

Fig. 8. Specular reflection removal: (a) original image, and (b) reflection-removed image

All segmentation steps are illustrated by the images in Fig. 9. The images are chosen from both subsets of the CASIA dataset, which contains 2,400 images in total. These images suffer from iris segmentation problems: applying traditional methods to localize the iris boundaries fails immediately. The experiment showed that most images of subset 1 have a problem with iris edge detection, while most images of subset 2 have a problem with both iris and pupil edge detection. Group (b) of Fig. 9 shows the results of applying the CED algorithm directly on the original images, without passing them through the proposed system stages.

Fig. 10. Full RSM and lower half RSM: (a) fully normalized image of size (420×60), and (b) half normalized image of size (210×60)
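Assuming the rubber_sheet sketch given after Section III, the lower-half normalization of Fig. 10 reduces to restricting its angular range; the call below is only a usage example, with image, pupil_xy, pupil_r, iris_xy and iris_r standing for previously computed values.

import numpy as np

# Lower-half RSM: sample only θ ∈ (π, 2π) with half the angular samples,
# giving the 210x60 strip of Fig. 10(b) instead of the full 420x60 one.
lower_half = rubber_sheet(image, pupil_xy, pupil_r, iris_xy, iris_r,
                          n_theta=210, n_r=60,
                          theta_range=(np.pi, 2.0 * np.pi))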
Images after normalization are enhanced using CLAHE with clip limit = 2.0 and block size (32×32). Fig. 11 shows how CLAHE has an advantage over AHE.

Fig. 11. A comparison between CLAHE and AHE: (a) low contrast image, (b) enhanced image using AHE, and (c) enhanced image using CLAHE
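A minimal OpenCV sketch of this enhancement step is shown below. How the paper's "block size (32×32)" maps onto OpenCV's tileGridSize (a number of tiles rather than a pixel size) is an assumption of the sketch, and `normalized` stands for the half-normalized 8-bit strip from the previous stage.

import cv2

# CLAHE with the reported clip limit of 2.0; the (32, 32) tile grid is an
# assumed reading of the paper's "block size (32x32)".
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(32, 32))
enhanced = clahe.apply(normalized)  # 'normalized' is the 210x60 strip (uint8)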
Finally, only the upper portion of size (210×48) is cropped from the enhanced image as the ROI. The lower eyelashes and eyelid are eliminated as the lower portion of size (210×12) is discarded. The resultant image, as shown in Fig. 12, is relatively noise free. The dashed line separates the ROI from the discarded region.

Fig. 12. Selecting the ROI: (a) enhanced image of size (210×60), and (b) ROI of size (210×48)
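A one-line sketch of the ROI selection, assuming the CLAHE output above is stored as a height × width (60 × 210) array named `enhanced`:

# Keep the top 48 rows as the ROI and drop the bottom 12 rows, which carry
# the lower eyelashes and eyelid (array layout: height x width = 60 x 210).
roi = enhanced[:48, :]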
VI. CONCLUSIONS

Images obtained under unconstrained conditions suffer from the low contrast problem. In this paper, an effective iris segmentation approach has been proposed to deal with low contrast iris images. By using an adaptive threshold value, there is no need to manually define the threshold for extracting the iris region: once the pupil boundary is detected, the threshold for the iris region is determined automatically. Therefore, the problem of weak iris edges is eliminated. The method mainly depends on the success of pupil localization; if pupil localization fails, the method fails. However, defining a threshold to extract the pupil is not an issue, since the variation across the pupil boundary is relatively high. The approach was tested on CASIA Iris Image Dataset Version 2.0. The images of this dataset have a low contrast problem, and most of them suffer from undetected pupil/iris boundaries. Applying traditional segmentation methods to these images would fail immediately. The proposed approach can be relied on for iris recognition systems operating on low contrast images.
REFERENCES
[1] C. Houston, “Iris Segmentation and Recognition Using Circular Hough Transform and Wavelet Features,” Rochester Inst. Technol., 2010.
[2] M. S. Singh and S. Singh, “Iris Segmentation Along with Noise Detection using Hough Transform,” Int. J. Eng. Tech. Res., vol. 3, no. 5, pp. 440–444, 2015.
[3] M. R. Faundra and D. R. Sulistyaningrum, “Iris Segmentation and Normalization Algorithm Based on Zigzag Collarette,” in Journal of Physics: Conference Series, 2017, vol. 795, no. 1, p. 12049.
[4] I. A. Saad, L. E. George, and A. A. Tayyar, “Accurate and fast pupil localization using contrast stretching, seed filling and circular geometrical constraints,” J. Comput. Sci., 2014.
[5] A. J. Dixit and M. K. S. Kazi, “IRIS Recognition by Daugman's Method,” IJLTEMAS, India, vol. 4, 2015.
[6] M. A. M. Abdullah, S. S. Dlay, W. L. Woo, and J. A. Chambers, “Robust Iris Segmentation Method Based on a New Active Contour Force with a Noncircular Normalization,” IEEE Trans. Syst. Man, Cybern. Syst., vol. 47, no. 12, pp. 3128–3141, 2016.
[7] N. Liu, H. Li, M. Zhang, J. Liu, Z. Sun, and T. Tan, “Accurate iris segmentation in non-cooperative environments using fully convolutional networks,” 2016 Int. Conf. Biometrics, ICB 2016, 2016.
[8] J. A. Badejo, A. A. Atayero, and T. S. Ibiyemi, “A robust preprocessing algorithm for iris segmentation from low contrast eye images,” in FTC 2016 - Proceedings of Future Technologies Conference, 2017.
[9] M. Sardar, S. Mitra, and B. Uma Shankar, “Iris localization using rough entropy and CSA: A soft computing approach,” Appl. Soft Comput. J., vol. 67, pp. 61–69, 2018.
[10] K. Okokpujie, E. Noma-Osaghae, S. John, and A. Ajulibe, “An improved iris segmentation technique using circular Hough transform,” in IT Convergence and Security 2017, Springer, 2018, pp. 203–211.
[11] A. Telea, “An image inpainting technique based on the fast marching method,” J. Graph. Tools, vol. 9, no. 1, pp. 23–34, 2004.
[12] C. Solomon and T. Breckon, Fundamentals of Digital Image Processing: A Practical Approach with Examples in Matlab. John Wiley & Sons, 2011.
[13] G. Mandloi, “A Survey on Feature Extraction Techniques for Color Images,” Int. J. Comput. Sci. Inf. Technol., vol. 3, no. 3, pp. 14–18, 2013.
[14] A. S. Hassanein, S. Mohammad, M. Sameer, and M. E. Ragab, “A Survey on Hough Transform, Theory, Techniques and Applications,” arXiv preprint arXiv:1502.02160, 2015.
[15] J. Daugman, “New methods in iris recognition,” IEEE Trans. Syst. Man, Cybern. Part B, vol. 37, no. 5, pp. 1167–1175, 2007.
[16] A. Murugan and G. Savithiri, “Feature extraction on half iris for personal identification,” Proc. 2010 Int. Conf. Signal Image Process. ICSIP 2010, vol. 1, no. 1, pp. 197–200, 2010.
[17] G. Savithiri and A. Murugan, “Performance Analysis on Half Iris Feature Extraction using GW, LBP and HOG,” Int. J. Comput. Appl., vol. 22, no. 2, pp. 27–32, 2011.
[18] R. Krutsch and D. Tenorio, “Histogram equalization,” Doc. Number AN4318, Appl. Note Rev. 0, 2011.
[19] R. Singh and B. Patel, “A Method of Medical Image Contrast Enhancement Using Two Steps of Contrast Limited Adaptive Histogram Equalization,” Int. J. Sci. Eng. Technol. Res., vol. 7, no. 3, pp. 2278–7798, 2018.
[20] S. Jenifer, S. Parasuraman, and A. Kadirvelu, “Contrast enhancement and brightness preserving of digital mammograms using fuzzy clipped contrast-limited adaptive histogram equalization algorithm,” Appl. Soft Comput. J., vol. 42, pp. 167–177, 2016.