Research Article
Analysis of Vessel Segmentation Based on Various Enhancement
Techniques for Improvement of Vessel Intensity Profile
Correspondence should be addressed to SeongKi Kim; [email protected] and Muhammad Fazal Ijaz; [email protected]
Received 13 May 2022; Revised 31 May 2022; Accepted 7 June 2022; Published 28 June 2022
Copyright © 2022 Sonali Dash et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
It is vital to develop an appropriate prediction model and to link it carefully to measurable events such as clinical parameters and patient outcomes in order to analyze the severity of the disease. Timely identification of retinal diseases is becoming more vital to prevent blindness among young people and adults. Investigation of the blood vessels delivers preliminary information on the existence and treatment of glaucoma, retinopathy, and so on. During the analysis of diabetic retinopathy, one of the essential steps is to extract the retinal blood vessels accurately. This study presents an improved Gabor filter obtained through various enhancement approaches. Degraded images in which certain features are enhanced can simplify image interpretation both for a human observer and for machine recognition. Thus, in this work, a few enhancement approaches, such as gamma corrected adaptively with distributed weight (GCADW), joint equalization of histogram (JEH), the homomorphic filter, the unsharp masking filter, the adaptive unsharp masking filter, and the particle swarm optimization (PSO) based unsharp masking filter, are taken into consideration. In this paper, an effort has been made to improve the performance of the Gabor filter by combining it with different enhancement methods and thereby to enhance the detection of blood vessels. The performance of all the suggested approaches is assessed on publicly available databases such as DRIVE and CHASE_DB1. The results of all the integrated enhancement techniques are analyzed, discussed, and compared. The best result is delivered by the PSO unsharp masking filter combined with the Gabor filter, with an accuracy of 0.9593 for the DRIVE database and 0.9685 for the CHASE_DB1 database. The results illustrate the robustness of the recommended model in automatic blood vessel segmentation, which makes it a possible clinical decision support tool in diabetic retinopathy diagnosis.
length analysis, orientation, and thickness can clarify the assessment of retinopathy of prematurity, the identification of arteriolar reduction, and the evaluation of vessel width for the recognition of ailments such as diabetes, arteriosclerosis, hypertension, and so on [1–3]. Many ideas have been reported to improve blood vessel segmentation by computing the contrast between retinal blood vessels and the background. Computerized evaluation of the vasculature has been widely recognized as the initial stage in the development of a computer-aided diagnostic scheme for ocular ailments. Several rules have been recommended for vessel segmentation [4]. A few commonly suggested algorithms for vessel extraction are discussed here. Some authors have proposed a vessel segmentation process using a matched filter: images at different scales are convolved with the filter, and the highest output is noted at each pixel [5–7]. Under the assumption of elongated vessels, Staal et al. presented a ridge-based vessel segmentation in which image ridges are transformed into line elements [8]. An adaptive local multi-threshold probing algorithm has also been recommended; for multi-threshold probing, different thresholds are applied in series during the computation [8]. In the literature, automatic vessel tree segmentation by combining shifted filter responses (COSFIRE) has been introduced [9, 10]. The literature also suggests using B-COSFIRE and generalized matrix learning vector quantization (GMLVQ) to detect the blood vessels [11]. Mapayi et al. have discussed and compared vessel segmentation based on global thresholding [12]. Many filters have been introduced by various researchers for retinal blood vessel segmentation, such as the median, Gaussian, matched, Gabor, cake, steerable, and Frangi filters, among many others [5, 13–19].

Afterwards, many extensions of the existing techniques in several directions have been recommended for blood vessel segmentation [20–22]. A few of these extended filters are discussed here. The original median filter has been extended as the improved median filter (IMF), the hybrid median filter (HMF), and the weighted median filter (WMF) for vessel segmentation [23]. Several expansions of the matched filter (MF) have been utilized; for example, the MF has been integrated with pulse-coupled neural networks, and the Otsu algorithm is then applied for segmentation [24]. It has also been suggested to improve the matched filter through the ant colony algorithm and through the Clifford matched filter [25, 26]. A zero-mean Gaussian matched filter based on the first-order derivative of the Gaussian has been introduced [27]. An upgraded version of the matched filter has been suggested through an optimization technique [28, 29]. The matched filter has also been upgraded through another optimization technique, the genetic algorithm [30]. Another recommended way to improve the matched filter is particle swarm optimization [31, 32].

Correspondingly, many expansions of the Gabor filter have been presented in the literature. A multi-scale, multi-directional Gabor wavelet transform has been recommended, with a feature vector consisting of the pixel intensity and the maximum response of the Gabor filter at various scales; afterwards, a classification algorithm known as linear minimum squared error (LMSE) is utilized [33]. A two-dimensional Gabor wavelet with a Gaussian mixture model has been presented to classify a pixel as a vessel or a nonvessel [34]. Two different approaches have been compared for blood vessel extraction: in the first approach, Gaussian filtering is employed for preprocessing, LoG filtering to enhance the retinal image, and adaptive thresholding for the segmentation task; in the second approach, unsharp masking is utilized for preprocessing, the Gabor wavelet to enhance the retinal image, and global thresholding for the segmentation task [35]. A technique has been suggested for noise reduction in the green channel of the retina by employing a low-pass radius filter, followed by the Gabor filter and a Gaussian fractional derivative for the enhancement of blood vessels [36]. The Gabor filter has been extended by integrating the Gabor, Frangi, and Gaussian filters with the top-hat transform [37]. A new technique has been introduced to design a set of 180 Gabor filters with variable scales and elongations by applying an optimization approach known as the competitive imperialism algorithm (CIA) for vessel segmentation [38]. A new hybrid scheme has been suggested in which existing techniques, namely multi-scale vessel enhancement (MSVE), morphological operations, the bottom-hat transform, and image fusion, are combined for blood vessel extraction [36]. The Gabor filter and the Hessian method have been used together to enhance the features, and K-means clustering is then utilized for vessel extraction [39]. An improved curvelet transform technique has been suggested to detect thick and thin blood vessels for extraction [40]. A hybrid method combining two existing techniques, lateral inhibition and differential evolution, has been used for vessel segmentation [41]. Existing supervised and unsupervised machine learning techniques have been utilized for vessel segmentation by employing image features [42]. To enhance the performance of the original Frangi filter, it has been combined with an existing probabilistic patch-based denoiser for vessel segmentation [43].

Recently, deep learning, a supervised approach, has been effectively employed for biomedical image processing, including retinal blood vessel segmentation. Wang et al. have suggested a context spatial U-Net for the segmentation of blood vessels [44, 45]. Chen et al. have discussed many deep learning approaches for vessel segmentation in their review paper, where better results are achieved [46]. Many machine learning algorithms are available in the literature for the detection of various diseases [47–52]. However, deep learning applications depend on enormously large databases. Moreover, annotated data sets are not readily available compared to other imaging fields. Annotation of medical data is a costly, complicated, and lingering process, and thus experts need more time. Additionally, annotation may not always be possible for rare health issues. Consequently, the availability of medical data is a significant obstacle for deep learning approaches. Although deep learning methods have achieved substantial success, a sound theory for deep learning algorithms is still absent. Deep learning models offer good results, and researchers keep using them without an understandable account of how the higher results are attained or of the working process. Another critical challenge is the legal standing of the black-box utility.
It can be a barrier because healthcare experts would not depend on it: if the results achieved are wrong, then who would be accountable? Because of this sensitive issue, hospitals may not be comfortable with the black box, that is, with how it arrived at a particular conclusion for the ophthalmologist.

Therefore, understanding deep learning techniques and how their hidden layers work for a given problem is a great challenge for researchers. Furthermore, if the source of the data changes, problems occur in the network response, which most researchers do not address. This is the influence of a change in the data acquisition device, because it may give rise to variations in image characteristics such as colour intensity levels or illumination. Thus, the absence of generalizability will harm the performance of deep learning networks. Accordingly, it can be concluded that deep learning networks deliver high performance only when they can rely on huge image databases. Consequently, they need large storage and memory together with long training times. The insufficient availability of large biomedical imaging data sets is another hurdle in developing a deep learning network [53, 54].

Consequently, for any model, whether supervised or unsupervised, the quality of the image has a great impact on its performance. A few factors, such as uneven illumination or camera position, can affect the image contrast, resulting in inadequate features in the image. Thus, image enhancement is a very important part of preprocessing, and the proper selection of enhancement techniques can improve the effectiveness of existing models to a great extent. As a consequence, it is essential to study the relationship between image enhancement and the existing models. Thus, in this work, an unsupervised approach, the traditional Gabor filter, is chosen, and its performance is improved by employing various enhancement techniques.

Six different enhancement algorithms are used in the proposed work. The advantages and disadvantages of a particular enhancement algorithm are difficult to describe, because reliable and consistent measures for evaluating the superiority of an enhanced image are lacking. Thus, the best integrated model is derived from the experimental results.

After an extensive study of the literature, it is noted that many existing techniques have been taken up for modification and improvement of their performance. Therefore, existing methods can still be considered for fundus image segmentation by upgrading them and boosting their computational ability. Gabor filters are found to be effectively suitable for the segmentation of retinal images because of their oriented features, as the vessels of the retina are linked and piecewise linear [34]. Furthermore, Gabor filters can be tuned to particular frequencies and can thus be adjusted to enhance the blood vessels. Although the literature shows that many techniques are available for vessel extraction utilizing various filters and enhancement methods, a lot can still be done to improve them further. Designing a single enhancement technique that generates a visual-artifact-free output is infeasible. Selecting a specific enhancement scheme is hard, since parameters for assessing output quality are not available. Furthermore, enhancement algorithms usually rely on authentic parameter selection. This prompted the recommendation of a robust enhanced Gabor filter obtained by integrating it with various enhancement techniques such as gamma corrected adaptively with distributed weights (GCADW) [55, 56], the homomorphic filter [57, 58], joint equalization of histogram (JEH) [59, 60], the unsharp masking filter, the adaptive unsharp masking filter [61, 62], and the particle swarm optimization (PSO) based unsharp masking filter [63, 64].

Additionally, it is noted in the literature survey that researchers have combined two or three existing techniques to improve the original approaches. In this work, we present an idea to improve blood vessel segmentation through an illumination-robust Gabor filter obtained by combining it with six enhancement techniques. The main contributions of the suggested approach are covered in a few steps as follows:

(a) Initially, the existing Gabor filter is used to enhance the fundus image, and hysteresis thresholding is applied for vessel segmentation.

(b) In the second step, different enhancement techniques are combined individually with the Gabor filter to make it illumination robust and to improve its performance, followed by hysteresis thresholding for vessel segmentation.

(c) In the final postprocessing step, a morphological cleaning operation is performed to remove undesired pixels that may lead to more false positives.

The suggested methods are assessed on the DRIVE and CHASE_DB1 data sets, and based on the results, the best integrated model is finalized.

Table 1 shows the summary of the advantages and disadvantages of various vessel segmentation methods.

2. Preliminary Concepts

2.1. Gabor Filter. Gabor filters are influential techniques that have been extensively utilized for multi-scale and multi-directional analysis in image processing. Because of their directional selectivity, that is, their ability to detect oriented features, and because they can be tuned to precise frequencies and scales, they act as low-level oriented edge discriminators. The features from the Gabor filter can be extracted from the original image as described below [34]:

G(x, y) = f(x, y) ⊗ P_f(x, y),   (1)

where f(x, y) is the original image and P_f(x, y) is the impulse response of the 2-D Gabor filter. The symbol ⊗ represents the convolution sum.
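As a minimal illustration of equation (1), the sketch below applies a bank of oriented Gabor kernels to the inverted green channel and keeps the strongest response at each pixel. It assumes OpenCV and NumPy; the kernel size, scale, wavelength, and number of orientations are illustrative choices, not the parameter values used in this work.

```python
import cv2
import numpy as np

def gabor_vessel_response(green_channel, n_orientations=12,
                          ksize=15, sigma=2.5, lambd=8.0, gamma=0.5):
    """Maximum response of an oriented Gabor filter bank (equation (1))."""
    # Vessels are darker than the background, so invert the 8-bit green channel
    img = cv2.bitwise_not(green_channel).astype(np.float32) / 255.0
    response = np.zeros_like(img)
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations          # filter orientation
        kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta,
                                    lambd, gamma, psi=0, ktype=cv2.CV_32F)
        kernel -= kernel.mean()                     # zero-mean kernel
        filtered = cv2.filter2D(img, cv2.CV_32F, kernel)
        response = np.maximum(response, filtered)   # keep the strongest orientation
    return response
```

The per-pixel maximum over orientations is what is later thresholded by hysteresis to obtain the baseline segmentation.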
2.2. Gamma Corrected Adaptively with Distributed Weights (GCADW). GCADW computes the cumulative distribution function (cdf) and applies a normalized gamma function to it, which yields a modified transformation curve wherever histogram statistics are available. Accordingly, substantial adjustment can be made through the lower gamma parameter. Thus, adaptive gamma correction (AGC) is formulated to process the intensities in consecutive increments of the original trend. AGC is defined as follows:

T(l) = l_{max} (l / l_{max})^{γ} = l_{max} (l / l_{max})^{1 − cdf(l)},   (2)

where l is the intensity of the input image, l_{max} is the maximum intensity of the input, and γ is the varying adaptive parameter. Low intensities can be increased substantially without decreasing the high intensities by applying the AGC technique. Additionally, a weighting distribution (WD) function is employed to modify the statistical histogram to some extent and thereby reduce adverse effects. The WD function is defined as follows:

pdf_w(l) = pdf_{max} ( (pdf(l) − pdf_{min}) / (pdf_{max} − pdf_{min}) )^{a},   (3)

where a is an adjustable parameter, pdf_{max} is the maximum value of the probability density function of the statistical histogram, and pdf_{min} is its minimum value. Considering (3), the revised cdf is obtained as follows:

cdf_w(l) = \sum_{l=0}^{l_{max}} pdf_w(l) / \sum pdf_w,   (4)

where the sum \sum pdf_w is computed as follows:

\sum pdf_w = \sum_{l=0}^{l_{max}} pdf_w(l).   (5)

In conclusion, the value of the parameter gamma derived from the cdf of (4) is altered as follows:

γ = 1 − cdf_w(l).   (6)
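The following sketch is one plausible reading of equations (2)–(6) for an 8-bit grayscale image, with the weighted cdf of (4) accumulated over the intensity levels; the exponent a = 0.5 is an illustrative assumption rather than a value taken from this work.

```python
import numpy as np

def gcadw(gray, a=0.5):
    """Adaptive gamma correction with weighting distribution, equations (2)-(6)."""
    gray = np.asarray(gray)                      # assumed uint8 input
    l_max = 255
    hist, _ = np.histogram(gray, bins=l_max + 1, range=(0, l_max))
    pdf = hist / hist.sum()

    # Weighting distribution, equation (3)
    pdf_w = pdf.max() * ((pdf - pdf.min()) / (pdf.max() - pdf.min())) ** a

    # Weighted cdf, equations (4)-(5), accumulated over intensity levels
    cdf_w = np.cumsum(pdf_w) / pdf_w.sum()

    # Per-intensity gamma, equation (6), and the AGC mapping, equation (2)
    gamma = 1.0 - cdf_w
    levels = np.arange(l_max + 1)
    lut = l_max * (levels / l_max) ** gamma
    return lut[gray].astype(np.uint8)
```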
2.3. Homomorphic Filter. Many approaches are available to enhance images utilizing a homomorphic filter [57]. Information hidden in dark regions can be recovered by equalizing the illumination variations across the image. An image can be denoted as a product of two components, as seen in the following equation:

I(x, y) = L(x, y) · R(x, y),   (7)

where L(x, y) is the illumination component and R(x, y) is the reflectance component of the original image. The filter function chosen for the homomorphic filter is as follows:

H(u, v) = (c_h − c_l) [ 1 − exp{ −k (P(u, v) / P_0)^2 } ] + c_l,   (8)

where k controls the steepness and is taken as a constant, P_0 is the cut-off frequency, P(u, v) is the distance measured from the origin of the Fourier transform, and c_l and c_h are the low- and high-frequency gains, respectively.
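A compact sketch of the homomorphic enhancement of equations (7) and (8) is given below. The gains c_l = 0.6 and c_h = 0.8 follow the values quoted later in Section 3.2, while the steepness k and the cut-off P_0 are illustrative assumptions.

```python
import numpy as np

def homomorphic_filter(gray, c_l=0.6, c_h=0.8, k=1.0, p0=30.0):
    """Homomorphic enhancement following equations (7)-(8)."""
    img = gray.astype(np.float64) / 255.0
    log_img = np.log1p(img)                        # multiplicative model -> additive model

    F = np.fft.fftshift(np.fft.fft2(log_img))      # frequency domain, origin centred
    rows, cols = gray.shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    P = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)  # distance from the origin, P(u, v)

    H = (c_h - c_l) * (1.0 - np.exp(-k * (P / p0) ** 2)) + c_l   # equation (8)

    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(H * F)))
    out = np.expm1(filtered)                       # undo the logarithm
    out = (out - out.min()) / (out.max() - out.min() + 1e-12)
    return (out * 255).astype(np.uint8)
```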
2.4. Joint Equalization of Histogram. Joint histogram equalization is an approach in which histogram modification and contrast enhancement of digital images are implemented [59]. The entire joint histogram equalization process is explained below.

By using a neighbouring window of Z^2, the gray value of pixel g(p, q) is calculated as defined below:

g(p, q) = (1 / (z × z)) \sum_{m=−k}^{k} \sum_{n=−k}^{k} f(p + m, q + n).   (9)

The joint histogram is as follows:

H = { h(a, b) | 0 ≤ a ≤ C − 1, 0 ≤ b ≤ C − 1 },   (10)

where the expression h(a, b) represents the number of occurrences of the gray level pair f(p, q) and g(p, q) at the corresponding spatial location (p, q) of the input image and its averaged version, respectively; it signifies the count function. Because a and b can take any value between 0 and C − 1, the number of feasible pixel pair combinations is C × C. Thus, the joint histogram H will comprise C × C entries.

By utilizing the count function, the cumulative distribution function can be obtained as follows:

CDF(p, q) = \sum_{m=0}^{i} \sum_{n=0}^{j} h(m, n).   (11)

The two-dimensional CDF value is utilized to produce the contrast-enhanced output pixel intensity. The equalized value of the intensity pair (p, q) in the output image can be obtained through the histogram equalization method as follows:

h_{eq}(p, q) = round( ((L − 1) / (MN − 1)) [ CDF(p, q) − CDF(p, q)_{min} ] ).   (12)
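Equations (9)–(12) can be sketched as follows, assuming SciPy for the local averaging window; the window size and the handling of the minimum CDF value are illustrative choices rather than settings taken from this work.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def joint_histogram_equalization(f, window=3, levels=256):
    """Joint equalization of histogram, equations (9)-(12)."""
    f = f.astype(np.int32)
    g = uniform_filter(f.astype(np.float64), size=window).astype(np.int32)  # eq. (9)

    # Joint histogram h(a, b) of the pairs (f, g), equation (10)
    h, _, _ = np.histogram2d(f.ravel(), g.ravel(),
                             bins=levels, range=[[0, levels], [0, levels]])

    # Two-dimensional cumulative distribution, equation (11)
    cdf2d = np.cumsum(np.cumsum(h, axis=0), axis=1)

    cdf = cdf2d[f, g]                              # CDF value of each (f, g) pair
    cdf_min = cdf[cdf > 0].min()
    mn = f.size
    heq = np.round((cdf - cdf_min) / (mn - 1) * (levels - 1))  # equation (12)
    return np.clip(heq, 0, levels - 1).astype(np.uint8)
```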
2.5. Unsharp Masking Filter. Local contrast enhancement can be performed using unsharp masking. This technique creates a mask of the original image utilizing a negative (blurred) image; afterwards, the original positive image is combined with the unsharp mask to produce an image that is less blurry than the original. Usually, a linear or nonlinear filter that amplifies the high-frequency components of a signal is said to be an unsharp masking filter.
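A minimal sketch of this idea is given below; the Gaussian smoothing scale is an arbitrary illustrative choice, and the gain k is kept inside the 0.2–0.7 range quoted later in Section 3.2, where the same operation is formalized in equations (25) and (26).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(f, k=0.5, sigma=2.0):
    """Basic unsharp masking: add a scaled high-frequency residual back to the image."""
    f = f.astype(np.float64)
    f_smooth = gaussian_filter(f, sigma=sigma)   # smoothed (negative-mask) version
    g = f - f_smooth                             # high-frequency edge image
    f_sharp = f + k * g                          # sharpened result
    return np.clip(f_sharp, 0, 255).astype(np.uint8)
```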
2.6. Adaptive Unsharp Masking Filter. … green, and blue components of the colour image, respectively.

Thus, to obtain gain adjustment on the detected edge, the following scheme is applied:

λ^{d}_{pq} = 0.5 [ 1 + tanh( 3 − 6 × (d_{uv} − 0.5) ) ],   (14)

where λ^{d}_{pq} is the gain factor defining the strength of the reconstructed edge d_{pq}. By multiplying the above two schemes, the complete gain adjustment scheme is obtained and described as follows:

λ_{pq} = λ^{g}_{pq} λ^{d}_{pq}.   (15)

Additionally, a measurement of sharpness is evaluated. It is computed from the neighbourhood pixel gradients as described below:

Ğ_{pq} = sqrt( Δx^{2}_{pq} + Δy^{2}_{pq} ),   (16)

G = (1 / N) \sum_{pq} Ğ_{pq},   (17)

where Δx_{pq} = g_{pq} − g_{p+1,q} and Δy_{pq} = g_{pq} − g_{p,q+1} represent the horizontal and vertical gradients across the image, respectively, and N is the total number of pixels.

Additionally, an image is evaluated by its colourfulness [65]. Capturing an object under uneven lighting conditions may deteriorate this measurement. The colourfulness is given as below:

C = σ_{RGYB} + 0.3 × μ_{RGYB},   (18)

where

σ_{RGYB} = sqrt( σ^{2}_{RG} + σ^{2}_{YB} ),   (19)

μ_{RGYB} = sqrt( μ^{2}_{RG} + μ^{2}_{YB} ).   (20)
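Equations (18)–(20) can be evaluated directly once the opponent channels are fixed; the sketch below assumes the common definitions RG = R − G and YB = 0.5(R + G) − B, which the text above does not state explicitly.

```python
import numpy as np

def colourfulness(rgb):
    """Colourfulness measure of equations (18)-(20)."""
    r, g, b = [rgb[..., i].astype(np.float64) for i in range(3)]
    rg = r - g                                # assumed opponent channel
    yb = 0.5 * (r + g) - b                    # assumed opponent channel
    sigma = np.hypot(rg.std(), yb.std())      # equation (19)
    mu = np.hypot(rg.mean(), yb.mean())       # equation (20)
    return sigma + 0.3 * mu                   # equation (18)
```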
Figure 1: Overall framework of the suggested approach. The original image is either passed directly to the Gabor filter or first through one of the enhancement blocks (GCADW, homomorphic filter, joint histogram equalisation, unsharp masking, adaptive unsharp masking, or PSO unsharp masking); hysteresis thresholding and postprocessing then produce the segmented image.
Figure 2: Images generated for retina 2 of the DRIVE data set by employing various enhancement techniques: (a) original, (b) green
channel, (c) Gabor enhanced, (d) GCADW enhanced, (e) homomorphic filter enhanced, (f ) JEH enhanced, (g) unsharp masking filter
enhanced, (h) adaptive unsharp masking filter enhanced, and (i) PSO unsharp masking filter enhanced.
Figure 3: Images generated for retina 4 of the DRIVE data set by employing various enhancement techniques: (a) original, (b) green
channel, (c) Gabor enhanced, (d) GCADW enhanced, (e) homomorphic filter enhanced, (f ) JEH enhanced, (g) unsharp masking filter
enhanced, (h) adaptive unsharp masking filter enhanced, and (i) PSO unsharp masking filter enhanced.
retina 5 of the CHASE_DB1 data set, respectively. Figures 2(c) and 3(c) illustrate the Gabor enhanced images of retina 2 and retina 4 of the DRIVE data set and retina 5 of the CHASE_DB1 data set, respectively. Afterward, hysteresis thresholding followed by a morphological cleaning operation is applied, and the segmented images are obtained. All the parameter values are chosen on an experimental basis.

3.2. Various Enhancement Techniques. In the subsequent step, investigations are carried out with the suggested approaches by integrating the enhancement methods with the Gabor filter. Generally, it is difficult to recommend one specific contrast enhancement method that produces an output free of visual artifacts. Moreover, there are no particular reliable measures available in the literature for verifying the quality of the output image. Therefore, choosing an appropriate algorithm to enhance the images is challenging. Accordingly, six enhancement methods are selected to improve the Gabor features. The algorithms of the suggested methods are explained below:

(i) GCADW enhancement is computed in three vital steps. The detailed mathematical computations of GCADW are described in (2)–(4) and (6). The steps of GCADW are summarized as follows:
Figure 4: Images generated for retina 5 of the CHASE_DB1 data set by employing various enhancement techniques: (a) original, (b) green
channel, (c) Gabor transformed, (d) GCADW enhanced, (e) homomorphic filter enhanced, (f ) JEH enhanced, (g) unsharp masking filter
enhanced, (h) adaptive unsharp masking filter enhanced, and (i) PSO unsharp masking filter enhanced.
(a) Consider the image that is to be enhanced
(b) Analyze the histogram of the image
(c) In the next step, employ the weighting distribution
(d) Finally, apply gamma correction and obtain the enhanced image

Figures 2(d) and 3(d) represent the GCADW enhanced images of retina 2 and retina 4 of the DRIVE data set and retina 5 of the CHASE_DB1 data set, respectively.

(ii) The second enhancement technique is the homomorphic filter. The steps of the homomorphic filter are described below:

(a) The multiplicative model described in (7) is converted to an additive one by applying the logarithm function
(b) Apply the Fourier transform to the retinal images to convert them into the frequency domain
(c) The transformed retinal images are processed through the homomorphic filter function described in (8)
(d) Take the inverse Fourier transform to obtain the homomorphic filtered enhanced retinal images

The condition λh > λl > 0 must be followed while selecting the values of the low- and high-frequency gains. Soft edges and detail information may be eliminated if too small a value is chosen for λl; on the contrary, the noise contained in the high frequencies may increase if a large value of λh is chosen. In this work, the values are chosen as λh = 0.8 and λl = 0.6, respectively. Figures 2(e), 3(e), and 4(e) represent the homomorphic filter enhanced images of retina 2 and retina 4 of the DRIVE data set and retina 5 of the CHASE_DB1 data set, respectively.

(iii) The JEH enhancement technique deals with the pair of intensity levels defined by the count function explained in (10), in which 256 × 256 is the order of the matrix. Utilizing (12), the equalized joint histogram is computed, and improved enhanced images are produced. The description of the JEH enhancement technique is as follows.

Figure 5(a) represents the intensities of an 8-bit grayscale subimage k of size 6 × 6. By utilizing (9), the average subimage M is obtained, and Figure 5(b) represents it. The size of the window is taken as three, because a larger window may blur the image, and the pixel pairs are generated according to location. For instance, the pixel pair at location (1, 1) of the input and average images has the values (111, 76). Among the pixel pairs, the minimum and maximum values specified by the CDF are (109, 81) and (167, 152), respectively. The joint equalized histogram value is achieved by (12). For example, the CDF of the (140, 139) pixel pair is 11. The equalized histogram value is calculated as follows:

h_{eq}(140, 139) = round( ((11 − 1) / 35) × 255 ) = round(0.285 × 255) = 72.   (24)

In the original subimage, the intensity value 140 is substituted by this equalized value at every occurrence of the pixel pair (140, 139). In the rest of the original subimage, for pixel pairs such as (140, 141), the value 140 is not substituted. In a similar manner, the rest of the equalized joint histogram values are computed. Figures 2(f) and 3(f) represent the JEH enhanced images of retina 2 and retina 4 of the DRIVE data set and retina 5 of the CHASE_DB1 data set, respectively.
(a) Subimage k:
111 121 149 167 109 149
151 138 136 147 115 121
121 138 149 127 139 148
120 140 125 117 125 122
113 120 165 143 137 132
140 115 142 115 133 137

(b) Average subimage M:
76 103 133 152 81 120
121 113 100 126 102 101
103 123 127 98 110 137
96 141 112 99 109 107
98 90 144 130 135 107
139 99 115 88 106 110
Figure 5: Description of joint equalization of histogram: (a) subimage representation and (b) average subimage representation.
(iv) The unsharp masking enhancement filter regulates the edge contrast and produces the impression of a more intense image. Unsharp masking thus produces an edge image g(m, n) from an input image f(m, n) as given below:

g(m, n) = f(m, n) − f_{smooth}(m, n),   (25)

where f_{smooth}(m, n) is a smoothed version of f(m, n). The final sharpened image obtained through unsharp masking is given as follows:

f_{sharp}(m, n) = f(m, n) + k · g(m, n),   (26)

where k is a scaling constant. Values of k vary between 0.2 and 0.7, with higher values providing greater amounts of sharpening.

Figures 2(g) and 3(g) illustrate the unsharp masking filter enhanced images of retina 2 and retina 4 of the DRIVE data set and retina 5 of the CHASE_DB1 data set, respectively.
(v) The adaptive unsharp masking enhancement approach enhances the quality of the image with regard to the information volume, sharpness, and colourfulness by using equations (17), (18), and (20), as described in Section 2.6. Figures 2(h), 3(h), and 4(h) represent the adaptive unsharp masking filter enhanced images of retina 2 and retina 4 of the DRIVE data set and retina 5 of the CHASE_DB1 data set, respectively.

(vi) The algorithm for designing the PSO-based unsharp masking filter enhancement technique is given below (a code sketch follows after this list):

(a) Read the RGB input image
(b) Convert the RGB colour space to the HSV colour space
(c) Set the PSO iteration count to zero
(d) Consider the kernel elements and the gain as a particle and initialize the particles
(e) Repeat
(f) A kernel is generated from each particle
(g) The unsharp masking filter operation is carried out
(h) Compute the entropy penalized by the over-range ratio
(i) Update the global best solution and the particle motion
(j) Particle positions are updated until the maximum iteration is reached
(k) Return the optimum solution, that is, the global best solution
(l) Finally, the edge extraction kernel and the augmentation gain factor are tuned using the PSO optimizer to yield contrast-enhanced images with minimum over-range artifacts

Figures 2(i) and 3(i) represent the PSO unsharp masking filter enhanced images of retina 2 and retina 4 of the DRIVE data set and retina 5 of the CHASE_DB1 data set, respectively.
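The steps above tune the edge-extraction kernel and the gain factor in HSV space; as a simplified, hedged stand-in, the sketch below uses a basic particle swarm to tune only a gain and a Gaussian scale of the unsharp mask on a single channel, with the entropy penalized by the over-range ratio as the fitness. The swarm size, inertia, acceleration coefficients, and search ranges are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def entropy_fitness(img, k, sigma, penalty=2.0):
    """Entropy of the sharpened image penalized by the fraction of over-range pixels."""
    sharp = img + k * (img - gaussian_filter(img, sigma))
    over = np.mean((sharp < 0) | (sharp > 255))           # over-range ratio
    hist, _ = np.histogram(np.clip(sharp, 0, 255), bins=256, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum() - penalty * over

def pso_unsharp(img, n_particles=15, n_iter=30, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Tune (gain, sigma) of the unsharp mask with a basic particle swarm."""
    rng = np.random.default_rng(seed)
    img = img.astype(np.float64)
    lo, hi = np.array([0.2, 0.5]), np.array([0.7, 4.0])   # search ranges for (k, sigma)
    pos = rng.uniform(lo, hi, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_fit = np.array([entropy_fitness(img, *p) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        fit = np.array([entropy_fitness(img, *p) for p in pos])
        better = fit > pbest_fit
        pbest[better], pbest_fit[better] = pos[better], fit[better]
        gbest = pbest[pbest_fit.argmax()].copy()
    k, sigma = gbest
    sharp = img + k * (img - gaussian_filter(img, sigma))
    return np.clip(sharp, 0, 255).astype(np.uint8), (k, sigma)
```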
Figures 4(a) and 4(b) represent the original image and the green channel image of retina 2 and retina 4 of the DRIVE data set and retina 5 of the CHASE_DB1 data set, respectively. Figure 4(c) represents the Gabor enhanced images of retina 2 and retina 4 of the DRIVE data set and retina 5 of the CHASE_DB1 data set. Figure 4(d) illustrates the GCADW enhanced images of retina 2 and retina 4 of the DRIVE data set and retina 5 of the CHASE_DB1 data set. Figure 4(f) represents the JEH enhanced images of retina 2 and retina 4 of the DRIVE data set and retina 5 of the CHASE_DB1 data set. Figure 4(g) illustrates the unsharp masking filter enhanced images of retina 2 and retina 4 of the DRIVE data set and retina 5 of the CHASE_DB1 data set. Figure 4(i) shows the PSO unsharp masking filter enhanced images of retina 2 and retina 4 of the DRIVE data set and retina 5 of the CHASE_DB1 data set.

Figures 6–8 display the images of retina 2 and retina 4 of the DRIVE database and retina 5 of the CHASE_DB1 database achieved from each enhancement technique integrated with the Gabor filter. From all the figures, it is distinctly noticeable that the PSO unsharp masking filter integrated with the Gabor filter generates a noise-free enhanced image in which both the thick and thin vessels are visible.

4. Results and Discussion

The proposed idea is analyzed and examined on the DRIVE (Digital Retinal Images for Vessel Extraction) and CHASE_DB1 (Child Heart and Health Study in England) databases. The DRIVE data set contains 20 coloured fundus images in each of the training and testing sets, an equivalent set of masks, and two manually segmented sets.
Figure 6: Images generated for retina 2 of the DRIVE data set by integrating the Gabor filter with different enhancement techniques:
(a) original Gabor transformed image, (b) GCADW integrated with the Gabor filter, (c) homomorphic filter integrated with the Gabor filter,
(d) JEH integrated with the Gabor filter, (e) unsharp masking filter integrated with the Gabor filter, (f ) adaptive unsharp masking filter
integrated with the Gabor filter, and (g) PSO unsharp masking filter integrated with the Gabor filter.
Figure 7: Images generated for retina 4 of the DRIVE data set by integrating Gabor filter with different enhancement techniques: (a) original
Gabor transformed image, (b) GCADW integrated with the Gabor filter, (c) homomorphic filter integrated with the Gabor filter, (d) JEH
integrated with the Gabor filter, (e) unsharp masking filter integrated with the Gabor filter, (f ) adaptive unsharp masking filter integrated
with the Gabor filter, and (g) PSO unsharp masking filter integrated with the Gabor filter.
The first manually segmented image, provided by the first ophthalmologist, is preserved as the ground truth image. In supervised models, the training data set is generally utilized for training the network. The test data set is utilized for the computation in this work, as the proposed method is an unsupervised method. The ground truth images of the test data set are utilized for analysis purposes. The CHASE_DB1 data set consists of ground truth images of the left and right eyes taken from 28 children. Comparisons between the segmented and ground truth images are verified using the metrics sensitivity (Sen), accuracy (Acc), and specificity (Sp).
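These three measures, defined formally in equation (27) below, can be computed directly from the binary vessel maps; a short sketch:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Sensitivity, accuracy, and specificity of a binary vessel map, equation (27)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)     # vessel pixels correctly detected
    tn = np.sum(~pred & ~truth)   # background pixels correctly rejected
    fp = np.sum(pred & ~truth)    # background wrongly marked as vessel
    fn = np.sum(~pred & truth)    # vessel pixels missed
    sen = tp / (tp + fn)
    acc = (tp + tn) / (tp + tn + fp + fn)
    sp = tn / (fp + tn)
    return sen, acc, sp
```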
Figure 8: Images generated for retina 5 of the CHASE_DB1 data set by integrating Gabor filter with different enhancement techniques:
(a) original Gabor transformed image, (b) GCADW integrated with the Gabor filter, (c) homomorphic filter integrated with the Gabor filter,
(d) JEH integrated with the Gabor filter, (e) unsharp masking filter integrated with the Gabor filter, (f ) adaptive unsharp masking filter
integrated with the Gabor filter, and (g) PSO unsharp masking filter integrated with the Gabor filter.
Table 2: Performance metrics of the original Gabor filter on the DRIVE database.

Fundus images Sen Acc Sp
FI01 0.677887 0.918439 0.942004
FI02 0.664339 0.925255 0.955022
FI03 0.672271 0.904431 0.930137
FI04 0.570534 0.934659 0.971549
FI05 0.639169 0.927534 0.957341
FI06 0.656184 0.921006 0.949561
FI07 0.657170 0.928000 0.955230
FI08 0.666138 0.914571 0.937958
FI09 0.686661 0.916338 0.936594
FI10 0.634372 0.932265 0.958980
FI11 0.634372 0.922985 0.953728
FI12 0.634372 0.918548 0.939569
FI13 0.634372 0.922872 0.956796
FI14 0.634372 0.912605 0.928476
FI15 0.634372 0.931861 0.951623
FI16 0.634372 0.918399 0.946777
FI17 0.634372 0.917223 0.943944
FI18 0.634372 0.915814 0.938357
FI19 0.634372 0.922488 0.944618
FI20 0.634372 0.925097 0.943015
Average value 0.643422 0.921519 0.947063

Note: FI denotes the fundus image number of the corresponding database.

Table 3: Performance metrics of the original Gabor filter on the CHASE_DB1 database.

Fundus images Sen Acc Sp
FI01 0.710047 0.904558 0.930882
FI02 0.682305 0.902526 0.926700
FI03 0.667443 0.916788 0.950170
FI04 0.699412 0.918843 0.941316
FI05 0.692741 0.901190 0.932334
FI06 0.682634 0.917574 0.939770
FI07 0.695286 0.904910 0.930149
FI08 0.695286 0.904910 0.930149
FI09 0.678926 0.921164 0.950061
FI10 0.691368 0.919804 0.937123
FI11 0.690519 0.904957 0.936737
FI12 0.691258 0.919311 0.917941
FI13 0.696819 0.910267 0.933915
FI14 0.692166 0.910005 0.932175
Average value 0.690443 0.911200 0.934958

Sen = TP / (TP + FN),   Acc = (TP + TN) / (TP + TN + FP + FN),   Sp = TN / (FP + TN),   (27)

where TP (true positive) denotes the correct identification of a vessel, TN (true negative) the correct identification of the background, FP (false positive) the incorrect identification of a vessel, and FN (false negative) the incorrect identification of the background.

The measure of the ability to verify the correct vessel pixels is known as sensitivity. In contrast, the measure of the ability to verify accurate nonvessel pixels is known as specificity, and the accuracy reflects the overall conformity of the segmentation result.
Table 4: Performance metrics of GCADW integrated with the Gabor filter on the DRIVE database.

Fundus images Sen Acc Sp
FI01 0.679450 0.938196 0.947585
FI02 0.674608 0.925476 0.954097
FI03 0.678351 0.929122 0.954675
FI04 0.673793 0.935613 0.970244
FI05 0.684191 0.929685 0.961264
FI06 0.643044 0.923909 0.954194
FI07 0.644136 0.938770 0.957396
FI08 0.658671 0.932624 0.947472
FI09 0.677910 0.923463 0.945119
FI10 0.630137 0.930834 0.957801
FI11 0.631023 0.924642 0.955479
FI12 0.676946 0.934908 0.945506
FI13 0.612728 0.921469 0.937949
FI14 0.684116 0.912605 0.928476
FI15 0.667044 0.930416 0.948405
FI16 0.620389 0.921463 0.951344
FI17 0.626346 0.937751 0.944616
FI18 0.664990 0.932448 0.943742
FI19 0.664718 0.930584 0.953729
FI20 0.696847 0.948388 0.945688
Average value 0.659471 0.930118 0.950239

Table 5: Performance metrics of GCADW integrated with the Gabor filter on the CHASE_DB1 database.

Fundus images Sen Acc Sp
FI01 0.715652 0.921977 0.945813
FI02 0.704742 0.917989 0.929782
FI03 0.695609 0.920585 0.926741
FI04 0.699744 0.890291 0.939458
FI05 0.693052 0.928501 0.938527
FI06 0.705556 0.915255 0.935383
FI07 0.708359 0.924627 0.934857
FI08 0.693451 0.911141 0.934172
FI09 0.696969 0.926043 0.953173
FI10 0.707692 0.924176 0.948024
FI11 0.708923 0.919948 0.941466
FI12 0.704802 0.923779 0.940930
FI13 0.713273 0.928199 0.934405
FI14 0.710256 0.927990 0.923165
Average value 0.704148 0.920035 0.937564

Table 6: Performance metrics of the homomorphic filter integrated with the Gabor filter on the DRIVE database.

Fundus images Sen Acc Sp
FI01 0.642459 0.910447 0.944741
FI02 0.634353 0.931709 0.944494
FI03 0.628193 0.924788 0.959878
FI04 0.626216 0.928484 0.952147
FI05 0.638631 0.937319 0.958194
FI06 0.649660 0.944162 0.952682
FI07 0.634286 0.939462 0.944454
FI08 0.641661 0.932704 0.947472
FI09 0.631430 0.925241 0.940213
FI10 0.634540 0.923662 0.944142
FI11 0.636221 0.946333 0.947808
FI12 0.631881 0.929728 0.950315
FI13 0.631964 0.926854 0.942059
FI14 0.648458 0.921683 0.948679
FI15 0.638458 0.921683 0.958679
FI16 0.639647 0.935233 0.933576
FI17 0.645630 0.911425 0.936851
FI18 0.631460 0.937980 0.940054
FI19 0.645855 0.921394 0.943233
FI20 0.644290 0.934790 0.943087
Average value 0.637764 0.929254 0.946630

Table 7: Performance metrics of the homomorphic filter integrated with the Gabor filter on the CHASE_DB1 database.

Fundus images Sen Acc Sp
FI01 0.644795 0.949065 0.936870
FI02 0.643583 0.928927 0.933554
FI03 0.636338 0.926789 0.936815
FI04 0.639947 0.917553 0.943018
FI05 0.648564 0.913205 0.947289
FI06 0.645292 0.913339 0.937429
FI07 0.655333 0.919822 0.934125
FI08 0.653901 0.905724 0.930070
FI09 0.650733 0.919650 0.952101
FI10 0.657497 0.905404 0.938351
FI11 0.648734 0.904013 0.938093
FI12 0.649968 0.914724 0.944650
FI13 0.640965 0.911713 0.937150
FI14 0.669859 0.914765 0.930609
Average value 0.648964 0.917478 0.938580
Tables 2 and 3 present the performance of blood vessel segmentation using the original Gabor filter in terms of Sen, Acc, and Sp for DRIVE and CHASE_DB1. The traditional Gabor filter delivers Sen, Acc, and Sp of 0.6434, 0.9215, and 0.9470, respectively, on the DRIVE database and 0.6904, 0.9112, and 0.9349, respectively, on the CHASE_DB1 database. Tables 4 and 5 summarize the performance of GCADW integrated with the Gabor filter for blood vessel segmentation. The integrated proposed method attains Sen, Acc, and Sp of 0.6594, 0.9301, and 0.9502, respectively, on the DRIVE database and 0.7041, 0.9200, and 0.9375, respectively, on the CHASE_DB1 database. Tables 6 and 7 summarize the performance of the homomorphic filter combined with the Gabor filter for blood vessel segmentation. The integrated proposed method attains Sen, Acc, and Sp of 0.6377, 0.9292, and 0.9466, respectively, on the DRIVE database and 0.6489, 0.9174, and 0.9385, respectively, on the CHASE_DB1 database. Tables 8 and 9 summarize the performance of JEH integrated with the Gabor filter for blood vessel segmentation. The integrated suggested approach accomplishes Sen, Acc, and Sp of 0.6846, 0.9505, and 0.9620, respectively, on the DRIVE database and 0.7365, 0.9411, and 0.9501, respectively, on the CHASE_DB1 database. Tables 10 and 11 summarize the performance of the unsharp masking filter integrated with the Gabor filter for blood vessel segmentation. The integrated recommended technique accomplishes Sen, Acc, and Sp of 0.6408, 0.9512, and 0.9528, respectively, on the DRIVE database and 0.6757, 0.9132, and 0.9334, respectively, on the CHASE_DB1 database. Tables 12 and 13 summarize the performance of the adaptive unsharp masking filter integrated with the Gabor filter for blood vessel segmentation. The integrated proposed method accomplishes Sen, Acc, and
Table 8: Performance metrics of JEH integrated with the Gabor filter on the DRIVE database.

Fundus images Sen Acc Sp
FI01 0.667269 0.948305 0.955836
FI02 0.679432 0.945257 0.961289
FI03 0.692271 0.954431 0.950137
FI04 0.680534 0.944659 0.971549
FI05 0.699169 0.937534 0.957341
FI06 0.686180 0.951006 0.969561
FI07 0.697170 0.948547 0.955238
FI08 0.696138 0.944571 0.957958
FI09 0.696661 0.956338 0.966594
FI10 0.634372 0.952265 0.958980
FI11 0.680312 0.962985 0.953728
FI12 0.696104 0.958548 0.959569
FI13 0.669814 0.952872 0.956796
FI14 0.732166 0.962605 0.968476
FI15 0.675489 0.951861 0.961623
FI16 0.682473 0.958399 0.976777
FI17 0.677388 0.947223 0.963944
FI18 0.673840 0.945814 0.968357
FI19 0.677834 0.942488 0.964618
FI20 0.699361 0.945097 0.963015
Average value 0.684698 0.950540 0.962060

Table 9: Performance metrics of JEH integrated with the Gabor filter on the CHASE_DB1 database.

Fundus images Sen Acc Sp
FI01 0.745752 0.945233 0.950308
FI02 0.712943 0.943283 0.941410
FI03 0.722359 0.938871 0.944343
FI04 0.734211 0.941279 0.945253
FI05 0.730888 0.930656 0.947073
FI06 0.744306 0.938812 0.953431
FI07 0.739790 0.944063 0.949192
FI08 0.747950 0.938944 0.949204
FI09 0.734789 0.946909 0.954734
FI10 0.748618 0.935006 0.944861
FI11 0.731055 0.946087 0.959027
FI12 0.746762 0.946433 0.956740
FI13 0.735172 0.944196 0.957555
FI14 0.736517 0.936948 0.949115
Average value 0.736508 0.941194 0.950160

Table 10: Performance metrics of the unsharp masking filter integrated with the Gabor filter on the DRIVE database.

Fundus images Sen Acc Sp
FI01 0.642745 0.959207 0.956494
FI02 0.634238 0.958333 0.950477
FI03 0.642644 0.949351 0.954470
FI04 0.638033 0.958249 0.953599
FI05 0.632061 0.958413 0.952844
FI06 0.649474 0.951926 0.956991
FI07 0.630450 0.942967 0.953198
FI08 0.637316 0.948850 0.957821
FI09 0.648731 0.953721 0.950917
FI10 0.639051 0.941742 0.940817
FI11 0.640097 0.958853 0.962328
FI12 0.632187 0.946134 0.952573
FI13 0.659457 0.958680 0.934602
FI14 0.645615 0.951776 0.956392
FI15 0.636682 0.959714 0.957360
FI16 0.644005 0.955084 0.956033
FI17 0.637276 0.949379 0.954395
FI18 0.648286 0.942240 0.959790
FI19 0.633002 0.925624 0.948454
FI20 0.645038 0.955596 0.948340
Average value 0.640819 0.951291 0.952894

Table 11: Performance metrics of the unsharp masking filter integrated with the Gabor filter on the CHASE_DB1 database.

Fundus images Sen Acc Sp
FI01 0.641601 0.914316 0.933804
FI02 0.670786 0.924334 0.932685
FI03 0.659763 0.920059 0.931729
FI04 0.672605 0.914752 0.936606
FI05 0.672497 0.915543 0.939502
FI06 0.675362 0.911253 0.935985
FI07 0.669027 0.902999 0.922259
FI08 0.698382 0.908980 0.934595
FI09 0.686771 0.912538 0.931902
FI10 0.698150 0.914770 0.936312
FI11 0.680958 0.915039 0.936026
FI12 0.698290 0.903039 0.931026
FI13 0.669055 0.913874 0.932498
FI14 0.667872 0.914562 0.933582
Average value 0.675794 0.913289 0.933460
Table 12: Performance metrics of the adaptive unsharp masking filter integrated with the Gabor filter on the DRIVE database.

Fundus images Sen Acc Sp
FI01 0.728404 0.952835 0.958943
FI02 0.724877 0.947196 0.962225
FI03 0.702186 0.953249 0.964470
FI04 0.716533 0.950525 0.968415
FI05 0.711223 0.958760 0.958583
FI06 0.719649 0.950552 0.955150
FI07 0.717170 0.947410 0.965238
FI08 0.736097 0.954036 0.958023
FI09 0.718266 0.953270 0.960487
FI10 0.728316 0.950265 0.956447
FI11 0.730081 0.953082 0.956807
FI12 0.723043 0.956661 0.958190
FI13 0.734215 0.950947 0.968519
FI14 0.707463 0.959728 0.964880
FI15 0.729155 0.940718 0.962421
FI16 0.710953 0.955808 0.967056
FI17 0.720151 0.950332 0.952475
FI18 0.713341 0.952477 0.962521
FI19 0.715501 0.951121 0.954244
FI20 0.728393 0.948643 0.954857
Average value 0.720750 0.951880 0.960497

Table 14: Performance metrics of the PSO unsharp masking filter integrated with the Gabor filter on the DRIVE database.

Fundus images Sen Acc Sp
FI01 0.746162 0.965569 0.979961
FI02 0.748538 0.958730 0.977710
FI03 0.730468 0.958861 0.975244
FI04 0.748574 0.955770 0.987287
FI05 0.748794 0.959573 0.980304
FI06 0.749458 0.954015 0.980577
FI07 0.734503 0.956558 0.979877
FI08 0.749806 0.959388 0.968673
FI09 0.756975 0.962064 0.988955
FI10 0.745590 0.963892 0.978787
FI11 0.741653 0.957203 0.979945
FI12 0.749106 0.957582 0.982605
FI13 0.749206 0.959318 0.977894
FI14 0.745921 0.956851 0.987419
FI15 0.756153 0.958874 0.968376
FI16 0.758354 0.958772 0.980668
FI17 0.748113 0.958524 0.978776
FI18 0.750263 0.958339 0.981894
FI19 0.753979 0.967158 0.987246
FI20 0.753445 0.959777 0.980127
Average value 0.748200 0.959340 0.980110
Figure 9: Flowchart of the final algorithm of the suggested approach: read the input image, obtain the set of particles, compute the entropy, update the personal and global best particles, repeat until the last iteration, form the new image from the best particle, apply hysteresis thresholding, and extract the vessels.
Figure 10: Segmented images achieved for the various integrated techniques for retina 2 of the DRIVE data set: (a) ground truth image, (b) original Gabor transformed image, (c) Gabor integrated with GCADW, (d) Gabor integrated with the homomorphic filter, (e) Gabor integrated with JEH, (f) Gabor integrated with the unsharp masking filter, (g) Gabor integrated with the adaptive unsharp masking filter, and (h) Gabor integrated with the PSO unsharp masking filter.
Figure 11: Segmented images achieved for the various integrated techniques for retina 4 of the DRIVE data set: (a) ground truth image, (b) original Gabor transformed image, (c) Gabor integrated with GCADW, (d) Gabor integrated with the homomorphic filter, (e) Gabor integrated with JEH, (f) Gabor integrated with the unsharp masking filter, (g) Gabor integrated with the adaptive unsharp masking filter, and (h) Gabor integrated with the PSO unsharp masking filter.
filter yield better performance measures, whether with regard to Acc, Sen, or Sp. Figure 9 illustrates the flowchart of the final algorithm of the suggested approach. The segmented images of retina 2 and retina 4 of the DRIVE database and retina 5 of the CHASE_DB1 database achieved from the different suggested approaches are represented in Figures 10–12, respectively. The explanation of both algorithms is as follows. First, read the colour image and extract the green channel of the image. Next, initialize all the parameters of the PSO-based
Figure 12: Segmented images achieved for the various integrated techniques for retina 5 of the CHASE_DB1 data set: (a) ground truth image, (b) original Gabor transformed image, (c) Gabor integrated with GCADW, (d) Gabor integrated with the homomorphic filter, (e) Gabor integrated with JEH, (f) Gabor integrated with the unsharp masking filter, (g) Gabor integrated with the adaptive unsharp masking filter, and (h) Gabor integrated with the PSO unsharp masking filter.
Table 16: Average results on the DRIVE and CHASE_DB1 data sets compared with other approaches.

Approaches — DRIVE (Sen, Acc, Sp) — CHASE_DB1 (Sen, Acc, Sp)
Cinsdikici and Aydın [26] — 0.929 — — — —
Zhang et al. [27] 0.712 0.938 — — — —
Rawi et al. [29] — 0.953 — — — —
Rawi and Karajeh [30] — 0.942 — — — —
Sreejini and Govindan [31] 0.713 0.963 0.986 — — —
Chaudhari et al. [32] 0.867 — — — — —
Soares et al. [33] — 0.946 — — — —
Shabbir et al. [34] — 0.950 — — — —
Aguirre-Ramos et al. [35] 0.785 0.950 0.966 — — —
Yavuz and Kose [36] 0.677 0.957 0.978 — — —
Farokhian et al. [37] 0.693 0.939 0.979 — — —
Sundaram et al. [38] 0.690 0.930 0.940 0.710 0.950 0.960
Dash et al. [36] 0.756 0.952 0.981 0.770 0.950 0.970
Primitivo et al. [41] 0.846 0.961 0.970 — — —
Hashemzadeh and Azar [42] 0.783 0.953 0.980 0.773 0.962 0.984
Khawaja et al. [44] 0.802 0.956 0.973 — — —
Wang et al. [45] 0.807 0.956 0.978 0.842 0.970 0.982
Original Gabor filter 0.643 0.921 0.947 0.690 0.911 0.934
Proposed GCADW integrated with the Gabor filter 0.659 0.930 0.950 0.704 0.920 0.937
Proposed homomorphic filter integrated with the Gabor filter 0.637 0.929 0.946 0.648 0.917 0.938
Proposed JHE integrated with the Gabor filter 0.684 0.950 0.962 0.736 0.941 0.950
Proposed unsharp masking filter integrated with the Gabor filter 0.640 0.951 0.952 0.675 0.913 0.933
Proposed adaptive unsharp masking filter integrated with the Gabor filter 0.720 0.951 0.960 0.735 0.953 0.974
Proposed PSO unsharp masking filter integrated with the Gabor filter 0.748 0.959 0.980 0.759 0.961 0.984
unsharp masking filter. Utilizing the global best solution, the unsharp mask image is generated. In the next step, the Gabor filter is applied, and the maximum Gabor enhanced image is generated. In the last step, hysteresis thresholding is applied with morphological cleaning for the vessel extraction.
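A minimal sketch of this final step is given below, assuming scikit-image; the hysteresis thresholds and the minimum object size used for cleaning are illustrative assumptions.

```python
import numpy as np
from skimage.filters import apply_hysteresis_threshold
from skimage.morphology import remove_small_objects

def segment_vessels(gabor_response, low=0.05, high=0.15, min_size=50):
    """Hysteresis thresholding of the Gabor response followed by morphological cleaning."""
    resp = (gabor_response - gabor_response.min()) / (np.ptp(gabor_response) + 1e-12)
    binary = apply_hysteresis_threshold(resp, low, high)    # weak pixels kept only if connected to strong ones
    cleaned = remove_small_objects(binary, min_size=min_size)  # drop isolated specks (false positives)
    return cleaned
```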
18 Computational Intelligence and Neuroscience
accuracy, and specificity for models presented in Cinsdikici undesirable effects that might be led to the loss of valuable
and Aydın [26], Zhang et al. [27], Rawi et al. [29], Rawi and image information.
Karajeh [30], Sreejini and Govindan [31], Chaudhari et al. For future studies, we suggest considering different il-
[32], Soares et al. [33], Shabbir et al. [34], Aguirre-Ramos lumination normalization techniques such as small-scale
et al. [35], Yavuz and Kose [36], Farokhian et al. [37], retinex (SSR), multi-scale retinex (MSR), isotropic illumi-
Sundaram et al. [38], Dash et al. [36], Primitivo et al. [41], nation, wavelet normalization, and so on combined with
Hashemzadeh and Azar [42], Khawaja et al. [44], and Wang deep learning approaches for vessel segmentation.
et al. [45]. The results of all the proposed models are
summarized in Table 15. After a comprehensive study of Data Availability
Table 16, it concludes that among all the suggested ap-
proaches, the PSO unsharp masking filter integrated with the Publicly available data are used in this study.
Gabor filter delivers the highest accuracy, that is, 0.959 for
the DRIVE data set and 0.961 for the CHASE_DB1 data set. Conflicts of Interest
Furthermore, it is observed that the suggested method de-
livers better results than many state-of –art-of-methods and The authors declare that there are no conflicts of interest.
outperforms the existing Gabor filter technique.
Acknowledgments
5. Conclusions Jana Shafi would like to thank the Deanship of Scientific
Research, Prince Sattam bin Abdul Aziz University, for
In this work, six enhancement techniques are individually
supporting this work.
combined with the Gabor filter to improve the performance
of the standard Gabor filter. The proposed techniques are
assessed using DRIVE and CHASE_DB1 data sets. All to- References
gether six algorithms are recommended for the improve- [1] M. D. Abramoff, M. K. Garvin, and M. Sonka, “Retinal im-
ment of the traditional Gabor filter. The parameters Sen, aging and image analysis,” IEEE Reviews in Biomedical En-
Acc, and Sp are taken into account in order to determine the gineering, vol. 3, pp. 169–208, 2010.
best algorithm. Experimental results are compared with [2] G. Liew and J. J. Wang, “Retinal vascular signs: a window to
state-of-the-art models. It is observed that the homomorphic the heart?” Revista Española de Cardiologı́a, vol. 64, no. 6,
filter and unsharp masking filter integrated with the Gabor pp. 515–521, 2011.
filter underperforms compared to the standard Gabor filter [3] K. Narasimhan, V. C. Neha, and K. Vijayarekha, “Hyper-
in terms of sensitivity on the DRIVE database. Similarly, tensive retinopathy diagnosis from fundus images by esti-
homomorphic and unsharp masking filters combined with mation of Avr,” Procedia Engineering, vol. 38, pp. 980–993,
2012.
the Gabor filter underperform the standard Gabor filter in all
[4] B. Al-Diri, A. Hunter, and D. Steel, “An active contour model
performance measures on the CHASE_DB1 database. The for segmenting and measuring retinal vessels,” IEEE Trans-
best results are attained with a PSO unsharp masking filter actions on Medical Imaging, vol. 28, no. 9, pp. 1488–1497,
with the Gabor filter by delivering an average value of Sen, 2009.
Acc, and Sp of 0.748, 0.959, and 0.9801 on the DRIVE data [5] S. Chaudhuri, S. Chatterjee, N. Katz, M. Nelson, and
set, respectively, and 0.759, 0.961, and 0.984 on the CHA- M. Goldbaum, “Detection of blood vessels in retinal images
SE_DB1 data set, respectively. Therefore, it is inferred that using two-dimensional matched filters,” IEEE Transactions on
adding different enhancement techniques before Gabor filter Medical Imaging, vol. 8, no. 3, pp. 263–269, 1989.
boosts the performance of the traditional Gabor filter and [6] S. Roy, T. D. Whitehead, S. Li et al., “Co-clinical FDG-PET
also improves the accuracy, specifically with respect to the radiomic signature in predicting response to neoadjuvant
chemotherapy in triple-negative breast cancer,” European
tiny vessels.
Journal of Nuclear Medicine and Molecular Imaging, vol. 49,
Consequently, it is observed that though deep learning, a no. 2, pp. 550–562, Jan 2022.
supervised approach is actively implemented for blood [7] A. D. Hoover, V. Kouznetsova, and M. Goldbaum, “Locating
vessel extraction in recent research and achieving better blood vessels in retinal images by piecewise threshold probing
results; still, the unsupervised traditional methods can be of a matched filter response,” IEEE Transactions on Medical
enhanced to achieve precise vessel segmentation. Also, the Imaging, vol. 19, no. 3, pp. 203–210, 2000.
results of the suggested approach that is an unsupervised [8] J. Staal, M. D. Abramoff, M. Niemeijer, M. A. Viergever, and
approach outperform many state-of-the-art methods that B. Van Ginneken, “Ridge-based vessel segmentation in color
are coming under the group of unsupervised approaches. images of the retina,” IEEE Transactions on Medical Imaging,
Moreover, it will enable new practical applications, vol. 23, no. 4, pp. 501–509, 2004.
where analysis of low-contrast images in real time is re- [9] S. Roy and K. I. Shoghi, “Computer-aided tumor segmen-
tation from T2-weighted MR images of patients derived tu-
quired, for example, robotic microsurgery of the eye.
mor xenografts,” in Image Analysis and Recognition. ICAR
A drawback of the suggested model is that even though 2019, F. Karray, A. Campilho, and A. Yu, Eds., vol. 11663,
six enhancement techniques are combined with the Gabor Springer, Cham, 2019.
filter, but only one integrated model is able to perform better [10] X. Xiaoyi Jiang and D. Mojon, “Adaptive local thresholding by
as compared to the other integrated models. This is because verification-based multithreshold probing with application to
enhancement of certain features might be accompanied by vessel detection in retinal images,” IEEE Transactions on
Computational Intelligence and Neuroscience 19
B. van Ginneken, "Ridge-based vessel segmentation in color images of the retina," IEEE Transactions on Medical Imaging, vol. 23, no. 4, pp. 501–509, 2004.
[9] S. Roy and K. I. Shoghi, "Computer-aided tumor segmentation from T2-weighted MR images of patient-derived tumor xenografts," in Image Analysis and Recognition. ICIAR 2019, F. Karray, A. Campilho, and A. Yu, Eds., vol. 11663, Springer, Cham, 2019.
[10] X. Jiang and D. Mojon, "Adaptive local thresholding by verification-based multithreshold probing with application to vessel detection in retinal images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 1, pp. 131–137, 2003.
[11] G. Azzopardi, N. Strisciuglio, M. Vento, and N. Petkov, "Trainable COSFIRE filters for vessel delineation with application to retinal images," Medical Image Analysis, vol. 19, no. 1, pp. 46–57, 2015.
[12] T. Mapayi, S. Viriri, and J.-R. Tapamo, "Comparative study of retinal vessel segmentation based on global thresholding techniques," Computational and Mathematical Methods in Medicine, vol. 2015, Article ID 895267, 15 pages, 2015.
[13] N. Strisciuglio, G. Azzopardi, M. Vento, and N. Petkov, "Multiscale blood vessel delineation using B-COSFIRE filters," in Proceedings of the International Conference on Computer Analysis of Images and Patterns, pp. 300–312, Valletta, Malta, September 2015.
[14] S. Roy, T. D. Whitehead, J. D. Quirk et al., "Optimal co-clinical radiomics: sensitivity of radiomic features to tumour volume, image noise and resolution in co-clinical T1-weighted and T2-weighted magnetic resonance imaging," EBioMedicine, vol. 59, 2020.
[15] Y. Rajput, R. Manza, M. Patwari, N. Deshpande, and M. Jalgaon, "Retinal blood vessels extraction using 2D median filter," in Proceedings of the Third National Conference on Advances in Computing, pp. 58–59, Chennai, Tamil Nadu, India, March 2013.
[16] Maison, T. Lestari, and A. Luthfi, "Retinal blood vessel segmentation using Gaussian filter," Journal of Physics: Conference Series, vol. 1376, no. 1, pp. 012023–012028, 2019.
[17] M. Malarvel and S. R. Nayak, "Edge and region segmentation in high-resolution aerial images using improved kernel density estimation: a hybrid approach," Journal of Intelligent and Fuzzy Systems, vol. 39, no. 1, pp. 543–560, 2020.
[18] J. Kaur and H. P. Sinha, "Automated detection of retinal blood vessels in diabetic retinopathy using Gabor filter," Int. J. of Comp. Sc. and Net., vol. 4, pp. 109–116, 2012.
[19] X.-R. Bao, X. Ge, L.-H. She, and S. Zhang, "Segmentation of retinal blood vessels based on cake filter," BioMed Research International, vol. 2015, Article ID 137024, 11 pages, 2015.
[20] S. Chatterjee, R. K. Dutta, D. Ganguly, K. Chatterjee, and S. Roy, "Bengali handwritten character classification using transfer learning on deep convolutional network," in Intelligent Human Computer Interaction. IHCI 2019, U. Tiwary and S. Chaudhury, Eds., vol. 11886, Springer, Cham, 2019.
[21] B. Kochner, D. Schuhmann, M. Michaelis, G. Mann, and K.-H. Englmeier, "Course tracking and contour extraction of retinal vessels from color fundus photographs: most efficient use of steerable filters for model-based image analysis," in Proceedings of Medical Imaging 1998: Image Processing, pp. 755–761, San Diego, CA, United States, June 1998.
[22] F. Sabaz and U. Atila, "ROI detection and vessel segmentation in retinal image," The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XLII-4/W6, pp. 85–89, 2017.
[23] S. Dash and G. Sahu, "Retinal blood vessel segmentation by employing various upgraded median filters," in Proceedings of the IEEE International Conference on Intelligent Systems and Green Technology, pp. 35–39, Visakhapatnam, AP, June 2019.
[24] C. Yao and H.-J. Chen, "Automated retinal blood vessels segmentation based on simplified PCNN and fast 2D-Otsu algorithm," Journal of Central South University of Technology, vol. 16, no. 4, pp. 640–646, 2009.
[25] S. Roy, A. Mitra, S. Roy, and S. K. Setua, "Blood vessel segmentation of retinal image using Clifford matched filter and Clifford convolution," Multimedia Tools and Applications, vol. 78, no. 24, pp. 34839–34865, 2019.
[26] M. G. Cinsdikici and D. Aydın, "Detection of blood vessels in ophthalmoscope images using MF/ant (matched filter/ant colony) algorithm," Computer Methods and Programs in Biomedicine, vol. 96, no. 2, pp. 85–95, 2009.
[27] B. Zhang, L. Zhang, L. Zhang, and F. Karray, "Retinal vessel extraction by matched filter with first-order derivative of Gaussian," Computers in Biology and Medicine, vol. 40, no. 4, pp. 438–445, 2010.
[28] S. Roy, D. Bhattacharyya, S. K. Bandyopadhyay, and T.-H. Kim, "An iterative implementation of level set for precise segmentation of brain tissues and abnormality detection from MR images," IETE Journal of Research, vol. 63, no. 6, pp. 769–783, 2017.
[29] M. Al-Rawi, M. Qutaishat, and M. Arrar, "An improved matched filter for blood vessel detection of digital retinal images," Computers in Biology and Medicine, vol. 37, no. 2, pp. 262–267, 2007.
[30] M. Al-Rawi and H. Karajeh, "Genetic algorithm matched filter optimization for automated detection of blood vessels from digital retinal images," Computer Methods and Programs in Biomedicine, vol. 87, no. 3, pp. 248–253, 2007.
[31] K. S. Sreejini and V. K. Govindan, "Improved multiscale matched filter for retina vessel segmentation using PSO algorithm," Egyptian Informatics Journal, vol. 16, no. 3, pp. 253–260, 2015.
[32] H. P. Chaudhari, A. D. Rahulkar, and C. Y. Patil, "Segmentation of retinal vessels by the use of Gabor wavelet and linear mean squared error classifier," Int. J. Emerg. Res. Tech., vol. 2, pp. 119–125, 2014.
[33] J. V. B. Soares, J. J. G. Leandro, R. M. Cesar Jr., H. F. Jelinek, and M. J. Cree, "Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification," IEEE Transactions on Medical Imaging, vol. 25, no. 9, pp. 1214–1222, 2006.
[34] S. Shabbir, A. Tariq, and M. U. Akram, "A comparison and evaluation of computerized methods for blood vessel enhancement and segmentation in retinal images," International Journal of Future Computer and Communication, vol. 2, pp. 600–603, 2013.
[35] H. Aguirre-Ramos, J. G. Avina-Cervantes, I. Cruz-Aceves, J. Ruiz-Pinales, and S. Ledesma, "Blood vessel segmentation in retinal fundus images using Gabor filters, fractional derivatives, and expectation maximization," Applied Mathematics and Computation, vol. 339, pp. 568–587, 2018.
[36] Z. Yavuz and C. Köse, "Blood vessel extraction in color retinal fundus images with enhancement filtering and unsupervised classification," Journal of Healthcare Engineering, vol. 2017, Article ID 4897258, 12 pages, 2017.
[37] F. Farokhian, C. Yang, H. Demirel, S. Wu, and I. Beheshti, "Automatic parameters selection of Gabor filters with the imperialism competitive algorithm with application to retinal vessel segmentation," Biocybernetics and Biomedical Engineering, vol. 37, no. 1, pp. 246–254, 2017.
[38] R. Sundaram, R. Ks, P. Jayaraman, and V. B, "Extraction of blood vessels in fundus images of retina through hybrid segmentation approach," Mathematics, vol. 7, no. 2, pp. 169–217, 2019.
[39] S. Dash, S. Verma, Kavita et al., "A hybrid method to enhance thick and thin vessels for blood vessel segmentation," Diagnostics, vol. 11, no. 11, p. 2017, 2021.
[40] Y. Dagli, S. Choksi, and S. Roy, "Prediction of two year survival among patients of non-small cell lung cancer," in Computer Aided Intervention and Diagnostics in Clinical and Medical Images, J. Peter, S. Fernandes, C. Eduardo Thomaz, and S. Viriri, Eds., vol. 31, pp. 169–177, Springer, Cham, Switzerland, 2019.
[41] D. Primitivo, R. Alma, C. Erik et al., "A hybrid method for blood vessel segmentation in images," Biocybernetics and Biomedical Engineering, vol. 39, no. 3, pp. 814–824, 2019.
[42] M. Hashemzadeh and B. Adlpour Azar, "Retinal blood vessel extraction employing effective image features and combination of supervised and unsupervised machine learning methods," Artificial Intelligence in Medicine, vol. 95, pp. 1–15, 2019.
[43] V. Anand, S. Gupta, D. Koundal, S. R. Nayak, P. Barsocchi, and A. K. Bhoi, "Modified U-net architecture for segmentation of skin lesion," Sensors, vol. 22, no. 3, p. 867, 2022.
[44] A. Khawaja, T. M. Khan, K. Naveed, S. S. Naqvi, N. U. Rehman, and S. Junaid Nawaz, "An improved retinal vessel segmentation framework using Frangi filter coupled with the probabilistic patch based denoiser," IEEE Access, vol. 7, pp. 164344–164361, 2019.
[45] B. Wang, S. Wang, S. Qiu, W. Wei, H. Wang, and H. He, "CSU-net: a context spatial U-net for accurate blood vessel segmentation in fundus images," IEEE Journal of Biomedical and Health Informatics, vol. 25, no. 4, pp. 1128–1138, 2021.
[46] C. Chen, J. H. Chuah, R. Ali, and Y. Wang, "Retinal vessel segmentation using deep learning: a review," IEEE Access, vol. 9, pp. 111985–112004, 2021.
[47] H. Kriplani, B. Patel, and S. Roy, "Prediction of chronic kidney diseases using deep artificial neural network technique," in Computer Aided Intervention and Diagnostics in Clinical and Medical Images, J. Peter, S. Fernandes, C. Eduardo Thomaz, and S. Viriri, Eds., vol. 31, pp. 179–187, Springer, Cham, 2019.
[48] P. N. Srinivasu, J. G. SivaSai, M. F. Ijaz, A. K. Bhoi, W. Kim, and J. J. Kang, "Classification of skin disease using deep learning neural networks with MobileNet V2 and LSTM," Sensors, vol. 21, no. 8, p. 2852, 2021.
[49] M. F. Ijaz, M. Attique, and Y. Son, "Data-driven cervical cancer prediction model with outlier detection and over-sampling methods," Sensors, vol. 20, no. 10, p. 2809, 2020.
[50] M. Malarvel and S. R. Nayak, "Region grow using fuzzy automated seed selection for weld defect segmentation in x-radiography image," in Proceedings of the International Conference on Artificial Intelligence in Manufacturing & Renewable Energy, SSRN, Elsevier, Bhubaneswar, India, 2019.
[51] P. Naga Srinivasu, S. Ahmed, A. Alhumam, A. Bhoi Kumar, and M. Fazal Ijaz, "An AW-HARIS based automated segmentation of human liver using CT images," Computers, Materials & Continua, vol. 69, no. 3, pp. 3303–3319, 2021.
[52] A. Vulli, P. N. Srinivasu, M. S. K. Sashank, J. Shafi, J. Choi, and M. F. Ijaz, "Fine-tuned DenseNet-169 for breast cancer metastasis prediction using FastAI and 1-cycle policy," Sensors, vol. 22, no. 8, p. 2988, 2022.
[53] I. Rizwan I Haque and J. Neubert, "Deep learning approaches to biomedical image segmentation," Informatics in Medicine Unlocked, vol. 18, 2020.
[54] M. Sood, S. Verma, V. K. Panchal, and Kavita, "Optimal path planning using swarm intelligence based hybrid techniques," Journal of Computational and Theoretical Nanoscience, vol. 16, no. 9, pp. 3717–3727, 2019.
[55] S.-C. Huang, F.-C. Cheng, and Y.-S. Chiu, "Efficient contrast enhancement using adaptive Gamma correction with weighting distribution," IEEE Transactions on Image Processing, vol. 22, no. 3, pp. 1032–1041, 2013.
[56] S. Rani, D. Koundal, Kavita, M. F. Ijaz, M. Elhoseny, and M. I. Alghamdi, "An optimized framework for WSN routing in the context of Industry 4.0," Sensors, vol. 21, no. 19, p. 6474, 2021.
[57] S. Dash, U. R. Jena, and M. R. Senapati, "Homomorphic normalization-based descriptors for texture classification," Arabian Journal for Science and Engineering, vol. 43, no. 8, pp. 4303–4313, 2018.
[58] L. Gaur, G. Singh, A. Solanki et al., "Disposition of youth in predicting sustainable development goals using the neuro-fuzzy and random forest algorithms," Hum. Cent. Comput. and Inf. Sci., vol. 11, pp. 1–19, 2021.
[59] S. Agrawal, R. Panda, P. K. Mishro, and A. Abraham, "A novel joint histogram equalization based image contrast enhancement," Journal of King Saud University - Computer and Information Sciences, vol. 34, no. 4, pp. 1172–1182, 2022.
[60] M. Kaur, S. Verma, and Kavita, "Flying ad-hoc network (FANET): challenges and routing protocols," Journal of Computational and Theoretical Nanoscience, vol. 17, no. 6, pp. 2575–2581, 2020.
[61] S. C. F. Lin, C. Y. Wong, G. Jiang et al., "Intensity and edge based adaptive unsharp masking filter for color image enhancement," Optik, vol. 127, no. 1, pp. 407–414, 2016.
[62] T. Sharma, S. Verma, and Kavita, "Prediction of heart disease using Cleveland dataset: a machine learning approach," Int. J. Rec. Res. Asp., vol. 4, pp. 17–21, 2017.
[63] N. Kwok and H. Shi, "Design of unsharp masking filter kernel and gain using particle swarm optimization," in Proceedings of the International Congress on Image and Signal Processing, pp. 217–222, Dalian, China, October 2014.
[64] G. Ghosh, Kavita, D. Anand et al., "Secure surveillance systems using partial-regeneration-based non-dominated optimization and 5D-chaotic map," Symmetry, vol. 13, no. 8, p. 1447, 2021.
[65] D. Hasler and S. E. Suesstrunk, "Measuring colorfulness in natural images," SPIE Proceedings, vol. 5007, pp. 87–95, 2003.