
Medical Image Fusion Using Lifting Wavelet and Fractional Bird Swarm Optimization

Jayant Bhardwaj and Abhijit Nayak

Abstract This paper concentrates on a prominent technology of medical imaging
data science called image fusion. A unique pixel-level image fusion method is
presented here. Multimodal MRI brain images are taken from the BRATS database,
and an effectively fused, comparatively more informative image, called the fused
image, is obtained. The two multimodal images are first decomposed by the Haar
wavelet to obtain high- and low-frequency coefficients. Bayesian fusion is then
performed on these information-rich coefficients. The proposed fusion rule is
governed by an optimization technique called fractional bird swarm optimization
(Fractional-BSA). It is observed that the proposed scheme, called Fractional-BSA-
Bayesian Fusion, outperforms contemporary image fusion methods such as wavelet
with HW fusion, SWT with NSCT, and NSCT alone. The better values of assessment
parameters such as mutual information, peak signal-to-noise ratio (PSNR), and root
mean square error (RMSE) prove the merit of the proposed method.

Keywords Image fusion · Bayesian · Wavelet transform

1 Introduction

Every medical modality has some specific feature of concern. No single
modality is sufficient to deliver the maximum of information and a sound decision
about an ailment [1]. From the representation point of view, image fusion methods
are categorized into pixel level, feature level, and decision level [2]. An important
pixel-level image fusion scheme is proposed here for brain magnetic resonance
imaging (MRI) images of different modalities. The

J. Bhardwaj (B)
USICT & BPIT, Delhi, India
e-mail: [email protected]
A. Nayak
Bhagwan Parshuram Institute of Technology (BPIT), Delhi, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
F. Thakkar et al. (eds.), Proceedings of the International e-Conference on Intelligent
Systems and Signal Processing, Advances in Intelligent Systems and Computing 1370,
https://doi.org/10.1007/978-981-16-2123-9_21

important modalities of magnetic resonance imaging (MRI) used here are Flair, T1,
T1C, and T2 [3]. Every modality has a specific direction of feature extraction
with unique diagnostic properties. For example, the T1-MRI modality expresses
anatomical structure-based details of various tissues, while the T2-MRI modality
provides details of pathological tissues only [4]. Over the last two decades, several
methods for medical image fusion (IF) have been proposed. These methods are
generally classified into transform-domain and spatial-domain methods [5]. Due to
their ease of implementation and preservation of detail information, multiscale
transform techniques have been quite popular in recent years. The gradient pyramid,
the Laplacian pyramid, and various wavelet transform methods are a few examples.
A few more transform techniques, such as the shearlet transform and the
non-sub-sampled contourlet transform (NSCT), are becoming more popular these
days [6, 7]. Advanced wavelet transforms such as the lifting wavelet transform
(LWT) and the stationary wavelet transform (SWT), in cascaded form, are also
greatly enhancing fusion techniques [2]. Anomalies around singularities, such as
pseudo-Gibbs noise, should be removed by a fusion method, and this problem is
mitigated to an extent by higher-generation wavelets such as the LWT. In this
paper, we propose a pixel-level fusion method employing the popular Bayesian
scheme of signal processing [8]. The Bayesian parameters are optimally controlled
by an improved optimization scheme called fractional bird swarm optimization
(Fractional-BSA).

2 Lifting Wavelet Transform

The lifting wavelet is a second-generation wavelet used here for image decomposition
[10, 13]. It is basically a discrete wavelet transform (DWT) that can be viewed
as a predictor-error decomposition. The "predictors" are the scaling coefficients at a
given scale (j) that are used for the analysis of data at the next higher resolution or
scale (j − 1). The wavelet coefficients thus generated are prediction errors [4]. The
LWT factors the wavelet transform into steps called lifting [9]. The LWT processes
an image by iterating three steps called lazy, predict, and update. In the lazy
wavelet transform step, the image x[n], as given in Eq. (1),

x[n] ∈ R, n ∈ Z (1)

is divided into its even and odd polyphase components

xe [n] and xo [n] (2)

respectively, where

xe [n] = x[2n] (3)



xo [n] = x[2n + 1] (4)

In the predict step of LWT, the neighboring even coefficients of Eq. (3) are used
to predict the odd polyphase coefficients, and the detail wavelet coefficients (high
pass) are generated as the error made in predicting the odd samples from the even
samples, as in Eq. (5).

d = xo − P (xe ) (5)

Now the odd components can be calculated as

xo = P (xe ) + d (6)

In the last step of LWT, the update step, the even set is updated using the wavelet
coefficients to compute the scaling function coefficients (low pass). An update
operator U is applied to the detail coefficients obtained in the predict step, as
given in Eq. (7).

S = xe + U(d) (7)

It reproduces xe as in Eq. (8).

xe = S − U(d) (8)

The LWT is a fast implementation of the wavelet transform because of the similar
nature of its low-pass and high-pass filtering operations [10]. Perfect reconstruction
of the original image is another attractive property [17, 18]. It also saves a lot of
auxiliary memory, has a simple inverse transform [18], and reduces computational
complexity by a factor of two. A separable wavelet transform is implemented on
images by first applying a 1-D wavelet transform along the columns and then along
the rows of an image. This provides a 1-level wavelet decomposition consisting of
four components labeled LL, LH, HL, and HH, respectively [6, 7, 9, 10].
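The lazy, predict, and update steps above can be sketched directly. The following is a minimal Python illustration (the paper's experiments were in MATLAB; the Haar predictor P(xe) = xe and update U(d) = d/2 are one common choice, and the function names are ours):

```python
import numpy as np

def haar_lift_1d(x):
    """One lifting level: lazy split, predict, update."""
    xe, xo = x[0::2].astype(float), x[1::2].astype(float)  # lazy: even/odd split
    d = xo - xe        # predict: detail = error predicting odd from even (Eq. 5)
    s = xe + d / 2     # update: scaling coefficients, here pairwise means (Eq. 7)
    return s, d

def haar_unlift_1d(s, d):
    """Inverse lifting: undo update, undo predict, merge (perfect reconstruction)."""
    xe = s - d / 2     # Eq. (8)
    xo = xe + d        # Eq. (6)
    x = np.empty(xe.size + xo.size)
    x[0::2], x[1::2] = xe, xo
    return x

def lwt2(img):
    """1-level separable decomposition: lift along columns, then along rows,
    yielding the LL, LH, HL, and HH sub-bands."""
    img = np.asarray(img, dtype=float)
    L, H = haar_lift_1d(img)        # slicing acts on axis 0 (columns direction)
    LLt, LHt = haar_lift_1d(L.T)    # lift along rows via transpose
    HLt, HHt = haar_lift_1d(H.T)
    return LLt.T, LHt.T, HLt.T, HHt.T
```

With this choice, `s` holds pairwise means (low pass) and `d` holds pairwise differences (high pass), and the inverse steps recover the input exactly, illustrating the perfect-reconstruction property noted above.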

3 Framework of Proposed Method

The Bayesian fusion approach uses the sub-images corresponding to the source
images I1(x, y) and I2(x, y) for acquiring the fused image Q, and the fusion is
performed at the pixel level so that the original details of the source images are
used for fusion (Fig. 1).

Fig. 1 Framework of Fractional-BSA-Bayesian Fusion

The fused image provides significant information for clinical diagnosis. The
individual pixels in the sub-images of I1(x, y) and I2(x, y) are fused, for which
the Bayesian parameter γ is employed; it is determined optimally using the
proposed algorithm. The low sub-band of I1(x, y) is fused with the low sub-band
of I2(x, y) using the Bayesian parameter γ, and at the same time, the high-level
bands belonging to I1(x, y) and I2(x, y) are fused as follows. The sub-bands
generating the fused image are represented in Eqs. (9) to (12).

Q_LL(x, y) = γ I1_LL(x, y) + (1 − γ) I2_LL(x, y) (9)

Q_LH(x, y) = γ I1_LH(x, y) + (1 − γ) I2_LH(x, y) (10)

Q_HL(x, y) = γ I1_HL(x, y) + (1 − γ) I2_HL(x, y) (11)

Q_HH(x, y) = γ I1_HH(x, y) + (1 − γ) I2_HH(x, y) (12)

where γ represents the Bayesian factor, and Q_LL(x, y), Q_LH(x, y), Q_HL(x, y),
and Q_HH(x, y) are the LL, LH, HL, and HH bands of the fused medical image [3].
The Bayesian factor is tuned optimally using the proposed Fractional-BSA in such
a way that its value renders effective fusion and a good-quality result [9].
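Equations (9) to (12) apply one convex combination band-wise. A brief Python sketch (the dict-of-bands representation and function name are illustrative, not from the paper):

```python
import numpy as np

def bayesian_subband_fusion(bands1, bands2, gamma):
    """Eqs. (9)-(12): fuse each sub-band pair with a single Bayesian
    factor gamma, Q = gamma * I1 + (1 - gamma) * I2."""
    if not 0.0 <= gamma <= 1.0:
        raise ValueError("gamma must lie in [0, 1]")
    return {name: gamma * bands1[name] + (1.0 - gamma) * bands2[name]
            for name in ("LL", "LH", "HL", "HH")}
```

Setting γ = 0.5 reduces to plain averaging of the two decompositions; the point of the method is that Fractional-BSA searches for the γ that maximizes the fusion quality instead.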

3.1 Optimizing the Bayesian Approach Using the Fractional-BSA

The ultimate aim of the proposed Fractional-BSA-based Bayesian fusion approach
is to determine the optimal value of the Bayesian parameter, which is computed
based on the characteristics of the input images. Initially, the Bayesian model is
developed using the pixels in the image, and using the model, the class value is
set, which is selected optimally using the optimization algorithm [1–8]. Let us
have a deeper insight into the Bayesian approach. Once the sub-images are fused
using the Bayesian approach, the Bayesian model is developed for all four
sub-images, for which all the pixels in the sub-images are interpreted. The
dimension of the training model is denoted as [K × (c × 8)]. The solution model
is represented as

M = {M1, M2, ..., Mk, ..., Mv} (13)

where v indicates the total number of solution models and M stands for the
Bayesian model consisting of v models, each with dimension [B × B]; K = B × B
is the total number of pixels in the image. Let Mk be the kth solution model,
which corresponds to a pixel with respect to a class. The data is established
pixel-wise for all four sub-images, and the training model is developed by finding
the mean and variance of each pixel of the sub-images with respect to the given c
classes [11]. Thus, the dimension of a solution vector, or the dimension of the
training model for an individual pixel, is [1 × (c × 8)]. Therefore, the role of
the optimization is to derive the optimal solution in such a way as to enable the
optimal fusion of the medical image. The Bayesian factor is determined using
Fractional-BSA and is referred to as the class label in the Bayesian model. The
proposed Fractional-BSA is a modification of the standard BSA with the fractional
concept; its advantage is that the computational overhead associated with the
standard BSA is overcome through the integration of the fractional concept, in
such a way that solutions of previous iterations are interpreted when updating
positions in the new iteration. The accuracy and stability of the proposed
algorithm are better than those of existing algorithms; in addition, convergence
time and accuracy are balanced against each other with higher performance.
Moreover, the proposed algorithm is capable of adjusting to different search
methodologies, and there is a good balance between the exploration and
exploitation phases. Additionally, the diversity of the method is better, which
avoids premature convergence of the solutions.

3.1.1 Solution Encoding

In this step, a suitable representation of the solution is determined for the
proposed Fractional-BSA. The solution vector, of dimension [1 × (c × 8)], holds
the mean and variance of each pixel in the image with respect to the class labels.
The class label for each pixel is derived using the Bayesian approach.

3.1.2 Bayesian Objective Function

The fitness of each solution is computed for deriving the optimal solution, and
the fitness is evaluated based on mutual information, which should be maximal for
the best solution. The objective function based on mutual information is given in
Eq. (14),

f = h (I1 ) + h (I2 ) − h (I1 , I2 ) (14)

where h(I1) and h(I2) are the individual entropies of the images I1 and I2,
respectively, and h(I1, I2) is their joint entropy, computed as in Eq. (15),

h(I1, I2) = − Σ_{x,y} ρ_{I1,I2}(x, y) log ρ_{I1,I2}(x, y) (15)

It is interesting to note that the joint entropy between the images is small when
the pixels in I1 map one-to-one to their counterparts in I2, and it grows as the
statistical relationship between the images weakens. A smaller joint entropy
therefore yields a larger mutual information, so the objective function selects
the optimal solution by maximizing the mutual information.
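The mutual-information fitness of Eqs. (14) and (15) can be estimated from a joint histogram. A hedged Python sketch (the 32-bin histogram estimator is our assumption; the paper does not specify one):

```python
import numpy as np

def mutual_information(img1, img2, bins=32):
    """Objective of Eq. (14): f = h(I1) + h(I2) - h(I1, I2),
    estimated from a joint intensity histogram."""
    joint, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=bins)
    p_joint = joint / joint.sum()
    p1 = p_joint.sum(axis=1)          # marginal distribution of I1
    p2 = p_joint.sum(axis=0)          # marginal distribution of I2

    def entropy(p):
        p = p[p > 0]                  # convention: 0 log 0 = 0
        return -np.sum(p * np.log2(p))

    # Eq. (15) supplies the joint-entropy term
    return entropy(p1) + entropy(p2) - entropy(p_joint.ravel())
```

A candidate Bayesian factor γ scores higher when the fused image shares more information with the sources, which is exactly the quantity Fractional-BSA is asked to maximize.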

3.1.3 Proposed Fractional-BSA Algorithm

The proposed Fractional-BSA is based on the social interactions and behaviors
of birds, which include the foraging, vigilance, and flight behaviors. The social
behaviors of the birds are deliberated below. The first condition concerns the
stochastic decision for switching between the vigilance and the foraging behavior
[8]. Another interesting phenomenon concerns the foraging behavior: during
foraging, each bird keeps a record of its previous best experience along with the
best experience of the swarm in order to search for food in the present [14].
This information is likewise carried to the entire swarm. As for vigilance, the
bird moves toward the center of the swarm, which can be affected by the
interference arising from competition within the swarm [7, 8]. The most
significant mechanism is that birds shift between sites, switching between
producing and scrounging: birds with the highest reserves act as producers, those
with the lowest as scroungers, and the rest choose randomly between producing and
scrounging [15]. The producers play a major role in searching for food, whereas
the scroungers follow the producers for their food. Foraging behavior: as per the
social behavior of birds, the best past experiences of the individual bird and of
the swarm are recalled in the foraging update [16]. Thus, the foraging behavior is
given as in Eq. (16),

A_{b,d}^{τ+1} = A_{b,d}^{τ} + (P_{b,d}^{τ} − A_{b,d}^{τ}) × p1 × rand(0, 1) + (G_d − A_{b,d}^{τ}) × p2 × rand(0, 1) (16)

where b indexes the birds in the population, b ∈ {1, ..., m}, and m stands for
the total number of birds. The dimensional space is denoted d ∈ {1, ..., S}, and
τ specifies the time step. The previous best position of the bth bird in the dth
dimension is denoted P_{b,d}^{τ}, and the previous best position of the bird
swarm is denoted G_d. The position of the bth bird in the dth dimension is
A_{b,d}^{τ}; rand(0, 1) denotes independent values uniformly distributed in
(0, 1), with p1 and p2 being positive numbers representing the cognitive and
social acceleration parameters. Whenever a uniform random number in (0, 1) lies
below the constant const(0, 1), the bird continues foraging for food; otherwise,
it switches to the vigilance behavior in the swarm [7]. The vigilance behavior of
the birds is modeled as Eq. (17),

A_{b,d}^{τ+1} = A_{b,d}^{τ} + J1 × (μ_d − A_{b,d}^{τ}) × rand(0, 1) + J2 × (P_{i,d} − A_{b,d}^{τ}) × rand(−1, 1) (17)

The factors J1 and J2 are further described in Eqs. (18) and (19):

J1 = r1 × exp(−(F_b^p / (ΣF + β)) × m) (18)

J2 = r2 × exp(((F_b^p − F_i^p) / (|F_i^p − F_b^p| + β)) × (m × F_i^p / (ΣF + β))), i ≠ b, 1 ≤ i ≤ m (19)

where F_b^p refers to the best fitness value of the bth bird, ΣF refers to the
sum of the swarm's best fitness values, β signifies a small constant, and μ_d is
the dth element of the mean position of the whole swarm. Let r1 and r2 be
positive constants between 0 and 2, and let m be the total number of birds in the
population. During vigilance, the bird moves toward the center in such a way that
the product J1 × rand(0, 1) does not exceed one, while the direct impact of
interference is measured by J2. Whenever the fitness of the randomly chosen ith
bird is better than that of the bth bird, J2 is said to be greater than r2, which
implies that the bird suffers from interference; more specifically, the bth bird
faces more interference than the ith bird. Equations (17) to (19) describe the
standard vigilance update of BSA, which is modified using fractional theory so as
to recall the behavior of the birds in previous iterations, making the
optimization more effective in deciding the Bayesian parameter. The vigilance
mechanism of the birds in Fractional-BSA is modeled as in Eqs. (20) and (21),
   
A_{b,d}^{τ+1} − A_{b,d}^{τ} = J1 × (μ_d − A_{b,d}^{τ}) × rand(0, 1) + J2 × (P_{i,d} − A_{b,d}^{τ}) × rand(−1, 1) (20)

∂^α A_{b,d}^{τ+1} = J1 × (μ_d − A_{b,d}^{τ}) × rand(0, 1) + J2 × (P_{i,d} − A_{b,d}^{τ}) × rand(−1, 1) (21)


where ∂^α A_{b,d}^{τ+1} is the fractional term, defined based on the fractional
theory in [13]. The inclusion of the fractional concept adds the historical
solutions to the update equation so that the position update occurs effectively,
with better convergence time.

This section displays the results of the proposed Fractional-BSA-based Bayesian
fusion approach; the comparative analysis reveals the effectiveness of the
proposed image fusion, as shown in Figs. 2 and 3 and Tables 1 and 2.
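Under these definitions, one iteration of the position updates can be sketched as follows (a rough Python illustration; the values of p1 and p2, the order α, and the three-term truncation of the fractional series are our assumptions, following a Grünwald–Letnikov-style expansion of the fractional term):

```python
import numpy as np

rng = np.random.default_rng(0)

def foraging_step(A, P, G, p1=1.5, p2=1.5):
    """Eq. (16): move each bird toward its personal best P and the swarm
    best G. A is an (m x S) matrix of positions; p1 and p2 are illustrative."""
    m, S = A.shape
    return (A + (P - A) * p1 * rng.random((m, S))
              + (G - A) * p2 * rng.random((m, S)))

def fractional_vigilance_step(A_hist, mu, P_i, J1, J2, alpha=0.5):
    """Eq. (21): solving the fractional difference for the new position
    adds weighted past positions to the vigilance right-hand side.
    A_hist is a list of past position matrices, newest last."""
    A = A_hist[-1]
    rhs = (J1 * (mu - A) * rng.random(A.shape)
           + J2 * (P_i - A) * rng.uniform(-1.0, 1.0, A.shape))
    # leading coefficients of the fractional series: alpha, alpha(1-alpha)/2, ...
    coeffs = [alpha,
              alpha * (1 - alpha) / 2,
              alpha * (1 - alpha) * (2 - alpha) / 6]
    frac = np.zeros_like(rhs)
    for c, past in zip(coeffs, reversed(A_hist)):
        frac += c * past   # historical positions enter the update
    return frac + rhs
```

With α = 1 and no older history, the fractional update collapses to the standard vigilance update of Eq. (17), which is the sense in which the fractional version generalizes BSA.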
The experimentation is performed in MATLAB with the BRATS-2015 database [10–12].
The multimodal scans provide four modalities, T1, T1C,

Fig. 2 Data set 1 multimodal images a Flair b T1 c T2 d T1C

Fig. 3 Data set 2 multimodal images a Flair b T1 c T2 d T1C



Table 1 Comparative analysis using dataset-1

Metrics             | Wavelet + HW Fusion | SWT + NSCT | NSCT    | Proposed Fractional-BSA-Bayesian Fusion
Mutual information  | 1.4673              | 1.4669     | 1.4299  | 1.5765
PSNR (in dB)        | 37.8289             | 37.8848    | 36.8904 | 44.0957
RMSE                | 6.5101              | 6.5281     | 6.5409  | 5.4940

Table 2 Comparative analysis using dataset-2

Metrics             | Wavelet + HW Fusion | SWT + NSCT | NSCT    | Proposed Fractional-BSA-Bayesian Fusion
Mutual information  | 1.4612              | 1.4550     | 1.4508  | 1.4960
PSNR (in dB)        | 32.991              | 33.470     | 32.498  | 35.541
RMSE                | 10.0053             | 10.0510    | 9.9689  | 9.6404

Flair, and T2, and these are determined using various clinical protocols and
scanners. BRATS-2015 [10–12] is a dataset with about 300 low- and high-grade
glioma tumors with cystic and Gd-enhanced core tumor regions [11].
Now consider Fig. 4a, b, c for the data set-1 image fusion; Fig. 5a, b, c
describes the performance for the BRATS data set-2 images. Figure 6a, b, c
compares the performance of the proposed methodology with the other
methodologies, namely wavelet plus Helo-whale fusion, SWT plus NSCT, and NSCT
alone, for the data set-1 images. Similarly, Fig. 7a, b, c covers the data set-2
images.
Fig. 4 Performance analysis using the dataset-1, a Mutual Information, b PSNR, c RMSE

Fig. 5 Performance analysis using the dataset-2, a Mutual Information, b PSNR, c RMSE

Fig. 6 Comparative analysis using the dataset-1, a MI, b PSNR, c RMSE

Fig. 7 Comparative analysis using the dataset-2, a Mutual Information, b PSNR, c RMSE

References

1. J. Bhardwaj, A. Nayak, A discrete wavelet transform and bird swarm optimized Bayesian
multimodal medical image fusion. Helix 10(1), 7–12 (2020)
2. J. Bhardwaj, A. Nayak, Haar wavelet transform-based optimal Bayesian method for medical
image fusion. Med. Biol. Eng. Comput. 58, 2397–2411 (2020)
3. D. Ebenezer, J. Anitha, K.K. Kamaleshwaran, I. Rani, Optimum spectrum mask based
medical image fusion using Gray Wolf Optimization. Biomed. Signal Process. Control 34,
36–43 (2017)
4. S. Majumdar, J. Bharadwaj, Feature level fusion of multimodal images using Haar lifting
wavelet transform. World Acad. Sci. Eng. Technol. Int. J. Comput. Inform. Eng. 8(6),
1023–1027 (2014)
5. S.P. Yadav, S. Yadav, Fusion of medical images in wavelet domain: a hybrid implementation.
Comput. Model. Eng. Sci. 122(1), 303–321 (2020)
6. J. Bhardwaj, A. Nayak, Cascaded lifting wavelet and contourlet framework based dual stage
fusion scheme for multimodal medical images. J. Electr. Electron. Syst. (2018)
7. J. Bhardwaj, A. Nayak, D. Gambhir, Multimodal medical image fusion based on discrete wavelet
transform and genetic algorithm, in International Conference on Innovative Computing and
Communications, vol. 1165 (AISC, Springer, 2021), pp. 1047–1057
8. P. Chai, X.L.Z. Zhang, Image fusion using quaternion wavelet transform and multiple features.
IEEE Trans. Image Process. 17(4), 500–511 (2017)
9. J. Bhardwaj, A. Nayak, Lifting wavelet transform based ultrasound image fusion scheme, in
Imaging and Applied Optics 2018, OSA Technical Digest (2018)
10. J. Bhardwaj, A. Nayak, K. Singh, Feature level fusion of gray dentistry images using Haar
lifting wavelet transform, in Imaging and Applied Optics 2017 (Optical Society of America,
2017)
11. X.B. Meng, X.Z. Gao, L. Lu, Y. Liu, H. Zhang, A new bio-inspired optimisation algorithm:
Bird Swarm Algorithm. J. Exp. Theor. Artif. Intell. (2016)
12. Multimodal Brain Images. https://www.med.upenn.edu/sbia/brats2018/data.html. Accessed
Sept 2019
13. B.H. Menze, A. Jakab, S. Bauer, J. Kalpathy-Cramer, K. Farahani, J. Kirby, The multimodal
brain tumor image segmentation benchmark (BRATS). IEEE Trans. Med. Imaging 34(10),
1993–2024 (2015)
14. F. Nian, W. Li, X. Sun, M. Li, An improved particle swarm optimization application to
independent component analysis, in ICIECS 2009 (2009), pp. 1–4
15. J.J. Liang, P.N. Suganthan, Dynamic multi-swarm particle swarm optimizer, in IEEE Swarm
Intelligence Symposium (Pasadena, CA, USA, 2005), pp. 124–129
16. J.J. Liang, A.K. Qin, P.N. Suganthan, S. Baskar, Comprehensive learning particle swarm opti-
mizer for global optimization of multimodal functions. IEEE Trans. Evol. Comput. 10(3),
281–295 (2006)
