1 Introduction
The important modalities of magnetic resonance imaging (MRI) used here are Flair, T1, T1C, and T2 [3]. Each modality emphasizes a specific direction of feature extraction with a unique diagnostic property. For example, the T1-MRI modality expresses anatomical, structure-based details of various tissues, while the T2-MRI modality provides details of pathological tissues only [4]. Over the last two decades, several methods for medical image fusion (IF) have been proposed. These methods are generally classified into transform-domain and spatial-domain approaches [5]. Owing to their ease of implementation and preservation of detail information, multiscale transform techniques have become quite popular in recent years. The gradient pyramid, the Laplacian pyramid, and various wavelet transform methods are a few examples. Further transform techniques such as the shearlet transform and the non-sub-sampled contourlet transform (NSCT) are gaining popularity [6, 7]. Advanced wavelet transforms such as the lifting wavelet transform (LWT) and the stationary wavelet transform (SWT), in cascaded form, are also greatly enhancing fusion techniques [2]. A fusion method should remove anomalies around singularities, such as pseudo-Gibbs noise, and this problem is alleviated to an extent by higher-generation wavelets like the LWT. In this paper, we propose a pixel-level fusion method employing the well-known Bayesian scheme of signal processing [8]. The Bayesian parameters are optimally controlled by an improved optimization scheme called fractional bird swarm algorithm (Fractional-BSA).
In the predict step of the LWT, the neighboring even coefficients, as in Eq. (2), are used to predict the odd polyphase coefficients, and the detail (high-pass) wavelet coefficients are generated as the error in predicting the odd samples from the even samples, as in Eqs. (5) and (6).
d = x_o − P(x_e) (5)
x_o = P(x_e) + d (6)
In the last, update step of the LWT, the even set is updated using the detail wavelet coefficients to compute the scaling-function coefficients (low pass): an update operator U is applied to the detail coefficients obtained in the predict step, as in Eq. (7). The inverse relation is given in Eq. (8).
s = x_e + U(d) (7)
x_e = s − U(d) (8)
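The predict/update recursion above is easy to mirror in code. Below is a minimal sketch of a single lifting step in Python, assuming a Haar-like predictor P(x_e) = x_e and updater U(d) = d/2; practical LWT schemes use longer predict/update filters, so the filter choice and function names here are illustrative only.

```python
import numpy as np

def lwt_step(x):
    """One lifting step: split, predict, update (Eqs. 5 and 7)."""
    xe, xo = x[0::2], x[1::2]          # split into even/odd polyphase sets
    d = xo - xe                        # predict: d = xo - P(xe), with P(xe) = xe
    s = xe + d / 2                     # update: s = xe + U(d), with U(d) = d/2
    return s, d

def ilwt_step(s, d):
    """Inverse lifting: undo update, undo predict (Eqs. 6 and 8)."""
    xe = s - d / 2                     # xe = s - U(d)
    xo = xe + d                        # xo = P(xe) + d
    x = np.empty(xe.size + xo.size)
    x[0::2], x[1::2] = xe, xo          # merge polyphase sets back together
    return x

x = np.array([4.0, 6.0, 5.0, 9.0, 2.0, 4.0])
s, d = lwt_step(x)
assert np.allclose(ilwt_step(s, d), x)  # lifting is perfectly invertible
```

The invertibility checked by the assertion is the practical reason lifting avoids the reconstruction artifacts mentioned earlier: synthesis simply reverses each lifting step.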
The Bayesian fusion approach uses the sub-images corresponding to the source images. I1(x, y) and I2(x, y) are employed for acquiring the fused image Q, and the fusion is performed at the pixel level such that the original details of the source images are used for fusion (Fig. 1).
The fused image provides significant information for clinical diagnosis. The individual pixels in the sub-images of I1(x, y) and I2(x, y) are fused band-wise, where γ represents the Bayesian factor and Q_LL(x, y), Q_LH(x, y), Q_HL(x, y), and Q_HH(x, y) are the LL, LH, HL, and HH bands of the fused medical image [3]. The Bayesian factor is tuned optimally using the proposed Fractional-BSA in such a way that the value of the factor renders effective fusion and good quality [9].
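As a concrete illustration of how γ could steer the band-wise fusion, the sketch below combines corresponding sub-bands of the two sources with a convex weighting. This is not the paper's exact fusion rule: the scalar convex combination, the use of PyWavelets' dwt2 in place of the cascaded LWT, and the function name are all assumptions.

```python
import pywt  # PyWavelets, standing in for a custom lifting transform

def bayesian_band_fusion(I1, I2, gamma):
    """Fuse the LL/LH/HL/HH sub-bands of two registered source images.

    gamma is the Bayesian factor (here assumed to be a scalar in [0, 1],
    tuned by Fractional-BSA in the paper); the convex-combination rule
    below is an illustrative assumption, not the paper's formula.
    """
    c1 = pywt.dwt2(I1, 'haar')   # returns (LL, (LH, HL, HH))
    c2 = pywt.dwt2(I2, 'haar')
    LL = gamma * c1[0] + (1 - gamma) * c2[0]
    details = tuple(gamma * b1 + (1 - gamma) * b2
                    for b1, b2 in zip(c1[1], c2[1]))
    return pywt.idwt2((LL, details), 'haar')  # fused image Q
```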
In the Bayesian model, v indicates the total number of solution models and M stands for the Bayesian model consisting of v models, where v has the dimension [B × B]; [B × B] refers to the total number of pixels in the image, given as K = B × B. Let M_k be the kth solution model, which corresponds to a pixel with respect to a class. The data is established pixel-wise for all four sub-images, and the training model is developed by finding the mean and variance of each pixel of the sub-images with respect to the given c number of classes [11]. Thus, the dimension of a solution vector, i.e., of the training model for an individual pixel, is given as [1 × (c × 8)]. Therefore, the role of the optimization is to derive the
optimal solution in such a way as to enable the optimal fusion of the medical image. The Bayesian factor, referred to as the class label in the Bayesian model, is determined using Fractional-BSA. The proposed Fractional-BSA modifies the standard BSA with the fractional concept; its advantage is that the computational overhead associated with the standard BSA is overcome by integrating the fractional concept in such a way that the solutions of previous iterations are interpreted when updating the position in the new iteration. The accuracy and stability of the proposed algorithm are better than those of existing algorithms, and convergence time and accuracy are balanced against each other at a higher level of performance. Moreover, the proposed algorithm adapts to different search methodologies, with a good balance between the exploration and exploitation phases, maintains better diversity, and avoids premature convergence of the solutions.
In this step, a better representation of the solution is determined using the proposed Fractional-BSA. The solution vector of dimension [1 × (c × 8)] holds the mean and variance of each pixel in the image with respect to the class labels; the class label for each pixel is derived using the Bayesian approach.
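A hedged sketch of how such a solution vector might be assembled is given below. The layout of the 8 entries per class (mean and variance over each of the four sub-bands) is inferred from the stated dimension [1 × (c × 8)] and is an assumption, as are the helper name and the use of boolean class masks.

```python
import numpy as np

def solution_vector(subbands, class_masks):
    """Build a [1 x (c*8)] training model of means and variances.

    subbands: list of 4 arrays (LL, LH, HL, HH) of equal shape.
    class_masks: list of c boolean masks, one per class label.
    The 8-per-class layout (mean, variance over each of the 4 sub-bands)
    is an assumption based on the stated dimension [1 x (c x 8)].
    """
    vec = []
    for mask in class_masks:                 # c classes
        for band in subbands:                # 4 sub-bands
            pix = band[mask]
            vec += [pix.mean(), pix.var()]   # 2 statistics -> 8 per class
    return np.asarray(vec)[None, :]          # shape (1, c*8)
```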
The fitness of a solution is computed for deriving the optimal solution, and the fitness is evaluated based on mutual information, which should be maximal for the best solution. The objective function based on mutual information is determined as in Eq. (14),

MI(I1, I2) = h(I1) + h(I2) − h(I1, I2) (14)

where h(I1) and h(I2) are the individual entropies of the images I1 and I2, respectively, and h(I1, I2) specifies the joint entropy of the images. The joint entropy is computed as in Eq. (15),
h(I1, I2) = − Σ_{x,y} ρ_{I1,I2}(x, y) log ρ_{I1,I2}(x, y) (15)
It is interesting to note that the joint entropy between the images is low when the pixels in I1 map one-to-one to their counterparts in I2, and it increases as the statistical relationship between the images weakens. Therefore, the objective function aims at maximal mutual information for the selection of the optimal solution.
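The fitness of Eqs. (14) and (15) can be computed directly from a joint histogram of the two images; a minimal sketch follows, with the bin count as a free parameter.

```python
import numpy as np

def mutual_information(I1, I2, bins=64):
    """Fitness of Eq. (14): MI(I1, I2) = h(I1) + h(I2) - h(I1, I2)."""
    joint, _, _ = np.histogram2d(I1.ravel(), I2.ravel(), bins=bins)
    p12 = joint / joint.sum()                     # joint pmf rho_{I1,I2}
    p1, p2 = p12.sum(axis=1), p12.sum(axis=0)     # marginal pmfs
    nz = p12 > 0                                  # avoid log(0)
    h12 = -np.sum(p12[nz] * np.log(p12[nz]))      # joint entropy, Eq. (15)
    h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
    h2 = -np.sum(p2[p2 > 0] * np.log(p2[p2 > 0]))
    return h1 + h2 - h12                          # maximal for the best solution
```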
The most significant mechanism is that birds shift between producing and scrounging: the birds with the highest food reserves act as producers, the birds with the lowest reserves act as scroungers, and the rest choose randomly between producing and scrounging [15]. The producers play the major role in searching for food, whereas the scroungers follow the producers for their food. The foraging behavior of the birds is modeled as follows. Foraging behavior: As per the social behavior of birds, the best past experience of the individual bird and of the swarm is recalled when updating the foraging behavior [16]. Thus, the foraging behavior is given as in Eq. (16),
A_{b,d}^{τ+1} = A_{b,d}^τ + (P_{b,d} − A_{b,d}^τ) × p1 × rand(0, 1) + (G_d − A_{b,d}^τ) × p2 × rand(0, 1) (16)
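A direct transcription of Eq. (16) is straightforward; the vectorized sketch below assumes positions are stored row-wise per bird, with p1 and p2 as the two acceleration coefficients.

```python
import numpy as np

def foraging_update(A, P, G, p1, p2, rng):
    """BSA foraging step, Eq. (16), vectorized over the whole swarm.

    A: current positions (m birds x D dims), P: per-bird best positions,
    G: swarm best position (broadcast over rows); p1, p2 weight the
    personal-best and swarm-best attraction terms respectively.
    """
    r1 = rng.random(A.shape)                      # rand(0, 1), elementwise
    r2 = rng.random(A.shape)
    return A + (P - A) * p1 * r1 + (G - A) * p2 * r2

rng = np.random.default_rng(0)
A = rng.random((5, 3)); P = rng.random((5, 3)); G = rng.random(3)
A_next = foraging_update(A, P, G, p1=1.5, p2=1.5, rng=rng)
```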
J1 and J2 are further described as per Eqs. (18) and (19):

J1 = r1 × exp(−(F_b^p / (F + β)) × m) (18)

J2 = r2 × exp(((F_b^p − F_i^p) / (|F_i^p − F_b^p| + β)) × (m × F_i^p) / (F + β)) (19)
where F_b^p refers to the best fitness value of the bth bird, i (i ≠ b, 1 ≤ i ≤ m) is a randomly chosen bird index, F refers to the sum of the swarm's best fitness measures, β signifies the smallest constant (used to avoid zero-division), and μ_d is the dth element of the average position of the whole swarm; r1 and r2 are positive constants varying between 0 and 2, and m specifies the total number of birds in the population. During vigilance, a bird moves toward the center of the swarm in such a way that the product of J1 and rand(0, 1) is not more than one, while the direct impact of interference is measured with J2. Whenever the fitness of the random ith bird is better than that of the bth bird, J2 becomes greater than r2, which implies that the bird suffers from interference; more specifically, the bth bird faces more interference than the ith bird. Equation (20) below gives the standard vigilance update of the BSA, which is modified using fractional theory so as to recall the behavior of the birds in previous iterations and make the optimization effective in deciding the Bayesian parameter. The vigilance mechanism of the birds in Fractional-BSA is thus modeled as in Eqs. (20) and (21),
A_{b,d}^{τ+1} − A_{b,d}^τ = J1 × (μ_d − A_{b,d}^τ) × rand(0, 1) + J2 × (P_{i,d} − A_{b,d}^τ) × rand(−1, 1) (20)

∂^α A_{b,d}^{τ+1} = J1 × (μ_d − A_{b,d}^τ) × rand(0, 1) + J2 × (P_{i,d} − A_{b,d}^τ) × rand(−1, 1) (21)
where ∂^α A_{b,d}^{τ+1} is the fractional term, defined based on the fractional theory in [13]. The inclusion of the fractional concept adds the historical solutions to the update equation so that the position update occurs effectively, with better convergence time.
gence time. This section displays the results of the proposed Fractional-BSA-based
Bayesian Fusion approach and the comparative analysis reveals the effectiveness of
the proposed image fusion as shown in Figs. 2 and 3 with Tables 1 and 2.
The experimentation is performed in MATLAB on the BRATS 2015 database [10–12]. The multimodal scans provide four modalities, T1, T1C, Flair, and T2, acquired using various clinical protocols and scanners. BRATS 2015 [10–12] contains about 300 low- and high-grade glioma tumors with cystic and Gd-enhanced core tumor regions [11].
Figure 4a–c shows the image fusion for data set 1, and Fig. 5a–c describes the performance for the BRATS data set 2 images. Figure 6a–c compares the performance of the proposed methodology with other methodologies, namely wavelet plus Helo-whale fusion, SWT plus NSCT, and NSCT alone, for data set 1 images; Fig. 7a–c does the same for data set 2 images.
References
1. J. Bhardwaj, A. Nayak, A discrete wavelet transform and bird swarm optimized Bayesian
multimodal medical image fusion. Helix 10(1), 07–12 (2020)
2. J. Bhardwaj, A. Nayak, Haar wavelet transform–based optimal Bayesian method for medical image fusion. Med. Biol. Eng. Comput. 58, 2397–2411 (2020)
3. D. Ebenezer, J. Anithaa, K.K. Kamaleshwaranb, I. Rani, Optimum spectrum mask based
medical image fusion using Gray Wolf Optimization. Biomed. Signal Process. Control 34,
36–43 (2017)
4. S. Majumdar, J. Bharadwaj, Feature level fusion of multimodal images using Haar lifting
wavelet transform. World Acad. Sci. Eng. Technol. Int. J. Comput. Inform. Eng. 8(6), 1023–
1027 (2014)
5. S.P. Yadav, S. Yadav, Fusion of medical images in wavelet domain: a hybrid implementation.
Comput. Model. Eng. Sci. 122(1), 303–321 (2020)
6. J. Bhardwaj, A. Nayak, Cascaded lifting wavelet and contourlet framework based dual stage
fusion scheme for multimodal medical images. J. Electr. Electron. Syst. (2018)
7. J. Bhardwaj, A. Nayak, D. Gambhir, Multimodal medical image fusion based on discrete wavelet
transform and genetic algorithm, in International Conference on Innovative Computing and
Communications, vol. 1165. (AISC, Springer, 2021), pp. 1047–1057
8. P. Chai, X.L.Z. Zhang, Image fusion using quaternion wavelet transform and multiple features.
IEEE Trans. Image Process. 17(4), 500–511 (2017)
9. J. Bhardwaj, A. Nayak, Lifting wavelet transform based ultrasound image fusion scheme, in
Imaging and Applied Optics 2018, OSA Technical Digest (2018)
10. J. Bhardwaj, A. Nayak, K. Singh, Feature level fusion of gray dentistry images using Haar lifting wavelet transform, in Imaging and Applied Optics 2017 (Optical Society of America, 2017)
11. X.B. Meng, X.Z. Gao, L. Lu, Y. Liu, H. Zhang, A new bio-inspired optimisation algorithm: Bird Swarm Algorithm. J. Exp. Theor. Artif. Intell. (2016)
12. Multimodal Brain Images. https://www.med.upenn.edu/sbia/brats2018/data.html. Accessed Sept 2019
13. B.H. Menze, A. Jakab, S. Bauer, J. Kalpathy-Cramer, K. Farahani, J. Kirby, The multimodal
brain tumor image segmentation benchmark (BRATS). IEEE Trans. Med. Imaging 34(10),
1993–2024 (2015)
14. F. Nian, W. Li, X. Sun, M. Li, An improved particle swarm optimization application to independent component analysis, in ICIECS 2009 (2009), pp. 1–4
15. J.J. Liang, P.N. Suganthan, Dynamic multi-swarm particle swarm optimizer, in IEEE Swarm
Intelligence Symposium (Pasadena, CA, USA, 2005), pp. 124–129
16. J.J. Liang, A.K. Qin, P.N. Suganthan, S. Baskar, Comprehensive learning particle swarm opti-
mizer for global optimization of multimodal functions. IEEE Trans. Evol. Comput. 10(3),
281–295 (2006)