
Journal of Ambient Intelligence and Humanized Computing (2021) 12:2483–2493

https://doi.org/10.1007/s12652-020-02386-0

ORIGINAL RESEARCH

Multi-modality medical image fusion technique using multi-objective differential evolution based deep neural networks

Manjit Kaur 1,3 · Dilbag Singh 2,3

Received: 28 August 2019 / Accepted: 22 July 2020 / Published online: 8 August 2020
© Springer-Verlag GmbH Germany, part of Springer Nature 2020

Abstract
The advancements in automated diagnostic tools allow researchers to obtain more and more information from medical images. Recently, multi-modality images have been used to obtain more informative medical images; such images carry significantly more information than traditional single-modality images. However, the construction of multi-modality images is not an easy task. Therefore, a multi-objective differential evolution and deep neural network based fusion approach is proposed. The proposed approach initially decomposes the source images into sub-bands using the non-subsampled contourlet transform (NSCT). Thereafter, an extreme version of the Inception model (Xception) is used for feature extraction from the source images, and multi-objective differential evolution is used to select the optimal features. The coefficient of determination and the energy loss based fusion functions are then used to obtain the fused coefficients. Finally, the fused image is computed by applying the inverse NSCT. Extensive experimental results show that the proposed approach outperforms the competitive multi-modality image fusion approaches.

Keywords Fusion · Diagnosis · CNN · Multi-modality · Differential evolution

* Corresponding author: Dilbag Singh ([email protected])

1 Department of Computer and Communication Engineering, Manipal University Jaipur, Jaipur, India
2 Department of Computer Science and Engineering, Manipal University Jaipur, Jaipur, India
3 Computer Science Engineering, School of Engineering and Applied Sciences, Bennett University, Greater Noida 201310, India

1 Introduction

Biomedical images are extensively utilized for the automated diagnosis of various diseases such as COVID-19, pneumonia, tuberculosis, and cancer. Biomedical imaging systems play an efficient role in monitoring and diagnosing the internal body organs without any kind of surgery. Biomedical images come in various modalities that describe the internal body organs (Du et al. 2016), such as positron emission tomography (PET), magnetic resonance imaging (MRI), computerized tomography (CT), X-ray, and ultrasound (James and Dasarathy 2014). Every modality has its own significance in the diagnosis process, but a single modality is limited to certain diseases or issues. Therefore, multi-modality biomedical images are desirable. A multi-modality image can be obtained by using an efficient fusion approach (Daniel et al. 2017). Such images carry more information than classical images and are more helpful for diagnosing various kinds of diseases (Du et al. 2016).

Recently, many researchers have designed and implemented fusion approaches to obtain efficient multi-modality biomedical images (Ravi and Krishnan 2018; Hu et al. 2020). However, many of them have reused existing image fusion approaches, so the obtained multi-modality fused images may suffer from issues such as gradient and texture distortion, especially in the infected region. To overcome the issues associated with the existing approaches, researchers have utilized deep learning and dictionary learning based techniques, including dictionary learning (Hu et al. 2020), local-features fuzzy sets (Ullah et al. 2020), deep learning (Algarni 2020; Xia et al. 2019; Zhou et al. 2019), and deep learning with NSST (Wang et al. 2019), which have been found to be effective tools for obtaining efficient multi-modality fused images.

The design and development of an efficient multi-modality biomedical image fusion approach is still an open area for research. Deep learning is found to be one of the best fusion approaches to obtain promising results.


Additionally, a deep transfer learning-based multi-modality biomedical fusion approach can provide better results.

The main contributions of this paper are summarized as follows:

• A multi-objective differential evolution and Xception model based multi-modality biomedical fusion model is proposed.
• The proposed approach initially decomposes the source images into sub-bands using the non-subsampled contourlet transform (NSCT).
• An extreme version of the Inception model (Xception) is then used for feature extraction from the source images.
• Multi-objective differential evolution is used to select the optimal features.
• To obtain the fused coefficients, coefficient of determination and energy loss based fusion functions are used.
• Finally, the fused image is computed by applying the inverse NSCT.
• The proposed and the competitive approaches are compared on a benchmark multi-modality image fusion dataset.

The remainder of this paper is organized as follows: existing literature is presented in Sect. 2. The proposed medical image fusion approach is illustrated in Sect. 3. Experimental results and comparative analyses are discussed in Sect. 4. Section 5 concludes the work.

2 Literature review

Shu et al. (Zhu et al. 2019) implemented a local Laplacian energy and phase congruency based fusion approach in the NSCT domain (LEPN). Local Laplacian energy uses the weighted local energy and the sum of Laplacian coefficients to obtain the regulated details and features of the input images. Zhu et al. (2020) designed a diffusion-based approach using synchronized-anisotropic operators (DSA). A maximum absolute value constraint was also utilized for base layer fusion, and the fusion decision map was computed from the sum of the modified anisotropic Laplacian applied to the similar corrosion sub-bands obtained from the anisotropic diffusion. Kumar et al. (2020) proposed co-learning based fusion maps to obtain more efficient multi-modality fused biomedical images; a convolutional neural network (CNN) was also used for the prediction and segmentation of potential objects.

Lu et al. (2014) designed an edge-guided dual-modality (EGDM) approach to obtain multi-modality images. It performs significantly better even on highly under-sampled data. Lifeng et al. (2001) utilized wavelet and quality analysis to obtain biomedical multi-modality fused images; a pyramid wavelet was used for the fusion process. Ma et al. (2020) designed a dual-discriminator conditional generative adversarial network (DDcGAN) to obtain multi-modality fused images. It obtains a real-like fused image by using a content loss to dupe both discriminators, which are intended to differentiate the composition variations between the fused image and the respective source images.

Wang et al. (2019) developed 3D auto-context-based locality adaptive multi-modality GANs (3GANs) to obtain more efficient multi-modality fused images. A non-unified kernel was also used along with the adaptive approach for multi-modality fusion. Gai et al. (2019) utilized a pulse coupled neural network (PCNN) with edge preservation and an enhanced sparse representation in the non-subsampled shearlet transform (NSST) domain. It fully utilizes the features of the various modalities, handles edge details well, and improves the results.

Liu et al. (2020) utilized a VGG16 based deep transfer learning approach for image fusion to improve the classification process. VGG16 can obtain more efficient features than traditional deep learning approaches, and the obtained features were then used to fuse the multi-modality biomedical images. Tavard et al. (2014) designed a multi-modal registration and fusion approach to improve cardiac re-synchronization; the approach helps in improving therapy optimization.

Zhu et al. (2016) designed a novel dictionary learning-based image fusion approach for multi-modality biomedical images. Due to the use of dictionary learning, this approach achieves higher accuracy, but it is computationally complex in nature. Liu et al. (2018) proposed a biomedical image decomposition approach using NSST to fuse multi-modality images. This approach has shown significant results over the existing approaches but suffers from an edge degradation issue. Wang et al. (2020) utilized a CNN and a contrast pyramid to fuse biomedical multi-modality images. A CNN can fuse the images efficiently; however, it is computationally expensive and sometimes fails to provide promising results when the biomedical images are very similar to each other.

From the literature review, it has been observed that multi-modality image fusion is still an open area for research. Deep learning is found to be the most promising technique to obtain better multi-modality fused biomedical images, but these approaches provide better results only if pre-trained deep transfer learning models are used. Additionally, the initial parameter selection of deep learning and deep transfer learning approaches is also a challenging issue (Pannu et al. 2019; Kaur and Singh 2019; Pannu et al. 2018). Therefore, in this paper, a well-known multi-objective differential evolution is used to enhance the results.


3 Proposed multi-objective differential evolution based deep transfer learning model for multi-modality image fusion

Assume that I1 and I2 denote the source images. Initially, NSCT is used to decompose I1 and I2 into sub-bands. Our primary objective is to fuse the respective sub-bands of both source images. The fusion of the high sub-bands is achieved by using the extreme version of the Inception (Xception) model, and the coefficient of determination is utilized to evaluate the significance of the computed fused high sub-bands. Low sub-band fusion is achieved by using a local energy function. The inverse NSCT is utilized to compute the multi-modality fused image. Figure 1 shows the step-by-step methodology of the proposed model.

3.1 Nonsubsampled contourlet transform

The Nonsubsampled Contourlet Transform (NSCT) is a well-known transform used to decompose images into the wavelet domain. It is a shift-invariant transform which can provide rich directional details. This directionality makes it possible to convert the transformed images back to the original with minimum root mean square error (for more details please see Da Cunha et al. 2006).

3.2 Feature extraction using deep Xception model

A CNN may suffer from the under-fitting issue, as many potential features may not be extracted. To overcome this issue, an extreme version of the Inception model (Xception) is used. Figure 2 represents the block diagram of the Xception model (for mathematical and other information please see Chollet 2017).

Both high sub-bands of the source images are fed in parallel to the Xception model. Let η_I1(p, q) and η_I2(p, q) be the features obtained from the respective high sub-bands by using the Xception model.

3.3 Feature selection using multi-objective differential evolution

In this step, the optimal features are selected from the features obtained from the Xception model. The fusion factor and entropy metrics are used as the fitness functions to select the optimal features. A multi-objective differential evolution can solve many computationally complex problems (Babu et al. 2005) and can effectively balance fast convergence and population diversity. It can be described in the following steps:

I. Initialization: First of all, various parameters related to differential evolution are defined, such as the population size (t_p), crossover rate (c_r), and mutation rate (m_r). A random distribution is used to generate the initial solutions β_α^0 (α = 1, 2, ..., t_p). h denotes the number of function evaluations and is used to control the iterative process of differential evolution up to the maximum number of function evaluations (h_M).

II. Iterative step: Mutation and crossover operators are used to obtain the optimal number of features. Mutation is applied to β_α^h to evaluate a child vector Π_α^h. In this paper, the following mutation is used:

\Pi_\alpha^h = \beta_{d_1}^h + m_r \cdot \big(\beta_{d_2}^h - \beta_{d_3}^h\big) \qquad (1)

Here, α denotes the index value, d_i ≠ α ∀ i = 1:3, and d_1, d_2, and d_3 are random integers selected from [1, t_p].

Crossover is used to obtain the new solutions. A child ε_α^h can be obtained from every β_α^h as:

\epsilon_{\alpha,\kappa}^h = \begin{cases} \Pi_{\alpha,\kappa}^h, & \beta_\kappa \le c_r \ \text{or} \ \kappa = \kappa_{dn} \\ \beta_{\alpha,\kappa}^h, & \text{otherwise} \end{cases} \qquad \kappa = 1, 2, \ldots, D \qquad (2)

where D denotes the dimension of the problem, β_κ ∈ [0, 1], and κ_dn ∈ [1, D].

III. Selection: The child vector ε_α^h is selected against its parent vector β_α^h as:

\beta_\alpha^{h+1} = \begin{cases} \epsilon_\alpha^h, & f(\epsilon_\alpha^h) \le f(\beta_\alpha^h) \\ \beta_\alpha^h, & \text{otherwise} \end{cases} \qquad (3)

IV. Stopping condition: If the number of function evaluations is less than the total available evaluations, Steps II and III are repeated.

3.4 Fusion of high sub-bands

The features extracted and selected by the Xception model from the high sub-bands are then fused by using the coefficient of determination (R). R between η_I1(p, q) and η_I2(p, q) can be computed as:

R_N(\eta_{I_1}, \eta_{I_2}) = \frac{\left[\sum_{p=1}^{m}\sum_{q=1}^{n} \big(\eta_{I_1}(p, q) - \bar{\eta}_{I_1}\big)\big(\eta_{I_2}(p, q) - \bar{\eta}_{I_2}\big)\right]^2}{\sum_{p=1}^{m}\sum_{q=1}^{n} \big(\eta_{I_1}(p, q) - \bar{\eta}_{I_1}\big)^2 \times \sum_{p=1}^{m}\sum_{q=1}^{n} \big(\eta_{I_2}(p, q) - \bar{\eta}_{I_2}\big)^2} \qquad (4)

Here, \bar{\eta}_{I_1} and \bar{\eta}_{I_2} denote the averages of the respective high sub-bands.

The dominant features are preserved in the obtained feature maps as:

F_s(p, q) = \max\big(s_{I_1} \times R_N + s_{I_2} \times (1 - R_N)\big) \qquad (5)

Here, s_I1 and s_I2 denote the high sub-bands of I1 and I2, respectively.
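The sketches that follow illustrate the main computational steps of Sects. 3.2–3.4. First, feature extraction (Sect. 3.2): a minimal sketch assuming a TensorFlow/Keras Xception backbone with ImageNet weights and high sub-bands replicated to three channels; the helper name extract_features is illustrative rather than taken from the paper.

```python
# Sketch of high sub-band feature extraction with a pretrained Xception
# backbone (Sect. 3.2). Assumes TensorFlow/Keras; `extract_features` is an
# illustrative helper name, not the authors' implementation.
import numpy as np
import tensorflow as tf

backbone = tf.keras.applications.Xception(
    include_top=False, weights="imagenet", pooling=None)

def extract_features(high_band: np.ndarray, size=(299, 299)) -> np.ndarray:
    """Return an Xception feature map for one high-frequency sub-band."""
    band = np.asarray(high_band, dtype=np.float32)
    band = (band - band.min()) / (band.max() - band.min() + 1e-12)  # scale to [0, 1]
    rgb = np.repeat(band[..., None], 3, axis=-1)                    # 1 -> 3 channels
    rgb = tf.image.resize(rgb[None, ...], size).numpy()             # add batch dim
    rgb = tf.keras.applications.xception.preprocess_input(rgb * 255.0)
    return backbone.predict(rgb, verbose=0)[0]                      # (h, w, 2048)

# eta_I1 = extract_features(high_band_I1); eta_I2 = extract_features(high_band_I2)
```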

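Next, a minimal single-objective sketch of the differential-evolution loop of Sect. 3.3 (Eqs. 1–3). The paper optimises fusion factor and entropy as a multi-objective problem; here a generic scalar fitness f (to be minimised) stands in for that aggregation, so the function and parameter names are illustrative.

```python
# Sketch of the mutation / crossover / selection loop of Sect. 3.3 (Eqs. 1-3).
import numpy as np

def differential_evolution(f, dim, t_p=30, c_r=0.9, m_r=0.5, h_max=3000, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.random((t_p, dim))                 # beta^0: random initial solutions
    fit = np.array([f(x) for x in pop])
    evals = t_p
    while evals < h_max:                         # IV. stopping condition
        for a in range(t_p):
            d1, d2, d3 = rng.choice([i for i in range(t_p) if i != a], 3, replace=False)
            child = pop[d1] + m_r * (pop[d2] - pop[d3])       # Eq. (1): mutation
            cross = rng.random(dim) <= c_r
            cross[rng.integers(dim)] = True                   # position kappa_dn always crosses
            trial = np.where(cross, child, pop[a])            # Eq. (2): crossover
            trial_fit = f(trial)
            evals += 1
            if trial_fit <= fit[a]:                           # Eq. (3): selection
                pop[a], fit[a] = trial, trial_fit
    return pop[np.argmin(fit)]

# Example: select a binary feature mask by thresholding the best real-valued vector,
# with `fitness_of_mask` standing in for the fusion-factor/entropy objective.
# best = differential_evolution(lambda x: -fitness_of_mask(x > 0.5), dim=2048)
```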
13
2486 M. Kaur, D. Singh

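Finally for the high sub-bands, a sketch of the fusion rule of Sect. 3.4 (Eqs. 4–5). Eq. (5) is written as a max over an R_N-weighted combination; the sketch keeps the element-wise weighted combination, which is one plausible reading, and all names are illustrative.

```python
# Sketch of the coefficient-of-determination based high sub-band fusion
# (Sect. 3.4, Eqs. 4-5). eta1/eta2 are the (selected) Xception feature maps,
# s_I1/s_I2 the high sub-bands; names are illustrative.
import numpy as np

def coefficient_of_determination(eta1: np.ndarray, eta2: np.ndarray) -> float:
    """Eq. (4): squared correlation between the two feature maps."""
    d1 = eta1 - eta1.mean()
    d2 = eta2 - eta2.mean()
    num = (d1 * d2).sum() ** 2
    den = (d1 ** 2).sum() * (d2 ** 2).sum()
    return float(num / (den + 1e-12))

def fuse_high(s_I1: np.ndarray, s_I2: np.ndarray,
              eta1: np.ndarray, eta2: np.ndarray) -> np.ndarray:
    """Eq. (5): R_N-weighted combination of the two high sub-bands."""
    r_n = coefficient_of_determination(eta1, eta2)
    return s_I1 * r_n + s_I2 * (1.0 - r_n)   # one reading of the max(...) in Eq. (5)
```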
Fig. 1 Diagrammatic flow of the proposed multi-objective differential evolution based deep transfer learning model for multi-modality image fusion

3.5 Fusion of low sub-bands

Motivated by Hermessi et al. (2018), local energy is used to fuse the low sub-bands as:

\chi_I(p, q) = \sum_{p' \in \gamma}\sum_{q' \in \delta} \big|I(p + p', q + q')\big| \qquad (6)

Here, I = I1 or I2, and γ × δ represents the neighborhood of the patch placed at (p, q); the size of the local patch is set to 5 × 5. The fused coefficients of the low sub-bands are computed as:

\psi_f(p, q) = \begin{cases} \psi_{I_1}(p, q), & |\chi_{I_1}(p, q)| \ge |\chi_{I_2}(p, q)| \\ \psi_{I_2}(p, q), & |\chi_{I_1}(p, q)| < |\chi_{I_2}(p, q)| \end{cases} \qquad (7)
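A hedged sketch of the low sub-band fusion rule above (Eqs. 6–7), assuming SciPy is available: local energy over a 5 × 5 neighbourhood selects, per coefficient, the source whose response dominates.

```python
# Sketch of the local-energy based low sub-band fusion of Sect. 3.5 (Eqs. 6-7).
import numpy as np
from scipy.ndimage import uniform_filter

def local_energy(band: np.ndarray, patch: int = 5) -> np.ndarray:
    """Eq. (6): sum of absolute values over a patch x patch neighbourhood."""
    return uniform_filter(np.abs(band), size=patch, mode="reflect") * patch * patch

def fuse_low(psi_I1: np.ndarray, psi_I2: np.ndarray) -> np.ndarray:
    """Eq. (7): keep the coefficient whose local energy dominates."""
    chi1, chi2 = local_energy(psi_I1), local_energy(psi_I2)
    return np.where(np.abs(chi1) >= np.abs(chi2), psi_I1, psi_I2)
```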

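To show how the pieces of Sect. 3 fit together, a wiring sketch of the whole pipeline follows. nsct_decompose, nsct_reconstruct, and select_features are hypothetical stand-ins (no standard Python NSCT implementation is assumed; select_features would wrap the differential-evolution sketch above), while extract_features, fuse_high, and fuse_low refer to the earlier sketches.

```python
# Wiring sketch of the proposed fusion pipeline (Sect. 3); helper names marked
# as hypothetical are placeholders, not the authors' implementation.
import numpy as np

def fuse_pair(I1: np.ndarray, I2: np.ndarray) -> np.ndarray:
    low1, high1 = nsct_decompose(I1)          # NSCT analysis (hypothetical helper)
    low2, high2 = nsct_decompose(I2)

    eta1 = extract_features(high1)            # Sect. 3.2: Xception feature maps
    eta2 = extract_features(high2)
    mask = select_features(eta1, eta2)        # Sect. 3.3: DE-selected feature mask (hypothetical)
    fused_high = fuse_high(high1, high2, eta1 * mask, eta2 * mask)  # Eqs. (4)-(5)

    fused_low = fuse_low(low1, low2)          # Sect. 3.5, Eqs. (6)-(7)
    return nsct_reconstruct(fused_low, fused_high)   # inverse NSCT (hypothetical helper)
```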

Fig. 2 Architecture of the Xception model (obtained from Chollet 2017)

4 Experimental results

To evaluate the performance of the proposed approach, a benchmark multi-modality biomedical image dataset obtained from Ullah et al. (2020) is used. Fifteen different pairs of modality images are taken for comparative purposes. The main goal is to fuse these images to obtain multi-modality fused images. To draw the comparisons, six competitive multi-modality biomedical fusion approaches, namely LEPN (Zhu et al. 2019), DSA (Zhu et al. 2020), CNN (Kumar et al. 2020), EGDM (Lu et al. 2014), DDcGAN (Ma et al. 2020), and 3GANs (Wang et al. 2019), are also implemented on the same set of images. The hyper-parameters of these approaches are assigned as mentioned in their respective papers.

4.1 Visual analysis

Figures 3 and 4 represent the source images and their respective multi-modality fused biomedical images obtained from LEPN (Zhu et al. 2019), DSA (Zhu et al. 2020), CNN (Kumar et al. 2020), EGDM (Lu et al. 2014), DDcGAN (Ma et al. 2020), 3GANs (Wang et al. 2019), and the proposed approach. It is clearly shown that the obtained results have better modality as compared to the competitive approaches. Although the existing approaches such as LEPN (Zhu et al. 2019), DSA (Zhu et al. 2020), CNN (Kumar et al. 2020), EGDM (Lu et al. 2014), DDcGAN (Ma et al. 2020), and 3GANs (Wang et al. 2019) provide significant visual results, they show slight edge and texture distortion. Figures 3i and 4i show the results obtained from the proposed approach. These images prove that the proposed approach provides a better visual appearance of the obtained multi-modality fused images.


Fig. 3 Analysis of multi-modality biomedical fusion approaches: a MRI, b CT, c LEPN (Zhu et al. 2019), d DSA (Zhu et al. 2020), e CNN (Kumar et al. 2020), f EGDM (Lu et al. 2014), g DDcGAN (Ma et al. 2020), h 3GANs (Wang et al. 2019), and i proposed approach

4.2 Quantitative analysis

In this section, we have compared the proposed approach with the existing approaches, namely LEPN (Zhu et al. 2019), DSA (Zhu et al. 2020), CNN (Kumar et al. 2020), EGDM (Lu et al. 2014), DDcGAN (Ma et al. 2020), and 3GANs (Wang et al. 2019), by considering some well-known performance metrics. The selected performance measures are edge strength, fusion symmetry, entropy, and fusion factor (for mathematical details see Prakash et al. 2019).

A good multi-modality biomedical image fusion approach generally provides high entropy values. Table 1 depicts the entropy analysis of the proposed deep transfer learning-based multi-modality biomedical image fusion approach. It shows that the proposed approach provides significantly higher entropy values than the existing multi-modality biomedical image fusion approaches. It is found that the proposed approach provides a 1.8343% improvement over the best available approaches.

Mutual information represents the details preserved from the source images in the fused image; therefore, it is desirable for it to be maximum. Table 2 depicts the mutual information analysis of the proposed approach against the competitive approaches. The proposed approach shows an average improvement of 1.8373%.

The fusion factor is a well-known performance metric that shows the strength of the fusion process; it is desirable for it to be maximum. Table 3 shows the fusion factor analysis of the proposed and competitive approaches. The proposed approach shows an average improvement of 1.3928% over the competitive fusion models.

Fusion symmetry evaluates the symmetric details between the source and fused images; it is also desirable for it to be maximum.
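For reference, a hedged sketch of how the entropy, mutual information, fusion factor, and fusion symmetry scores could be computed for 8-bit grayscale images. These follow common formulations (cf. Prakash et al. 2019); the exact implementations used for Tables 1, 2, 3 and 4 may differ, so the sketch is illustrative only.

```python
# Sketch of the quantitative metrics of Sect. 4.2 for 8-bit grayscale images.
import numpy as np

def entropy(img: np.ndarray) -> float:
    hist, _ = np.histogram(img, bins=256, range=(0, 256), density=True)
    hist = hist[hist > 0]
    return float(-(hist * np.log2(hist)).sum())

def mutual_information(a: np.ndarray, b: np.ndarray) -> float:
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=256, range=[[0, 256]] * 2)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def fusion_factor(src1, src2, fused) -> float:
    # Sum of the information each source shares with the fused image.
    return mutual_information(src1, fused) + mutual_information(src2, fused)

def fusion_symmetry(src1, src2, fused) -> float:
    # One common form: closer to 2 means the fused image draws evenly on both sources.
    i1, i2 = mutual_information(src1, fused), mutual_information(src2, fused)
    return 2.0 - abs(i1 / (i1 + i2) - 0.5)
```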


Fig. 4 Analysis of multi-modality biomedical fusion approaches: a MRI, b CT, c LEPN (Zhu et al. 2019), d DSA (Zhu et al. 2020), e CNN (Kumar et al. 2020), f EGDM (Lu et al. 2014), g DDcGAN (Ma et al. 2020), h 3GANs (Wang et al. 2019), and i proposed approach

Table 4 shows the fusion symmetry analysis of the proposed deep transfer learning based multi-modality fusion model. It is found that the proposed model achieves an average improvement of 1.1974% over the competitive models.

Edge strength evaluates the degree of edge preservation, and it is desirable for it to be maximum (Xydeas and Petrovic 2000). Table 5 shows the edge strength analysis of the proposed deep transfer learning-based multi-modality image fusion model. The proposed model achieves an average improvement of 1.6928% over the competitive approaches.

5 Conclusion

From the literature review, it has been found that multi-modality image fusion is still an open area for research. Deep learning-based fusion approaches are found to be among the most promising techniques to obtain better multi-modality fused biomedical images. However, these approaches are computationally complex in nature and still suffer from the under-fitting issue.


Table 1 Comparative analysis among the proposed deep transfer learning based multi-modality image fusion approach and the competitive approaches in terms of entropy (maximum is better)

Group  LEPN (Zhu et al. 2019)  DSA (Zhu et al. 2020)  CNN (Kumar et al. 2020)  EGDM (Lu et al. 2014)  DDcGAN (Ma et al. 2020)  3GANs (Wang et al. 2019)  Proposed

G1 6.22 ± 0.75 6.33 ± 0.69 6.25 ± 0.67 6.56 ± 0.55 6.43 ± 0.68 6.73 ± 0.09 6.77 ± 0.35
G2 6.18 ± 0.64 6.04 ± 0.62 6.18 ± 0.54 6.23 ± 0.45 6.73 ± 0.58 6.33 ± 0.08 6.77 ± 0.43
G3 6.43 ± 0.75 6.35 ± 0.55 5.95 ± 0.45 6.48 ± 0.54 6.68 ± 0.55 6.42 ± 0.45 6.71 ± 0.41
G4 6.09 ± 0.55 5.93 ± 0.55 6.29 ± 0.55 6.46 ± 0.45 6.47 ± 0.58 6.62 ± 0.57 6.69 ± 0.45
G5 6.38 ± 0.72 6.44 ± 0.75 6.64 ± 0.64 6.26 ± 0.56 6.66 ± 0.47 6.57 ± 0.45 6.79 ± 0.39
G6 6.28 ± 0.73 5.94 ± 0.65 6.39 ± 0.45 6.31 ± 0.44 6.53 ± 0.55 6.66 ± 0.35 6.73 ± 0.45
G7 6.17 ± 0.66 5.96 ± 0.75 6.26 ± 0.55 6.58 ± 0.45 6.69 ± 0.68 6.65 ± 0.35 6.72 ± 0.35
G8 5.91 ± 0.55 5.96 ± 0.55 6.06 ± 0.49 6.46 ± 0.46 6.28 ± 0.48 6.57 ± 0.35 6.81 ± 0.43
G9 5.96 ± 0.76 6.19 ± 0.65 6.33 ± 0.47 6.74 ± 0.62 6.26 ± 0.52 6.75 ± 0.48 6.71 ± 0.41
G10 5.93 ± 0.75 5.97 ± 0.53 6.38 ± 0.45 6.36 ± 0.54 6.79 ± 0.68 6.22 ± 0.53 6.66 ± 0.45
G11 6.05 ± 0.66 6.33 ± 0.75 6.16 ± 0.47 6.77 ± 0.69 6.55 ± 0.43 6.77 ± 0.54 6.69 ± 0.39
G12 6.18 ± 0.75 5.98 ± 0.55 6.82 ± 0.55 6.31 ± 0.64 6.44 ± 0.47 6.46 ± 0.57 6.75 ± 0.43
G13 6.12 ± 0.65 6.32 ± 0.75 6.19 ± 0.66 6.39 ± 0.45 6.37 ± 0.55 6.72 ± 0.45 6.65 ± 0.41
G14 6.31 ± 0.55 5.94 ± 0.65 6.14 ± 0.45 6.39 ± 0.67 6.77 ± 0.62 6.23 ± 0.45 6.75 ± 0.45
G15 5.96 ± 0.75 6.38 ± 0.75 5.93 ± 0.53 6.64 ± 0.64 6.44 ± 0.69 6.25 ± 0.57 6.78 ± 0.39

Table 2 Comparative analysis among the proposed deep transfer learning based multi-modality image fusion approach and the competitive approaches in terms of mutual information (maximum is better)

Group  LEPN (Zhu et al. 2019)  DSA (Zhu et al. 2020)  CNN (Kumar et al. 2020)  EGDM (Lu et al. 2014)  DDcGAN (Ma et al. 2020)  3GANs (Wang et al. 2019)  Proposed

G1 0.63 ± 0.013 0.62 ± 0.011 0.61 ± 0.012 0.59 ± 0.012 0.61 ± 0.008 0.62 ± 0.009 0.63 ± 0.009
G2 0.61 ± 0.014 0.61 ± 0.012 0.62 ± 0.012 0.61 ± 0.009 0.61 ± 0.011 0.64 ± 0.009 0.61 ± 0.009
G3 0.63 ± 0.014 0.62 ± 0.011 0.62 ± 0.008 0.63 ± 0.008 0.59 ± 0.011 0.63 ± 0.008 0.63 ± 0.007
G4 0.61 ± 0.013 0.62 ± 0.012 0.63 ± 0.012 0.61 ± 0.012 0.63 ± 0.012 0.62 ± 0.008 0.63 ± 0.007
G5 0.61 ± 0.011 0.63 ± 0.012 0.63 ± 0.011 0.63 ± 0.012 0.64 ± 0.011 0.64 ± 0.012 0.63 ± 0.008
G6 0.64 ± 0.014 0.61 ± 0.013 0.63 ± 0.012 0.59 ± 0.012 0.64 ± 0.011 0.61 ± 0.008 0.64 ± 0.009
G7 0.63 ± 0.014 0.59 ± 0.012 0.59 ± 0.012 0.63 ± 0.009 0.64 ± 0.009 0.59 ± 0.007 0.64 ± 0.009
G8 0.64 ± 0.014 0.62 ± 0.013 0.59 ± 0.012 0.61 ± 0.008 0.61 ± 0.009 0.61 ± 0.012 0.64 ± 0.007
G9 0.62 ± 0.011 0.59 ± 0.011 0.59 ± 0.009 0.63 ± 0.009 0.62 ± 0.011 0.59 ± 0.008 0.63 ± 0.009
G10 0.63 ± 0.015 0.63 ± 0.015 0.63 ± 0.008 0.63 ± 0.009 0.59 ± 0.011 0.61 ± 0.007 0.63 ± 0.009
G11 0.62 ± 0.012 0.62 ± 0.013 0.59 ± 0.011 0.62 ± 0.012 0.62 ± 0.008 0.62 ± 0.009 0.62 ± 0.007
G12 0.59 ± 0.011 0.59 ± 0.013 0.61 ± 0.008 0.62 ± 0.012 0.62 ± 0.013 0.62 ± 0.009 0.62 ± 0.007
G13 0.61 ± 0.012 0.63 ± 0.012 0.62 ± 0.012 0.62 ± 0.012 0.63 ± 0.008 0.63 ± 0.008 0.63 ± 0.008
G14 0.61 ± 0.013 0.63 ± 0.011 0.61 ± 0.008 0.61 ± 0.011 0.62 ± 0.011 0.64 ± 0.012 0.63 ± 0.007
G15 0.59 ± 0.015 0.64 ± 0.011 0.63 ± 0.008 0.63 ± 0.011 0.64 ± 0.011 0.59 ± 0.008 0.64 ± 0.008

The proposed approach initially decomposes the image into sub-bands using the non-subsampled contourlet transform (NSCT). Thereafter, an extreme version of the Inception model (Xception) is used for feature extraction from the source images, and multi-objective differential evolution is used to select the optimal features. The fused coefficients are then obtained using the coefficient of determination and the energy loss based fusion functions. Finally, the fused image is computed by applying the inverse NSCT. Extensive experimental results have shown that the proposed approach outperforms the competitive multi-modality image fusion approaches in terms of various performance metrics. In the near future, the proposed model may be used for other applications such as remote sensing images (Singh et al. 2018; Singh and Kumar 2019a) and medical images. Additionally, the proposed hyper-parameter tuning approach can be used to tune the hyper-parameters of other approaches such as visibility restoration models


Table 3 Comparative analysis among the proposed deep transfer learning based multi-modality image fusion approach and the competitive approaches in terms of fusion factor (maximum is better)

Group  LEPN (Zhu et al. 2019)  DSA (Zhu et al. 2020)  CNN (Kumar et al. 2020)  EGDM (Lu et al. 2014)  DDcGAN (Ma et al. 2020)  3GANs (Wang et al. 2019)  Proposed

G1 1.26 ± 0.026 1.22 ± 0.022 1.22 ± 0.022 1.22 ± 0.031 1.24 ± 0.022 1.26 ± 0.026 1.26 ± 0.022
G2 1.22 ± 0.022 1.26 ± 0.028 1.18 ± 0.032 1.22 ± 0.029 1.22 ± 0.026 1.28 ± 0.025 1.29 ± 0.022
G3 1.26 ± 0.022 1.24 ± 0.025 1.21 ± 0.033 1.24 ± 0.022 1.26 ± 0.022 1.24 ± 0.028 1.26 ± 0.022
G4 1.23 ± 0.021 1.22 ± 0.024 1.22 ± 0.022 1.22 ± 0.025 1.28 ± 0.031 1.26 ± 0.022 1.28 ± 0.021
G5 1.22 ± 0.03 1.24 ± 0.022 1.21 ± 0.026 1.28 ± 0.027 1.24 ± 0.024 1.24 ± 0.021 1.28 ± 0.021
G6 1.26 ± 0.024 1.22 ± 0.035 1.28 ± 0.028 1.22 ± 0.021 1.28 ± 0.028 1.28 ± 0.028 1.28 ± 0.021
G7 1.22 ± 0.028 1.24 ± 0.025 1.21 ± 0.026 1.22 ± 0.022 1.24 ± 0.031 1.24 ± 0.022 1.24 ± 0.022
G8 1.24 ± 0.021 1.22 ± 0.028 1.22 ± 0.022 1.26 ± 0.021 1.26 ± 0.026 1.24 ± 0.021 1.26 ± 0.021
G9 1.24 ± 0.031 1.26 ± 0.024 1.21 ± 0.024 1.24 ± 0.022 1.18 ± 0.022 1.27 ± 0.022 1.26 ± 0.022
G10 1.24 ± 0.022 1.28 ± 0.028 1.24 ± 0.023 1.21 ± 0.024 1.24 ± 0.024 1.27 ± 0.026 1.28 ± 0.022
G11 1.22 ± 0.026 1.28 ± 0.028 1.26 ± 0.026 1.21 ± 0.032 1.26 ± 0.022 1.24 ± 0.024 1.28 ± 0.022
G12 1.24 ± 0.031 1.26 ± 0.028 1.26 ± 0.022 1.26 ± 0.028 1.24 ± 0.032 1.26 ± 0.021 1.26 ± 0.021
G13 1.28 ± 0.021 1.28 ± 0.026 1.24 ± 0.022 1.24 ± 0.026 1.18 ± 0.024 1.27 ± 0.024 1.28 ± 0.021
G14 1.22 ± 0.022 1.24 ± 0.021 1.26 ± 0.023 1.22 ± 0.022 1.24 ± 0.024 1.28 ± 0.022 1.28 ± 0.021
G15 1.24 ± 0.028 1.22 ± 0.022 1.24 ± 0.023 1.24 ± 0.026 1.18 ± 0.024 1.22 ± 0.026 1.24 ± 0.022

Table 4 Comparative analysis among the proposed deep transfer learning based multi-modality image fusion approach and the competitive approaches in terms of fusion symmetry (maximum is better)

Group  LEPN (Zhu et al. 2019)  DSA (Zhu et al. 2020)  CNN (Kumar et al. 2020)  EGDM (Lu et al. 2014)  DDcGAN (Ma et al. 2020)  3GANs (Wang et al. 2019)  Proposed

G1 0.31 ± 0.014 0.31 ± 0.014 0.31 ± 0.012 0.31 ± 0.012 0.32 ± 0.008 0.32 ± 0.012 0.33 ± 0.008
G2 0.31 ± 0.012 0.32 ± 0.013 0.32 ± 0.012 0.32 ± 0.013 0.31 ± 0.014 0.32 ± 0.012 0.33 ± 0.007
G3 0.29 ± 0.012 0.31 ± 0.012 0.31 ± 0.012 0.31 ± 0.014 0.32 ± 0.015 0.33 ± 0.008 0.34 ± 0.008
G4 0.32 ± 0.013 0.31 ± 0.012 0.29 ± 0.009 0.31 ± 0.011 0.31 ± 0.008 0.31 ± 0.007 0.33 ± 0.007
G5 0.31 ± 0.015 0.32 ± 0.012 0.31 ± 0.011 0.31 ± 0.011 0.32 ± 0.008 0.31 ± 0.007 0.33 ± 0.007
G6 0.31 ± 0.013 0.31 ± 0.013 0.31 ± 0.013 0.33 ± 0.008 0.31 ± 0.008 0.31 ± 0.009 0.34 ± 0.008
G7 0.29 ± 0.015 0.32 ± 0.013 0.32 ± 0.012 0.32 ± 0.014 0.31 ± 0.011 0.32 ± 0.007 0.33 ± 0.007
G8 0.31 ± 0.011 0.31 ± 0.014 0.32 ± 0.009 0.31 ± 0.012 0.31 ± 0.011 0.33 ± 0.007 0.34 ± 0.007
G9 0.29 ± 0.012 0.32 ± 0.012 0.31 ± 0.009 0.31 ± 0.013 0.32 ± 0.008 0.32 ± 0.009 0.33 ± 0.008
G10 0.31 ± 0.014 0.31 ± 0.015 0.31 ± 0.008 0.32 ± 0.014 0.31 ± 0.011 0.32 ± 0.008 0.33 ± 0.008
G11 0.28 ± 0.015 0.31 ± 0.012 0.31 ± 0.014 0.32 ± 0.015 0.32 ± 0.008 0.31 ± 0.012 0.33 ± 0.008
G12 0.29 ± 0.011 0.32 ± 0.015 0.32 ± 0.012 0.32 ± 0.012 0.32 ± 0.013 0.31 ± 0.008 0.33 ± 0.008
G13 0.31 ± 0.013 0.32 ± 0.015 0.31 ± 0.009 0.31 ± 0.012 0.33 ± 0.009 0.33 ± 0.008 0.34 ± 0.008
G14 0.32 ± 0.012 0.31 ± 0.012 0.32 ± 0.012 0.32 ± 0.011 0.32 ± 0.011 0.31 ± 0.007 0.33 ± 0.007
G15 0.31 ± 0.014 0.31 ± 0.013 0.32 ± 0.012 0.31 ± 0.013 0.31 ± 0.008 0.32 ± 0.012 0.33 ± 0.008

(Osterland and Weber 2019; Singh and Kumar 2018, 2019b; Wang et al. 2019; Singh et al. 2019a, 2019b), filtering models (Gupta et al. 2019; Kaur et al. 2020; Wiens 2019), deep learning models (Jaiswal et al. 2020; Basavegowda and Dagnew 2020; Kaur et al. 2019, 2020; Ghosh et al. 2020), etc.


Table 5 Comparative analysis among the proposed deep transfer learning based multi-modality image fusion approach and the competitive approaches in terms of edge strength (maximum is better)

Group  LEPN (Zhu et al. 2019)  DSA (Zhu et al. 2020)  CNN (Kumar et al. 2020)  EGDM (Lu et al. 2014)  DDcGAN (Ma et al. 2020)  3GANs (Wang et al. 2019)  Proposed

G1 0.61 ± 0.012 0.62 ± 0.014 0.64 ± 0.012 0.63 ± 0.013 0.59 ± 0.012 0.63 ± 0.012 0.64 ± 0.008
G2 0.62 ± 0.014 0.62 ± 0.015 0.62 ± 0.012 0.62 ± 0.012 0.62 ± 0.011 0.61 ± 0.009 0.62 ± 0.009
G3 0.61 ± 0.012 0.64 ± 0.015 0.62 ± 0.008 0.63 ± 0.009 0.62 ± 0.012 0.62 ± 0.012 0.64 ± 0.008
G4 0.61 ± 0.013 0.63 ± 0.014 0.63 ± 0.012 0.63 ± 0.008 0.63 ± 0.012 0.63 ± 0.009 0.63 ± 0.008
G5 0.63 ± 0.013 0.62 ± 0.013 0.64 ± 0.012 0.62 ± 0.011 0.64 ± 0.012 0.62 ± 0.007 0.64 ± 0.007
G6 0.61 ± 0.014 0.61 ± 0.011 0.62 ± 0.011 0.62 ± 0.011 0.63 ± 0.008 0.62 ± 0.012 0.63 ± 0.008
G7 0.62 ± 0.011 0.64 ± 0.012 0.64 ± 0.011 0.63 ± 0.012 0.62 ± 0.009 0.59 ± 0.007 0.64 ± 0.007
G8 0.63 ± 0.015 0.61 ± 0.011 0.62 ± 0.011 0.62 ± 0.009 0.64 ± 0.009 0.63 ± 0.007 0.64 ± 0.007
G9 0.62 ± 0.014 0.62 ± 0.012 0.61 ± 0.008 0.62 ± 0.012 0.63 ± 0.011 0.62 ± 0.008 0.63 ± 0.008
G10 0.59 ± 0.013 0.64 ± 0.012 0.63 ± 0.008 0.62 ± 0.011 0.63 ± 0.012 0.59 ± 0.008 0.64 ± 0.008
G11 0.62 ± 0.015 0.64 ± 0.015 0.62 ± 0.011 0.61 ± 0.008 0.63 ± 0.009 0.61 ± 0.007 0.64 ± 0.007
G12 0.62 ± 0.011 0.61 ± 0.013 0.61 ± 0.013 0.63 ± 0.012 0.64 ± 0.011 0.62 ± 0.008 0.64 ± 0.008
G13 0.61 ± 0.013 0.63 ± 0.013 0.63 ± 0.008 0.61 ± 0.009 0.63 ± 0.012 0.63 ± 0.009 0.63 ± 0.008
G14 0.63 ± 0.014 0.64 ± 0.014 0.59 ± 0.012 0.64 ± 0.012 0.63 ± 0.009 0.63 ± 0.012 0.64 ± 0.009
G15 0.63 ± 0.012 0.63 ± 0.015 0.61 ± 0.008 0.62 ± 0.009 0.63 ± 0.012 0.59 ± 0.009 0.63 ± 0.008

References

Algarni AD (2020) Automated medical diagnosis system based on multi-modality image fusion and deep learning. Wirel Pers Commun 111:1033–1058
Babu B, Chakole PG, Mubeen JS (2005) Multiobjective differential evolution (MODE) for optimization of adiabatic styrene reactor. Chem Eng Sci 60(17):4822–4837
Basavegowda HS, Dagnew G (2020) Deep learning approach for microarray cancer data classification. CAAI Trans Intell Technol 5(1):22–33
Chollet F (2017) Xception: deep learning with depthwise separable convolutions. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 1251–1258
Da Cunha AL, Zhou J, Do MN (2006) The nonsubsampled contourlet transform: theory, design, and applications. IEEE Trans Image Process 15(10):3089–3101
Daniel E, Anitha J, Kamaleshwaran K, Rani I (2017) Optimum spectrum mask based medical image fusion using gray wolf optimization. Biomed Signal Process Control 34:36–43
Du J, Li W, Lu K, Xiao B (2016) An overview of multi-modal medical image fusion. Neurocomputing 215:3–20
Gai D, Shen X, Cheng H, Chen H (2019) Medical image fusion via PCNN based on edge preservation and improved sparse representation in NSST domain. IEEE Access 7:85413–85429. https://doi.org/10.1109/ACCESS.2019.2925424
Ghosh S, Shivakumara P, Roy P, Pal U, Lu T (2020) Graphology based handwritten character analysis for human behaviour identification. CAAI Trans Intell Technol 5(1):55–65
Gupta B, Tiwari M, Lamba SS (2019) Visibility improvement and mass segmentation of mammogram images using quantile separated histogram equalisation with local contrast enhancement. CAAI Trans Intell Technol 4(2):73–79
Hermessi H, Mourali O, Zagrouba E (2018) Convolutional neural network-based multimodal image fusion via similarity learning in the shearlet domain. Neural Comput Appl 30(7):2029–2045
Hu Q, Hu S, Zhang F (2020) Multi-modality medical image fusion based on separable dictionary learning and Gabor filtering. Signal Process Image Commun 83:115758
Jaiswal A, Gianchandani N, Singh D, Kumar V, Kaur M (2020) Classification of the COVID-19 infected patients using DenseNet201 based deep transfer learning. J Biomol Struct Dyn. https://doi.org/10.1080/07391102.2020.1788642
James AP, Dasarathy BV (2014) Medical image fusion: a survey of the state of the art. Inf Fusion 19:4–19
Kaur M, Singh D (2019) Fusion of medical images using deep belief networks. Clust Comput. https://doi.org/10.1007/s10586-019-02999-x
Kaur M, Gianey HK, Singh D, Sabharwal M (2019) Multi-objective differential evolution based random forest for e-health applications. Mod Phys Lett B 33(05):1950022
Kaur M, Singh D, Kumar V, Sun K (2020) Color image dehazing using gradient channel prior and guided l0 filter. Inf Sci 521:326–342. https://doi.org/10.1016/j.ins.2020.02.048
Kaur M, Singh D, Uppal RS (2020) Parallel strength Pareto evolutionary algorithm-II based image encryption. IET Image Process 14(6):1015–1026
Kumar A, Fulham M, Feng D, Kim J (2020) Co-learning feature fusion maps from PET-CT images of lung cancer. IEEE Trans Med Imaging 39(1):204–217. https://doi.org/10.1109/TMI.2019.2923601
Lifeng Y, Donglin Z, Weidong W, Shanglian B (2001) Multi-modality medical image fusion based on wavelet analysis and quality evaluation. J Syst Eng Electron 12(1):42–48
Liu X, Mei W, Du H (2018) Multi-modality medical image fusion based on image decomposition framework and nonsubsampled shearlet transform. Biomed Signal Process Control 40:343–350
Liu Z, Wu J, Fu L, Majeed Y, Feng Y, Li R, Cui Y (2020) Improved kiwifruit detection using pre-trained VGG16 with RGB and NIR information fusion. IEEE Access 8:2327–2336. https://doi.org/10.1109/ACCESS.2019.2962513
Lu Y, Zhao J, Wang G (2014) Edge-guided dual-modality image reconstruction. IEEE Access 2:1359–1363. https://doi.org/10.1109/ACCESS.2014.2371994


Ma J, Xu H, Jiang J, Mei X, Zhang X (2020) DDcGAN: a dual-discriminator conditional generative adversarial network for multi-resolution image fusion. IEEE Trans Image Process 29:4980–4995. https://doi.org/10.1109/TIP.2020.2977573
Osterland S, Weber J (2019) Analytical analysis of single-stage pressure relief valves. Int J Hydromech 2(1):32–53
Pannu HS, Singh D, Malhi AK (2018) Improved particle swarm optimization based adaptive neuro-fuzzy inference system for benzene detection. CLEAN-Soil Air Water 46(5):1700162
Pannu HS, Singh D, Malhi AK (2019) Multi-objective particle swarm optimization-based adaptive neuro-fuzzy inference system for benzene monitoring. Neural Comput Appl 31:2195–2205
Prakash O, Park CM, Khare A, Jeon M, Gwak J (2019) Multiscale fusion of multimodal medical images using lifting scheme based biorthogonal wavelet transform. Optik 182:995–1014
Ravi P, Krishnan J (2018) Image enhancement with medical image fusion using multiresolution discrete cosine transform. In: Materials Today: Proceedings 5(1, Part 1), pp 1936–1942. International Conference on Processing of Materials, Minerals and Energy (July 29th–30th 2016), Ongole, Andhra Pradesh, India
Singh D, Kumar V (2018) Dehazing of outdoor images using notch based integral guided filter. Multimed Tools Appl 77(20):27363–27386
Singh D, Kumar V (2019a) A comprehensive review of computational dehazing techniques. Arch Comput Methods Eng 26(5):1395–1413
Singh D, Kumar V (2019b) Single image defogging by gain gradient image filter. Sci China Inf Sci 62(7):79101
Singh D, Kaur M, Singh H (2018) Remote sensing image fusion using fuzzy logic and gyrator transform. Remote Sens Lett 9(10):942–951
Singh D, Kumar V, Kaur M (2019a) Single image dehazing using gradient channel prior. Appl Intell 49(12):4276–4293
Singh D, Kumar V, Kaur M (2019b) Image dehazing using window-based integrated means filter. Multimed Tools Appl. https://doi.org/10.1007/s11042-019-08286-6
Tavard F, Simon A, Leclercq C, Donal E, Hernández AI, Garreau M (2014) Multimodal registration and data fusion for cardiac resynchronization therapy optimization. IEEE Trans Med Imaging 33(6):1363–1372. https://doi.org/10.1109/TMI.2014.2311694
Ullah H, Ullah B, Wu L, Abdalla FY, Ren G, Zhao Y (2020) Multi-modality medical images fusion based on local-features fuzzy sets and novel sum-modified-laplacian in non-subsampled shearlet transform domain. Biomed Signal Process Control 57:101724
Wang C, Zhao Z, Ren Q, Xu Y, Yu Y (2019) Multi-modality anatomical and functional medical image fusion based on simplified-spatial frequency-pulse coupled neural networks and region energy-weighted average strategy in non-sub sampled contourlet transform domain. J Med Imaging Health Inform 9(5):1017–1027
Wang Y, Zhou L, Yu B, Wang L, Zu C, Lalush DS, Lin W, Wu X, Zhou J, Shen D (2019) 3D auto-context-based locality adaptive multi-modality GANs for PET synthesis. IEEE Trans Med Imaging 38(6):1328–1339. https://doi.org/10.1109/TMI.2018.2884053
Wang R, Yu H, Wang G, Zhang G, Wang W (2019) Study on the dynamic and static characteristics of gas static thrust bearing with micro-hole restrictors. Int J Hydromech 2(3):189–202
Wang K, Zheng M, Wei H, Qi G, Li Y (2020) Multi-modality medical image fusion using convolutional neural network and contrast pyramid. Sensors 20(8):2169
Wiens T (2019) Engine speed reduction for hydraulic machinery using predictive algorithms. Int J Hydromech 2(1):16–31
Xia K-J, Yin H-S, Wang J-Q (2019) A novel improved deep convolutional neural network model for medical image fusion. Clust Comput 22(1):1515–1527
Xydeas C, Petrovic V (2000) Objective image fusion performance measure. Electron Lett 36(4):308–309
Zhou T, Ruan S, Canu S (2019) A review: deep learning for medical image segmentation using multi-modality fusion. Array 3:100004
Zhu Z, Chai Y, Yin H, Li Y, Liu Z (2016) A novel dictionary learning approach for multi-modality medical image fusion. Neurocomputing 214:471–482
Zhu Z, Zheng M, Qi G, Wang D, Xiang Y (2019) A phase congruency and local laplacian energy based multi-modality medical image fusion method in NSCT domain. IEEE Access 7:20811–20824. https://doi.org/10.1109/ACCESS.2019.2898111
Zhu R, Li X, Zhang X, Ma M (2020) MRI and CT medical image fusion based on synchronized-anisotropic diffusion model. IEEE Access 8:91336–91350. https://doi.org/10.1109/ACCESS.2020.2993493

Publisher's Note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
