Multi-modality medical image fusion technique using multi-objective differential evolution based deep neural networks

M. Kaur · D. Singh
https://doi.org/10.1007/s12652-020-02386-0
ORIGINAL RESEARCH
Received: 28 August 2019 / Accepted: 22 July 2020 / Published online: 8 August 2020
© Springer-Verlag GmbH Germany, part of Springer Nature 2020
Abstract
The advancements in automated diagnostic tools allow researchers to obtain more and more information from medical images. Recently, multi-modality images have been used to obtain more informative medical images; these images carry significantly more information than traditional single-modality medical images. However, the construction of multi-modality images is not an easy task. The proposed approach initially decomposes the images into sub-bands using the non-subsampled contourlet transform (NSCT). Thereafter, an extreme version of the Inception model (Xception) is used for feature extraction from the source images. Multi-objective differential evolution is used to select the optimal features. Thereafter, the coefficient of determination and energy loss based fusion functions are used to obtain the fused coefficients. Finally, the fused image is computed by applying the inverse NSCT. Extensive experimental results show that the proposed approach outperforms competitive multi-modality image fusion approaches.
a deep transfer learning-based multi-modality biomedical fusion approach can provide better results.

The main contributions are summarized as follows:

• A multi-objective differential evolution and Xception model based multi-modality biomedical fusion model is proposed.
• The proposed approach initially decomposes the images into sub-bands using a non-subsampled contourlet transform (NSCT).
• An extreme version of the Inception model (Xception) is then used for feature extraction from the source images.
• Multi-objective differential evolution is used to select the optimal features.
• To obtain the fused coefficients, coefficient of determination and energy loss based fusion functions are used.
• Finally, a fused image is computed by applying the inverse NSCT.
• The proposed and the competitive approaches are compared on a benchmark multi-modality image fusion dataset.

The remainder of this paper is organized as follows: existing literature is presented in Sect. 2. The proposed medical image fusion approach is illustrated in Sect. 3. Experimental results and comparative analyses are discussed in Sect. 4. Section 5 concludes the work.

2 Literature review

Zhu et al. (2019) implemented a local Laplacian energy and phase congruency based fusion approach in the NSCT domain (LEPN). The local Laplacian energy utilizes the weighted local energy and the sum of Laplacian coefficients to obtain the regulated details and features of the input images. Zhu et al. (2020) designed a diffusion-based approach using synchronized-anisotropic operators (DSA). A maximum absolute value constraint was also utilized for base-layer fusion. The fusion decision map was computed by considering the sum of the modified anisotropic Laplacian approach, using the similar corrosion sub-bands obtained from the anisotropic diffusion. Kumar et al. (2020) proposed co-learning based fusion maps to obtain more efficient multi-modality fused biomedical images. A convolutional neural network (CNN) was also used for the prediction and segmentation of potential objects.

Lu et al. (2014) designed an edge-guided dual-modality (EGDM) approach to obtain multi-modality images. It performs significantly well even on highly under-sampled data. Lifeng et al. (2001) utilized wavelet and quality analysis to obtain biomedical multi-modality fused images; the pyramid wavelet was used for the fusion process. Ma et al. (2020) designed a dual-discriminator conditional generative adversarial network (DDcGAN) to obtain multi-modality fused images. It obtains a real-like fused image by using the content loss to dupe both discriminators. The two discriminators were designed with the intention of differentiating the composition variations between the fused image and the respective source images.

Wang et al. (2019) developed 3D auto-context-based locality adaptive multi-modality GANs (3GANs) to obtain more efficient multi-modality fused images. A non-unified kernel was also used along with the adaptive approach for multi-modality fusion. Gai et al. (2019) utilized a pulse coupled neural network (PCNN) by considering edge preservation and enhanced sparse representation in the non-subsampled shearlet transform (NSST) domain. It fully utilizes the features of the various modalities, handles edge details well, and improves the results.

Liu et al. (2020) utilized a VGG16 based deep transfer learning approach for image fusion to improve the classification process. VGG16 can obtain more efficient features than traditional deep learning approaches. The obtained features were then used to fuse the multi-modality biomedical images. Tavard et al. (2014) designed a multimodal registration and fusion approach to improve cardiac resynchronization; the approach helps in improving therapy optimization.

Zhu et al. (2016) designed a novel dictionary learning-based image fusion approach for multi-modality biomedical images. Due to the use of dictionary learning, this approach achieves higher accuracy, but it is computationally complex in nature. Liu et al. (2018) proposed a biomedical image decomposition approach using NSST to fuse multi-modality images. This approach has shown significant results over the existing approaches but suffers from an edge degradation issue. Wang et al. (2020) utilized a CNN and a contrast pyramid to fuse biomedical multi-modality images. A CNN can fuse the images efficiently; however, it is computationally expensive and sometimes does not provide promising results when the biomedical images are very similar to each other.

From the literature review, it has been observed that multi-modality image fusion is still an open research area. Deep learning is found to be the most promising technique for obtaining better multi-modality fused biomedical images, but these approaches provide better results only when pre-trained deep transfer learning models are used. Additionally, the initial parameter selection of deep learning and deep transfer learning approaches is a challenging issue (Pannu et al. 2019; Kaur and Singh 2019; Pannu et al. 2018). Therefore, in this paper, a well-known multi-objective differential evolution is used to enhance the results.
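Sect. 3 employs a pre-trained Xception network to extract features from the high sub-bands. As a concrete illustration, a hedged sketch of such an extractor follows; the input handling (resizing to 299×299 and replicating the single grey channel to three) is an assumption, since the text does not state how sub-bands are fed to the ImageNet-pretrained network.

```python
# Hedged sketch of a pre-trained Xception feature extractor for a
# single-channel sub-band. The resize/replication steps are assumptions,
# not the authors' stated pre-processing.
import numpy as np
import tensorflow as tf

backbone = tf.keras.applications.Xception(
    include_top=False, weights="imagenet", pooling=None)

def xception_features(band: np.ndarray) -> np.ndarray:
    """Map a single-channel sub-band to Xception feature maps."""
    x = tf.image.resize(band[None, ..., None], (299, 299))  # assumed input size
    x = tf.repeat(x, 3, axis=-1)                            # grey -> 3 channels
    x = tf.keras.applications.xception.preprocess_input(x * 255.0)
    return backbone(x, training=False).numpy()[0]

feats = xception_features(np.random.rand(128, 128).astype("float32"))
```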
CNN may suffer from the under-fitting issue, as many potential features may not be extracted. To overcome this issue, an extreme version of the Inception model (Xception) is used. Figure 2 represents the block diagram of the Xception model (for mathematical and other information, please see Chollet 2017).

Both high sub-bands of the source images are fed in parallel to the Xception model. Consider $\eta_{I_1}(p,q)$ and $\eta_{I_2}(p,q)$ to be the features obtained from the respective high sub-bands by using the Xception model.

3.3 Feature selection using multi-objective differential evolution

In this step, the optimal features are selected from the features obtained from the Xception model. The fusion factor and entropy metrics are used as the fitness functions to select the optimal features. Multi-objective differential evolution can solve many computationally complex problems (Babu et al. 2005) and can effectively balance fast convergence against population diversity. It can be described in the following steps:

IV. Stopping condition: if the number of function evaluations is less than the total available evaluations, then Steps II and III are repeated.

3.4 Fusion of high sub-bands

The features extracted and selected by the Xception model from the high sub-bands are then fused by using the coefficient of determination ($R$). $R_N$ between $\eta_{I_1}(p,q)$ and $\eta_{I_2}(p,q)$ can be computed as:

$$R_N(\eta_{I_1}, \eta_{I_2}) = \frac{\left(\sum_{p=1}^{m}\sum_{q=1}^{n}\left(\eta_{I_1}(p,q)-\bar{\eta}_{I_1}\right)\left(\eta_{I_2}(p,q)-\bar{\eta}_{I_2}\right)\right)^{2}}{\sum_{p=1}^{m}\sum_{q=1}^{n}\left(\eta_{I_1}(p,q)-\bar{\eta}_{I_1}\right)^{2} \times \sum_{p=1}^{m}\sum_{q=1}^{n}\left(\eta_{I_2}(p,q)-\bar{\eta}_{I_2}\right)^{2}} \tag{4}$$

Here, $\bar{\eta}_{I_1}$ and $\bar{\eta}_{I_2}$ show the averages of the respective high sub-bands.

The dominant features are preserved in the obtained feature maps as:

$$F_s(p,q) = \max\left(s_{I_1} \times R_N + s_{I_2} \times (1 - R_N)\right) \tag{5}$$
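A minimal NumPy sketch of Eqs. (4)–(5) follows. Two assumptions are made: $R_N$ is computed directly from the sub-band arrays so the example stays self-contained (the paper computes it between the Xception feature maps), and the $\max(\cdot)$ printed in Eq. (5), which encloses a single weighted sum, is read as a plain weighted combination.

```python
# Sketch of Eqs. (4)-(5): coefficient of determination between two
# high sub-band maps, then an R_N-weighted fusion of the bands.
import numpy as np

def coefficient_of_determination(eta1, eta2):
    """R_N of Eq. (4): squared normalized cross-correlation of two maps."""
    d1, d2 = eta1 - eta1.mean(), eta2 - eta2.mean()
    num = (d1 * d2).sum() ** 2
    den = (d1 ** 2).sum() * (d2 ** 2).sum()
    return float(num / den)            # in [0, 1]; assumes non-constant maps

def fuse_high(s1, s2):
    """Eq. (5): weight the high sub-bands by R_N and 1 - R_N.
    The enclosing max(.) of the printed equation is ambiguous and is
    read here as a plain weighted sum (an assumption)."""
    rn = coefficient_of_determination(s1, s2)
    return s1 * rn + s2 * (1.0 - rn)

# Example with random stand-ins for the two high sub-bands:
rng = np.random.default_rng(1)
band1, band2 = rng.normal(size=(64, 64)), rng.normal(size=(64, 64))
fused_high = fuse_high(band1, band2)
```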
Fig. 1 Diagrammatic flow of the proposed multi-objective differential evolution based deep transfer learning model for multi-modality image fusion
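The feature-selection loop of Sect. 3.3 can be illustrated with a minimal sketch. The block below assumes textbook DE/rand/1/bin with a scalarized toy fitness; the paper's objectives are the fusion factor and entropy metrics, and its Pareto-based multi-objective selection (and the exact Steps I–III) are not reproduced here.

```python
# Minimal DE/rand/1/bin feature-selection sketch (illustrative only).
# The scalar toy fitness stands in for the paper's two objectives
# (fusion factor and entropy); a true multi-objective DE would keep a
# Pareto archive instead of the greedy replacement used here.
import numpy as np

rng = np.random.default_rng(0)

def de_select(fitness, dim, pop_size=20, max_evals=2000, f=0.5, cr=0.9):
    """Maximize `fitness` over boolean feature masks of length `dim`."""
    pop = rng.random((pop_size, dim))                 # Step I: initialization
    scores = np.array([fitness(x > 0.5) for x in pop])
    evals = pop_size
    while evals < max_evals:                          # Step IV: stopping rule
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + f * (b - c), 0.0, 1.0)   # Step II: mutation
            cross = rng.random(dim) < cr                  # Step III: crossover
            trial = np.where(cross, mutant, pop[i])
            s = fitness(trial > 0.5)
            evals += 1
            if s > scores[i]:                             # greedy selection
                pop[i], scores[i] = trial, s
    return pop[np.argmax(scores)] > 0.5               # best feature mask

# Toy usage: keep high-variance features while penalizing mask size.
features = rng.normal(size=100) * np.linspace(0.1, 2.0, 100)

def toy_fitness(mask):
    return -np.inf if not mask.any() else features[mask].var() - 0.01 * mask.sum()

best_mask = de_select(toy_fitness, dim=100)
```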
Here, $s_{I_1}$ and $s_{I_2}$ show the high sub-bands of $I_1$ and $I_2$, respectively.

3.5 Fusion of low sub-bands

The regional energy $\chi_I(p,q)$ of a low sub-band is computed over a neighborhood defined by $\gamma$ and $\delta$ as:

$$\chi_I(p,q) = \sum_{p' \in \gamma}\sum_{q' \in \delta} \left| I(p+p', q+q') \right| \tag{6}$$

The fused low sub-band coefficients $\psi_f(p,q)$ are then selected from the source coefficients $\psi_{I_1}$ and $\psi_{I_2}$ according to the larger regional energy:

$$\psi_f(p,q) = \begin{cases} \psi_{I_1}(p,q), & \left|\chi_{I_1}(p,q)\right| \geq \left|\chi_{I_2}(p,q)\right| \\ \psi_{I_2}(p,q), & \left|\chi_{I_1}(p,q)\right| < \left|\chi_{I_2}(p,q)\right| \end{cases} \tag{7}$$
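The low sub-band rule of Eqs. (6)–(7) reduces to a neighborhood energy map followed by a per-pixel selection, as in the sketch below; the 3×3 window standing in for $(\gamma, \delta)$ is an assumption, since the window size is not stated here.

```python
# Sketch of Eqs. (6)-(7): regional absolute energy over a small window,
# then per-pixel selection of the low sub-band coefficient with the
# larger energy. The 3x3 window standing in for (gamma, delta) is assumed.
import numpy as np
from scipy.ndimage import uniform_filter

def regional_energy(band, window=3):
    """chi_I of Eq. (6): neighborhood sum of absolute coefficients."""
    return uniform_filter(np.abs(band), size=window) * window ** 2

def fuse_low(psi1, psi2):
    """Eq. (7): keep, pixel-wise, the coefficient with larger regional energy."""
    chi1, chi2 = regional_energy(psi1), regional_energy(psi2)
    return np.where(chi1 >= chi2, psi1, psi2)

# Example on random stand-ins for the two low sub-bands:
rng = np.random.default_rng(2)
fused_low = fuse_low(rng.normal(size=(64, 64)), rng.normal(size=(64, 64)))
```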
4 Experimental results

To evaluate the performance of the proposed approach, a benchmark multi-modality biomedical image dataset obtained from Ullah et al. (2020) is used. Fifteen different pairs of modality images are taken for comparative purposes. The main goal is to fuse these images to obtain multi-modality fused images. To draw comparisons, six competitive multi-modality biomedical fusion approaches, LEPN (Zhu et al. 2019), DSA (Zhu et al. 2020), CNN (Kumar et al. 2020), EGDM (Lu et al. 2014), DDcGAN (Ma et al. 2020), and 3GANs (Wang et al. 2019), are also implemented on the same set of images. The hyper-parameters of these approaches are assigned as mentioned in their respective papers.

4.1 Visual analysis

Figures 3 and 4 represent the source images and their respective multi-modality fused biomedical images obtained from LEPN (Zhu et al. 2019), DSA (Zhu et al. 2020), CNN (Kumar et al. 2020), EGDM (Lu et al. 2014), DDcGAN (Ma et al. 2020), 3GANs (Wang et al. 2019), and the proposed approach. The results obtained by the proposed approach clearly preserve the modality information better than the competitive approaches. Although the existing approaches such as LEPN (Zhu et al. 2019), DSA (Zhu et al. 2020), CNN (Kumar et al. 2020), EGDM (Lu et al. 2014), DDcGAN (Ma et al. 2020), and 3GANs (Wang et al. 2019) provide significant visual results, they exhibit slight edge and texture distortion. Figures 3i and 4j show the results obtained from the proposed approach. These images demonstrate that the proposed approach provides a better visual appearance in the obtained multi-modality fused images.
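Tables 1, 2, 3, 4 and 5 report entropy, mutual information, fusion factor, fusion symmetry, and edge strength. As a reference point, a sketch of standard histogram-based estimators for the first two is given below; the exact estimators (bin counts, normalization) are assumed here, not taken from the paper.

```python
# Standard histogram-based estimators assumed for two reported metrics:
# entropy of the fused image (Table 1) and mutual information between a
# source image and the fused image (Table 2).
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (bits) of the grey-level histogram."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mutual_information(a, b, bins=256):
    """Histogram-based MI (bits) between two equally sized images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz])).sum())

# The fusion factor of Table 3 is commonly MI(source1, fused) +
# MI(source2, fused); that common reading is assumed here.
```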
Table 1 Comparative analysis among the proposed deep transfer learning based multi-modality image fusion and the competitive approaches in terms of entropy (maximum is better)

Group LEPN (Zhu et al. 2019) DSA (Zhu et al. 2020) CNN (Kumar et al. 2020) EGDM (Lu et al. 2014) DDcGAN (Ma et al. 2020) 3GANs (Wang et al. 2019) Proposed
G1 6.22 ± 0.75 6.33 ± 0.69 6.25 ± 0.67 6.56 ± 0.55 6.43 ± 0.68 6.73 ± 0.09 6.77 ± 0.35
G2 6.18 ± 0.64 6.04 ± 0.62 6.18 ± 0.54 6.23 ± 0.45 6.73 ± 0.58 6.33 ± 0.08 6.77 ± 0.43
G3 6.43 ± 0.75 6.35 ± 0.55 5.95 ± 0.45 6.48 ± 0.54 6.68 ± 0.55 6.42 ± 0.45 6.71 ± 0.41
G4 6.09 ± 0.55 5.93 ± 0.55 6.29 ± 0.55 6.46 ± 0.45 6.47 ± 0.58 6.62 ± 0.57 6.69 ± 0.45
G5 6.38 ± 0.72 6.44 ± 0.75 6.64 ± 0.64 6.26 ± 0.56 6.66 ± 0.47 6.57 ± 0.45 6.79 ± 0.39
G6 6.28 ± 0.73 5.94 ± 0.65 6.39 ± 0.45 6.31 ± 0.44 6.53 ± 0.55 6.66 ± 0.35 6.73 ± 0.45
G7 6.17 ± 0.66 5.96 ± 0.75 6.26 ± 0.55 6.58 ± 0.45 6.69 ± 0.68 6.65 ± 0.35 6.72 ± 0.35
G8 5.91 ± 0.55 5.96 ± 0.55 6.06 ± 0.49 6.46 ± 0.46 6.28 ± 0.48 6.57 ± 0.35 6.81 ± 0.43
G9 5.96 ± 0.76 6.19 ± 0.65 6.33 ± 0.47 6.74 ± 0.62 6.26 ± 0.52 6.75 ± 0.48 6.71 ± 0.41
G10 5.93 ± 0.75 5.97 ± 0.53 6.38 ± 0.45 6.36 ± 0.54 6.79 ± 0.68 6.22 ± 0.53 6.66 ± 0.45
G11 6.05 ± 0.66 6.33 ± 0.75 6.16 ± 0.47 6.77 ± 0.69 6.55 ± 0.43 6.77 ± 0.54 6.69 ± 0.39
G12 6.18 ± 0.75 5.98 ± 0.55 6.82 ± 0.55 6.31 ± 0.64 6.44 ± 0.47 6.46 ± 0.57 6.75 ± 0.43
G13 6.12 ± 0.65 6.32 ± 0.75 6.19 ± 0.66 6.39 ± 0.45 6.37 ± 0.55 6.72 ± 0.45 6.65 ± 0.41
G14 6.31 ± 0.55 5.94 ± 0.65 6.14 ± 0.45 6.39 ± 0.67 6.77 ± 0.62 6.23 ± 0.45 6.75 ± 0.45
G15 5.96 ± 0.75 6.38 ± 0.75 5.93 ± 0.53 6.64 ± 0.64 6.44 ± 0.69 6.25 ± 0.57 6.78 ± 0.39
Table 2 Comparative analysis among the proposed deep transfer learning based multi-modality image fusion and the competitive approaches in terms of mutual information (maximum is better)

Group LEPN (Zhu et al. 2019) DSA (Zhu et al. 2020) CNN (Kumar et al. 2020) EGDM (Lu et al. 2014) DDcGAN (Ma et al. 2020) 3GANs (Wang et al. 2019) Proposed
G1 0.63 ± 0.013 0.62 ± 0.011 0.61 ± 0.012 0.59 ± 0.012 0.61 ± 0.008 0.62 ± 0.009 0.63 ± 0.009
G2 0.61 ± 0.014 0.61 ± 0.012 0.62 ± 0.012 0.61 ± 0.009 0.61 ± 0.011 0.64 ± 0.009 0.61 ± 0.009
G3 0.63 ± 0.014 0.62 ± 0.011 0.62 ± 0.008 0.63 ± 0.008 0.59 ± 0.011 0.63 ± 0.008 0.63 ± 0.007
G4 0.61 ± 0.013 0.62 ± 0.012 0.63 ± 0.012 0.61 ± 0.012 0.63 ± 0.012 0.62 ± 0.008 0.63 ± 0.007
G5 0.61 ± 0.011 0.63 ± 0.012 0.63 ± 0.011 0.63 ± 0.012 0.64 ± 0.011 0.64 ± 0.012 0.63 ± 0.008
G6 0.64 ± 0.014 0.61 ± 0.013 0.63 ± 0.012 0.59 ± 0.012 0.64 ± 0.011 0.61 ± 0.008 0.64 ± 0.009
G7 0.63 ± 0.014 0.59 ± 0.012 0.59 ± 0.012 0.63 ± 0.009 0.64 ± 0.009 0.59 ± 0.007 0.64 ± 0.009
G8 0.64 ± 0.014 0.62 ± 0.013 0.59 ± 0.012 0.61 ± 0.008 0.61 ± 0.009 0.61 ± 0.012 0.64 ± 0.007
G9 0.62 ± 0.011 0.59 ± 0.011 0.59 ± 0.009 0.63 ± 0.009 0.62 ± 0.011 0.59 ± 0.008 0.63 ± 0.009
G10 0.63 ± 0.015 0.63 ± 0.015 0.63 ± 0.008 0.63 ± 0.009 0.59 ± 0.011 0.61 ± 0.007 0.63 ± 0.009
G11 0.62 ± 0.012 0.62 ± 0.013 0.59 ± 0.011 0.62 ± 0.012 0.62 ± 0.008 0.62 ± 0.009 0.62 ± 0.007
G12 0.59 ± 0.011 0.59 ± 0.013 0.61 ± 0.008 0.62 ± 0.012 0.62 ± 0.013 0.62 ± 0.009 0.62 ± 0.007
G13 0.61 ± 0.012 0.63 ± 0.012 0.62 ± 0.012 0.62 ± 0.012 0.63 ± 0.008 0.63 ± 0.008 0.63 ± 0.008
G14 0.61 ± 0.013 0.63 ± 0.011 0.61 ± 0.008 0.61 ± 0.011 0.62 ± 0.011 0.64 ± 0.012 0.63 ± 0.007
G15 0.59 ± 0.015 0.64 ± 0.011 0.63 ± 0.008 0.63 ± 0.011 0.64 ± 0.011 0.59 ± 0.008 0.64 ± 0.008
Table 3 Comparative analysis among the proposed deep transfer learning based multi-modality image fusion and the competitive approaches in terms of fusion factor (maximum is better)

Group LEPN (Zhu et al. 2019) DSA (Zhu et al. 2020) CNN (Kumar et al. 2020) EGDM (Lu et al. 2014) DDcGAN (Ma et al. 2020) 3GANs (Wang et al. 2019) Proposed
G1 1.26 ± 0.026 1.22 ± 0.022 1.22 ± 0.022 1.22 ± 0.031 1.24 ± 0.022 1.26 ± 0.026 1.26 ± 0.022
G2 1.22 ± 0.022 1.26 ± 0.028 1.18 ± 0.032 1.22 ± 0.029 1.22 ± 0.026 1.28 ± 0.025 1.29 ± 0.022
G3 1.26 ± 0.022 1.24 ± 0.025 1.21 ± 0.033 1.24 ± 0.022 1.26 ± 0.022 1.24 ± 0.028 1.26 ± 0.022
G4 1.23 ± 0.021 1.22 ± 0.024 1.22 ± 0.022 1.22 ± 0.025 1.28 ± 0.031 1.26 ± 0.022 1.28 ± 0.021
G5 1.22 ± 0.03 1.24 ± 0.022 1.21 ± 0.026 1.28 ± 0.027 1.24 ± 0.024 1.24 ± 0.021 1.28 ± 0.021
G6 1.26 ± 0.024 1.22 ± 0.035 1.28 ± 0.028 1.22 ± 0.021 1.28 ± 0.028 1.28 ± 0.028 1.28 ± 0.021
G7 1.22 ± 0.028 1.24 ± 0.025 1.21 ± 0.026 1.22 ± 0.022 1.24 ± 0.031 1.24 ± 0.022 1.24 ± 0.022
G8 1.24 ± 0.021 1.22 ± 0.028 1.22 ± 0.022 1.26 ± 0.021 1.26 ± 0.026 1.24 ± 0.021 1.26 ± 0.021
G9 1.24 ± 0.031 1.26 ± 0.024 1.21 ± 0.024 1.24 ± 0.022 1.18 ± 0.022 1.27 ± 0.022 1.26 ± 0.022
G10 1.24 ± 0.022 1.28 ± 0.028 1.24 ± 0.023 1.21 ± 0.024 1.24 ± 0.024 1.27 ± 0.026 1.28 ± 0.022
G11 1.22 ± 0.026 1.28 ± 0.028 1.26 ± 0.026 1.21 ± 0.032 1.26 ± 0.022 1.24 ± 0.024 1.28 ± 0.022
G12 1.24 ± 0.031 1.26 ± 0.028 1.26 ± 0.022 1.26 ± 0.028 1.24 ± 0.032 1.26 ± 0.021 1.26 ± 0.021
G13 1.28 ± 0.021 1.28 ± 0.026 1.24 ± 0.022 1.24 ± 0.026 1.18 ± 0.024 1.27 ± 0.024 1.28 ± 0.021
G14 1.22 ± 0.022 1.24 ± 0.021 1.26 ± 0.023 1.22 ± 0.022 1.24 ± 0.024 1.28 ± 0.022 1.28 ± 0.021
G15 1.24 ± 0.028 1.22 ± 0.022 1.24 ± 0.023 1.24 ± 0.026 1.18 ± 0.024 1.22 ± 0.026 1.24 ± 0.022
Table 4 Comparative analysis among the proposed deep transfer learning based multi-modality image fusion and the competitive approaches in terms of fusion symmetry (maximum is better)

Group LEPN (Zhu et al. 2019) DSA (Zhu et al. 2020) CNN (Kumar et al. 2020) EGDM (Lu et al. 2014) DDcGAN (Ma et al. 2020) 3GANs (Wang et al. 2019) Proposed
G1 0.31 ± 0.014 0.31 ± 0.014 0.31 ± 0.012 0.31 ± 0.012 0.32 ± 0.008 0.32 ± 0.012 0.33 ± 0.008
G2 0.31 ± 0.012 0.32 ± 0.013 0.32 ± 0.012 0.32 ± 0.013 0.31 ± 0.014 0.32 ± 0.012 0.33 ± 0.007
G3 0.29 ± 0.012 0.31 ± 0.012 0.31 ± 0.012 0.31 ± 0.014 0.32 ± 0.015 0.33 ± 0.008 0.34 ± 0.008
G4 0.32 ± 0.013 0.31 ± 0.012 0.29 ± 0.009 0.31 ± 0.011 0.31 ± 0.008 0.31 ± 0.007 0.33 ± 0.007
G5 0.31 ± 0.015 0.32 ± 0.012 0.31 ± 0.011 0.31 ± 0.011 0.32 ± 0.008 0.31 ± 0.007 0.33 ± 0.007
G6 0.31 ± 0.013 0.31 ± 0.013 0.31 ± 0.013 0.33 ± 0.008 0.31 ± 0.008 0.31 ± 0.009 0.34 ± 0.008
G7 0.29 ± 0.015 0.32 ± 0.013 0.32 ± 0.012 0.32 ± 0.014 0.31 ± 0.011 0.32 ± 0.007 0.33 ± 0.007
G8 0.31 ± 0.011 0.31 ± 0.014 0.32 ± 0.009 0.31 ± 0.012 0.31 ± 0.011 0.33 ± 0.007 0.34 ± 0.007
G9 0.29 ± 0.012 0.32 ± 0.012 0.31 ± 0.009 0.31 ± 0.013 0.32 ± 0.008 0.32 ± 0.009 0.33 ± 0.008
G10 0.31 ± 0.014 0.31 ± 0.015 0.31 ± 0.008 0.32 ± 0.014 0.31 ± 0.011 0.32 ± 0.008 0.33 ± 0.008
G11 0.28 ± 0.015 0.31 ± 0.012 0.31 ± 0.014 0.32 ± 0.015 0.32 ± 0.008 0.31 ± 0.012 0.33 ± 0.008
G12 0.29 ± 0.011 0.32 ± 0.015 0.32 ± 0.012 0.32 ± 0.012 0.32 ± 0.013 0.31 ± 0.008 0.33 ± 0.008
G13 0.31 ± 0.013 0.32 ± 0.015 0.31 ± 0.009 0.31 ± 0.012 0.33 ± 0.009 0.33 ± 0.008 0.34 ± 0.008
G14 0.32 ± 0.012 0.31 ± 0.012 0.32 ± 0.012 0.32 ± 0.011 0.32 ± 0.011 0.31 ± 0.007 0.33 ± 0.007
G15 0.31 ± 0.014 0.31 ± 0.013 0.32 ± 0.012 0.31 ± 0.013 0.31 ± 0.008 0.32 ± 0.012 0.33 ± 0.008
Table 5 Comparative analysis among the proposed deep transfer learning based multi-modality image fusion and the competitive approaches in terms of edge strength (maximum is better)

Group LEPN (Zhu et al. 2019) DSA (Zhu et al. 2020) CNN (Kumar et al. 2020) EGDM (Lu et al. 2014) DDcGAN (Ma et al. 2020) 3GANs (Wang et al. 2019) Proposed
G1 0.61 ± 0.012 0.62 ± 0.014 0.64 ± 0.012 0.63 ± 0.013 0.59 ± 0.012 0.63 ± 0.012 0.64 ± 0.008
G2 0.62 ± 0.014 0.62 ± 0.015 0.62 ± 0.012 0.62 ± 0.012 0.62 ± 0.011 0.61 ± 0.009 0.62 ± 0.009
G3 0.61 ± 0.012 0.64 ± 0.015 0.62 ± 0.008 0.63 ± 0.009 0.62 ± 0.012 0.62 ± 0.012 0.64 ± 0.008
G4 0.61 ± 0.013 0.63 ± 0.014 0.63 ± 0.012 0.63 ± 0.008 0.63 ± 0.012 0.63 ± 0.009 0.63 ± 0.008
G5 0.63 ± 0.013 0.62 ± 0.013 0.64 ± 0.012 0.62 ± 0.011 0.64 ± 0.012 0.62 ± 0.007 0.64 ± 0.007
G6 0.61 ± 0.014 0.61 ± 0.011 0.62 ± 0.011 0.62 ± 0.011 0.63 ± 0.008 0.62 ± 0.012 0.63 ± 0.008
G7 0.62 ± 0.011 0.64 ± 0.012 0.64 ± 0.011 0.63 ± 0.012 0.62 ± 0.009 0.59 ± 0.007 0.64 ± 0.007
G8 0.63 ± 0.015 0.61 ± 0.011 0.62 ± 0.011 0.62 ± 0.009 0.64 ± 0.009 0.63 ± 0.007 0.64 ± 0.007
G9 0.62 ± 0.014 0.62 ± 0.012 0.61 ± 0.008 0.62 ± 0.012 0.63 ± 0.011 0.62 ± 0.008 0.63 ± 0.008
G10 0.59 ± 0.013 0.64 ± 0.012 0.63 ± 0.008 0.62 ± 0.011 0.63 ± 0.012 0.59 ± 0.008 0.64 ± 0.008
G11 0.62 ± 0.015 0.64 ± 0.015 0.62 ± 0.011 0.61 ± 0.008 0.63 ± 0.009 0.61 ± 0.007 0.64 ± 0.007
G12 0.62 ± 0.011 0.61 ± 0.013 0.61 ± 0.013 0.63 ± 0.012 0.64 ± 0.011 0.62 ± 0.008 0.64 ± 0.008
G13 0.61 ± 0.013 0.63 ± 0.013 0.63 ± 0.008 0.61 ± 0.009 0.63 ± 0.012 0.63 ± 0.009 0.63 ± 0.008
G14 0.63 ± 0.014 0.64 ± 0.014 0.59 ± 0.012 0.64 ± 0.012 0.63 ± 0.009 0.63 ± 0.012 0.64 ± 0.009
G15 0.63 ± 0.012 0.63 ± 0.015 0.61 ± 0.008 0.62 ± 0.009 0.63 ± 0.012 0.59 ± 0.009 0.63 ± 0.008
5 Conclusion

In this paper, a multi-objective differential evolution based deep transfer learning model has been proposed for multi-modality medical image fusion. Initially, the source images have been decomposed into sub-bands using a non-subsampled contourlet transform (NSCT). Thereafter, an extreme version of the Inception model (Xception) has been used for feature extraction of the source images. The multi-objective differential evolution has been used to select the optimal features. Thereafter, to obtain the fused coefficients, the coefficient of determination and the energy loss based fusion functions have been used. Finally, the fused image has been computed by applying the inverse NSCT. Extensive experimental results have shown that the proposed approach outperforms the competitive multi-modality image fusion approaches in terms of various performance metrics. In the near future, one may use the proposed model for other applications such as remote sensing images (Singh et al. 2018; Singh and Kumar 2019a), medical images, etc. Additionally, the proposed hyper-parameter tuning approach can be used to tune the hyper-parameters of other approaches such as visibility restoration models (Osterland and Weber 2019; Singh and Kumar 2018, 2019b; Wang et al. 2019; Singh et al. 2019a, 2019b), filtering models (Gupta et al. 2019; Kaur et al. 2020; Wiens 2019), deep learning models (Jaiswal et al. 2020; Basavegowda and Dagnew 2020; Kaur et al. 2019, 2020; Ghosh et al. 2020), etc.
References

Ma J, Xu H, Jiang J, Mei X, Zhang X (2020) DDcGAN: a dual-discriminator conditional generative adversarial network for multi-resolution image fusion. IEEE Trans Image Process 29:4980–4995. https://doi.org/10.1109/TIP.2020.2977573
Osterland S, Weber J (2019) Analytical analysis of single-stage pressure relief valves. Int J Hydromech 2(1):32–53
Pannu HS, Singh D, Malhi AK (2018) Improved particle swarm optimization based adaptive neuro-fuzzy inference system for benzene detection. CLEAN-Soil Air Water 46(5):1700162
Pannu HS, Singh D, Malhi AK (2019) Multi-objective particle swarm optimization-based adaptive neuro-fuzzy inference system for benzene monitoring. Neural Comput Appl 31:2195–2205
Prakash O, Park CM, Khare A, Jeon M, Gwak J (2019) Multiscale fusion of multimodal medical images using lifting scheme based biorthogonal wavelet transform. Optik 182:995–1014
Ravi P, Krishnan J (2018) Image enhancement with medical image fusion using multiresolution discrete cosine transform. In: Materials today: proceedings 5(1, part 1):1936–1942, international conference on processing of materials, minerals and energy (July 29th–30th) 2016, Ongole, Andhra Pradesh, India
Singh D, Kumar V (2018) Dehazing of outdoor images using notch based integral guided filter. Multimed Tools Appl 77(20):27363–27386
Singh D, Kumar V (2019a) A comprehensive review of computational dehazing techniques. Arch Comput Methods Eng 26(5):1395–1413
Singh D, Kumar V (2019b) Single image defogging by gain gradient image filter. Sci China Inf Sci 62(7):79101
Singh D, Kaur M, Singh H (2018) Remote sensing image fusion using fuzzy logic and gyrator transform. Remote Sens Lett 9(10):942–951
Singh D, Kumar V, Kaur M (2019a) Single image dehazing using gradient channel prior. Appl Intell 49(12):4276–4293
Singh D, Kumar V, Kaur M (2019b) Image dehazing using window-based integrated means filter. Multimed Tools Appl. https://doi.org/10.1007/s11042-019-08286-6
Tavard F, Simon A, Leclercq C, Donal E, Hernández AI, Garreau M (2014) Multimodal registration and data fusion for cardiac resynchronization therapy optimization. IEEE Trans Med Imaging 33(6):1363–1372. https://doi.org/10.1109/TMI.2014.2311694
Ullah H, Ullah B, Wu L, Abdalla FY, Ren G, Zhao Y (2020) Multi-modality medical images fusion based on local-features fuzzy sets and novel sum-modified-laplacian in non-subsampled shearlet transform domain. Biomed Signal Process Control 57:101724
Wang C, Zhao Z, Ren Q, Xu Y, Yu Y (2019) Multi-modality anatomical and functional medical image fusion based on simplified-spatial frequency-pulse coupled neural networks and region energy-weighted average strategy in non-subsampled contourlet transform domain. J Med Imaging Health Inform 9(5):1017–1027
Wang Y, Zhou L, Yu B, Wang L, Zu C, Lalush DS, Lin W, Wu X, Zhou J, Shen D (2019) 3D auto-context-based locality adaptive multi-modality GANs for PET synthesis. IEEE Trans Med Imaging 38(6):1328–1339. https://doi.org/10.1109/TMI.2018.2884053
Wang R, Yu H, Wang G, Zhang G, Wang W (2019) Study on the dynamic and static characteristics of gas static thrust bearing with micro-hole restrictors. Int J Hydromech 2(3):189–202
Wang K, Zheng M, Wei H, Qi G, Li Y (2020) Multi-modality medical image fusion using convolutional neural network and contrast pyramid. Sensors 20(8):2169
Wiens T (2019) Engine speed reduction for hydraulic machinery using predictive algorithms. Int J Hydromech 2(1):16–31
Xia K-J, Yin H-S, Wang J-Q (2019) A novel improved deep convolutional neural network model for medical image fusion. Clust Comput 22(1):1515–1527
Xydeas C, Petrovic V (2000) Objective image fusion performance measure. Electron Lett 36(4):308–309
Zhou T, Ruan S, Canu S (2019) A review: deep learning for medical image segmentation using multi-modality fusion. Array 3:100004
Zhu Z, Chai Y, Yin H, Li Y, Liu Z (2016) A novel dictionary learning approach for multi-modality medical image fusion. Neurocomputing 214:471–482
Zhu Z, Zheng M, Qi G, Wang D, Xiang Y (2019) A phase congruency and local Laplacian energy based multi-modality medical image fusion method in NSCT domain. IEEE Access 7:20811–20824. https://doi.org/10.1109/ACCESS.2019.2898111
Zhu R, Li X, Zhang X, Ma M (2020) MRI and CT medical image fusion based on synchronized-anisotropic diffusion model. IEEE Access 8:91336–91350. https://doi.org/10.1109/ACCESS.2020.2993493

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.