
Rev. Téc. Ing. Univ. Zulia. Vol. 39, Nº 7, 380-384, 2016
doi:10.21311/001.39.7.47
Anatomical and Functional Medical Image Fusion Using Sparse
Representation in NSST Domain

Lu Tang1, 3, Chuangeng Tian2, Junfeng Hu1, Shikun Zhang1, Jiansheng Qian3*


1 School of Medical Imaging, Xuzhou Medical University, Xuzhou, China
2 School of Information and Electrical Engineering, Xuzhou University of Technology, Xuzhou, China
3 School of Information and Electrical Engineering, China University of Mining and Technology, Xuzhou, China
*Corresponding author (E-mail: [email protected])

Abstract
The fusion of medical images is very useful for biomedical research and clinical diagnosis. An image fusion technique for anatomical and functional medical images using sparse representation in the NSST domain is presented. Firstly, the source images are decomposed by NSST and sparsely represented with learned dictionaries. Then, the coefficients of the fused image are selected by a max-ℓ0 rule. Finally, the fused image is reconstructed by performing the inverse NSST. Visual and quantitative analysis of the experimental results shows that the proposed scheme is superior to state-of-the-art schemes.

Key words: Medical Images, Clinical Diagnosis, Sparse Representation, NSST Domain

1. INTRODUCTION
Medical image fusion offers an important approach for integrating the complementary features of different imaging modalities to acquire a single high-quality image. Different medical imaging modalities reflect different aspects of human body information. Anatomical imaging modalities, including computed tomography (CT), magnetic resonance imaging (MRI), and ultrasonography, provide morphologic details of the human body, whereas functional imaging modalities like single photon emission computed tomography (SPECT) and positron emission tomography (PET) provide metabolic information without anatomical context. The fusion of a functional image with an anatomical image is used in oncology for tumor segmentation and localization in radiation therapy treatment planning (Paulino et al., 2003).
In the past few years, various medical image fusion algorithms have been proposed. These approaches include principal component analysis (PCA) (Li et al., 2016) and the discrete wavelet transform (DWT) (Singh and Khare, 2013), which do not provide good fusion performance because they are shift-variant transforms: some information is lost, and blocking artifacts are introduced into the fused image. The nonsubsampled contourlet transform (NSCT) (Bhatnagar et al., 2013) is able to represent the smoothness along edges and contours properly and can effectively suppress pseudo-Gibbs phenomena, but its computation is heavy. The nonsubsampled shearlet transform (NSST) (Easley et al., 2008) inherits the advantages of the shearlet transform and adds shift-invariance; it has high directional sensitivity and lower computational complexity than NSCT. Therefore, NSST is well suited to medical image fusion applications.
Recently, sparse representation (SR) has become a powerful tool for describing images and has achieved state-of-the-art results in various image processing areas. Sparse representation captures the natural sparsity of signals, which is consistent with the physiological characteristics of the human visual system. This inspired us to propose a new image fusion methodology based on sparse representation in the NSST domain. Firstly, the source images are decomposed into low frequency bands (LFS) and high frequency bands (HFS) in the NSST domain, which are then sparsely represented with learned dictionaries. Then, the coefficients of the fused image are selected by a max-ℓ0 rule. Finally, the inverse NSST is conducted to obtain the fused image.

2. RELATED WORKS
2.1. Non-Subsampled Shearlet Transform
The NSST is a multi-scale geometric analysis tool with the features of shift-invariance and excellent anisotropic direction selectivity. NSST consists of multi-scale partition and direction localization. The decomposition is achieved using the nonsubsampled Laplacian pyramid filter and shift-invariant shearing filter banks. The nonsubsampled Laplacian pyramid filter ensures shift-invariance and therefore suppresses the pseudo-Gibbs phenomenon in the fused image. The shift-invariant shearing filter banks provide the NSST with more precise directional detail information. After transformation by NSST, an image is decomposed into one low frequency band (LFS) and a series of high frequency bands (HFS). More details of NSST are available in (Wang et al., 2013).
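Since the shearing stage of NSST requires a shearlet filter bank, only the shift-invariant multi-scale stage is sketched below: an undecimated Laplacian-style pyramid built with Gaussian low-pass filters. This is an illustrative analogue of the nonsubsampled Laplacian pyramid, not the paper's exact filters; the level count and σ schedule are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def nonsubsampled_pyramid(img, levels=3):
    """Shift-invariant (undecimated) Laplacian-style pyramid:
    returns [high_1, ..., high_levels, low]; because no downsampling
    is performed, summing all bands reconstructs the image exactly."""
    bands = []
    current = img.astype(float)
    for i in range(levels):
        low = gaussian_filter(current, sigma=2.0 ** i)
        bands.append(current - low)   # detail (high-frequency) band
        current = low
    bands.append(current)             # residual low-frequency band
    return bands

img = np.random.default_rng(2).random((32, 32))
bands = nonsubsampled_pyramid(img)
recon = sum(bands)
print(np.max(np.abs(recon - img)))    # exact up to float error
```

The absence of downsampling is what gives the decomposition its shift-invariance, at the cost of redundancy: every band has the full image size.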

2.2 Sparse Representation

In the sparse linear model, one patch of an image can be represented as a column vector v over the dictionary D. That is,

v = Dx    (1)

where x ∈ R^m is called the sparse coefficient vector of v over the overcomplete dictionary. Let D = (d1, d2, ..., dm) ∈ R^(n×m); each column of D is an atom, and the dictionary D is said to be redundant when m > n. The dictionary D can be constructed with many techniques, such as K-SVD (Aharon et al., 2006) and MOD (Engan et al., 2000). Learned dictionaries usually have better representation ability than preconstructed ones, so we adopt the learning-based approach in this paper. Let ||x||_0 denote the ℓ0-norm of x, i.e., the number of its nonzero entries. The above discussion can then be formulated as follows:

min_x ||x||_0  subject to  ||v - Dx||_2^2 ≤ ε    (2)

where ε ≥ 0 is the error tolerance parameter. Solving the sparse representation problem (2) directly is generally NP-hard; in this paper we choose orthogonal matching pursuit (OMP) to solve it approximately (Mallat et al., 2006).
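Problem (2) can be approximated greedily: OMP repeatedly picks the atom most correlated with the residual and refits by least squares on the selected support. A minimal NumPy sketch (not the authors' implementation; the toy dictionary and stopping parameters are illustrative):

```python
import numpy as np

def omp(D, v, eps=1e-6, max_atoms=None):
    """Greedy orthogonal matching pursuit: approximate
    min ||x||_0  s.t.  ||v - D x||_2^2 <= eps."""
    n, m = D.shape
    if max_atoms is None:
        max_atoms = n
    x = np.zeros(m)
    support = []
    residual = v.copy()
    while residual @ residual > eps and len(support) < max_atoms:
        # pick the atom most correlated with the current residual
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k in support:
            break
        support.append(k)
        # least-squares refit of all selected coefficients
        coef, *_ = np.linalg.lstsq(D[:, support], v, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = v - D @ x
    return x

# Tiny demo: v is exactly 2-sparse in a redundant 8x16 dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((8, 16))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
x_true = np.zeros(16)
x_true[[3, 7]] = [2.0, -1.5]
v = D @ x_true
x_hat = omp(D, v)
print(np.count_nonzero(x_hat))            # sparse solution
print(np.linalg.norm(v - D @ x_hat))      # near-zero residual
```

The least-squares refit on the whole support (the "orthogonal" part) is what distinguishes OMP from plain matching pursuit and keeps the residual orthogonal to every selected atom.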

Figure 1. Schematic diagram of the proposed image fusion framework in YUV color space

3. PROPOSED RULE
A functional image is a low-resolution pseudo-color image in which the color holds the most vital information, such as metabolic activity or blood flow, depending on the organ being imaged. An anatomical image, in contrast, is a high-resolution grayscale image that gives structural information. Fusing gray and pseudo-color medical images presents more information about biological tissues in a single image. YUV is a color space typically used as part of a color image pipeline. It is inspired by the color opponency theory in physiology and encodes a color image or video taking into account human perception of achromatic and chromatic colors in three independent dimensions (Shen et al., 2013). The schematic diagram of the proposed image fusion framework is depicted in Figure 1. The detailed fusion scheme is summarized as follows:
Firstly, the functional image is converted to a gray scale image. The core idea is to transform the color image from the RGB color space to the YUV color space and discard the U and V components. The RGB to YUV color space conversion can be summarized as follows:


 Y   0.2299 0.587 0.114   R 


U    0..169 0.331 0.5  G  (3)
  
V   0.5
0 0.419 0.081  B 

Thhe converted gray


g scale im
mage and the aachromatic ch
hannel Y of the
t color imaage are fused using the
proposedd fusion algoriithm.

Figure 2. Group 1 through Group 4 source medical images and results: (a1)-(a4) MRI images, (b1)-(b2) PET images, (b3)-(b4) SPECT images, (c1)-(c4) by GFF method, (d1)-(d4) by NSCT-PCNN-SF method, (e1)-(e4) by MSTSR method, (f1)-(f4) by proposed method.

Secondly, the source image A (the anatomical image) and the converted gray scale image B are decomposed by NSST into low-frequency sub-bands (LFS) {A_L, B_L} and high-frequency sub-bands (HFS) {A_H, B_H}.
Then, apply the sliding window technique to divide A_L, B_L, A_H, B_H into image patches. For each patch position p, the pixel values of every patch are lexicographically ordered into column vectors {v_p^AL, v_p^BL, v_p^AH, v_p^BH}. Calculate the sparse coefficient vectors {α_p^AL, α_p^BL, α_p^AH, α_p^BH} of {v_p^AL, v_p^BL, v_p^AH, v_p^BH} with the OMP algorithm via Eq. (2). Fuse the coefficients of the LFS and HFS by the following max-ℓ0 fusion rule:

    α_p^FL = max(α_p^AL, α_p^BL)  if ||α_p^AL||_0 = ||α_p^BL||_0
             α_p^AL               if ||α_p^AL||_0 > ||α_p^BL||_0    (4)
             α_p^BL               if ||α_p^AL||_0 < ||α_p^BL||_0

    α_p^FH = max(α_p^AH, α_p^BH)  if ||α_p^AH||_0 = ||α_p^BH||_0
             α_p^AH               if ||α_p^AH||_0 > ||α_p^BH||_0    (5)
             α_p^BH               if ||α_p^AH||_0 < ||α_p^BH||_0

Next, perform the inverse NSST over F_L and F_H to reconstruct the final fused gray scale image, i.e., the Y channel F_Y.
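The sliding-window vectorization and the max-ℓ0 selection of Eqs. (4)-(5) can be sketched as follows (the patch size, step, and toy coefficient vectors are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def patches_as_columns(band, size=8, step=4):
    """Sliding-window patches, lexicographically ordered into columns
    (one column vector per patch position p)."""
    h, w = band.shape
    cols = [band[i:i + size, j:j + size].ravel()
            for i in range(0, h - size + 1, step)
            for j in range(0, w - size + 1, step)]
    return np.stack(cols, axis=1)

def fuse_max_l0(alpha_a, alpha_b):
    """Max-l0 rule of Eqs. (4)-(5): keep the coefficient vector with
    more nonzero entries; on a tie, take the element-wise maximum."""
    l0_a, l0_b = np.count_nonzero(alpha_a), np.count_nonzero(alpha_b)
    if l0_a != l0_b:
        return alpha_a if l0_a > l0_b else alpha_b
    return np.maximum(alpha_a, alpha_b)

band = np.arange(144, dtype=float).reshape(12, 12)
V = patches_as_columns(band)
print(V.shape)                         # (64, 4): 8*8 rows, 2x2 positions

a = np.array([0.0, 2.0, 0.0, -1.0])    # ||a||_0 = 2
b = np.array([0.5, 0.0, 0.0,  0.0])    # ||b||_0 = 1
print(fuse_max_l0(a, b))               # a wins: more nonzero entries
```

In the full scheme this selection runs at every patch position p, separately for the LFS and HFS coefficient vectors, before the fused patches are reassembled and the inverse NSST is applied.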

Finally, the fused gray scale image F_Y, taken as the Y channel together with the U and V components of the functional image, undergoes the inverse YUV to RGB conversion to obtain the final fused image F. The YUV to RGB conversion is done by the following inverse operation:

    [R]   [1   0.00   1.14] [Y]
    [G] = [1  -0.39  -0.58] [U]    (6)
    [B]   [1   2.03   0.00] [V]

4. EXPERIMENTAL RESULTS AND ANALYSIS

4.1 Experimental Settings
To demonstrate the performance of the proposed scheme, an elaborate comparison was carried out on the brain images in Groups 1 through 4. The experiments use four pairs of medical images, which can be downloaded from http://www.med.harvard.edu/aanlib/home.html. The proposed method is compared with the following fusion algorithms: image fusion with guided filtering (GFF) (Li et al., 2013), NSCT-based multimodal medical image fusion using pulse-coupled neural network and modified spatial frequency (NSCT-PCNN-SF) (Das and Kundu, 2012), and a general framework for image fusion based on multi-scale transform and sparse representation (MSTSR) (Liu et al., 2015).

4.2 Results and Analysis

Figure 2 shows the images fused by the proposed method and the other three methods. The fusion results in Fig. 2(c1)-2(c4) show that GFF cannot fuse these types of images well. The NSCT-PCNN-SF method may decrease the contrast of soft tissue structures, making some details blurred or invisible, especially in Fig. 2(d3)-2(d4). Any change in the color of the fused image compared to the functional image is known as spectral distortion, and loss of anatomical details is known as spatial distortion. The images fused by the MSTSR method in Fig. 2(e1)-2(e4) exhibit spectral distortion of the functional image: color changes in the intensity component during the fusion process affect the spectral content. Comparing the fused images, it is clear that the images fused by our proposed algorithm not only preserve the metabolic activity of the functional scan but also reveal the anatomical structures of the MRI scan, which is useful for doctors in diagnosis.
Table 1. Objective criteria on the multimodal medical image fusion results

Source Images  Metric  GFF      NSCT-PCNN-SF  MSTSR    Ours
Group 1        MI      0.7222   0.6175        0.7842   0.8072
               SD      48.3232  50.9633       53.6535  55.1123
               SF      6.4960   6.4673        6.4589   6.6636
Group 2        MI      0.6649   0.5222        0.6342   0.6586
               SD      46.4142  54.1663       55.8167  56.3435
               SF      6.2963   6.7683        6.7111   7.1087
Group 3        MI      0.7910   0.5627        0.5666   0.8181
               SD      59.1205  60.4833       52.4274  61.6755
               SF      6.8000   6.8670        6.8719   7.0286
Group 4        MI      0.6728   0.3651        0.4750   0.7738
               SD      49.5860  46.0797       43.3550  55.5014
               SF      6.4308   6.4516        6.3875   6.7643
For further comparison beyond the visual observations above, objective performance evaluation is necessary to distinguish the qualities of the fused medical images. Three popular metrics are used: mutual information (MI) (Hossny et al., 2008), standard deviation (SD) (Liu et al., 2015), and spatial frequency (SF) (Ganasala et al., 2014). MI indicates the amount of information about the source images contained in the fused image. SD is mainly used to measure the overall contrast of the fused image. SF indicates the overall activity of the fused image. In general, larger values of MI, SD, and SF indicate better fusion quality. Table 1 summarizes the results of the proposed method and the three state-of-the-art methods, where the best result is marked in boldface. Table 1 shows that the proposed method produces the best results in all groups. Thus, the proposed algorithm is well suited to anatomical and functional medical images, which is useful for doctors and their diagnoses.
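The SD and SF metrics can be computed directly from the fused image (MI additionally requires a joint histogram of the fused and source images). A sketch under the common definitions, with toy inputs:

```python
import numpy as np

def standard_deviation(img):
    """SD: overall contrast of the fused image."""
    return float(np.std(img))

def spatial_frequency(img):
    """SF: overall activity level, combining row- and column-wise
    first differences (common definition; normalization varies)."""
    img = img.astype(float)
    rf = np.sqrt(np.mean((img[:, 1:] - img[:, :-1]) ** 2))  # row frequency
    cf = np.sqrt(np.mean((img[1:, :] - img[:-1, :]) ** 2))  # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))

flat = np.full((8, 8), 100.0)                      # no activity, no contrast
checker = np.indices((8, 8)).sum(0) % 2 * 255.0    # maximal local activity
print(standard_deviation(flat))                    # 0.0
print(spatial_frequency(flat))                     # 0.0
print(spatial_frequency(checker) > 0.0)            # True
```

Both metrics are no-reference measures: they score the fused image alone, which is why MI is needed alongside them to check fidelity to the source images.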


5. CONCLUSIONS
In this paper, we have presented a new image fusion methodology based on sparse representation (SR) in the NSST domain. The multi-scale and multi-directional properties of NSST, together with SR, are used to preserve more useful information and improve the quality of the fused images. Experimental results clearly demonstrate that the proposed algorithm outperforms state-of-the-art fusion methods in terms of both subjective and objective performance evaluation.

ACKNOWLEDGMENT
This work was supported by the National Natural Science Foundation of China (61379143) and the Xuzhou Science and Technology Program (KC14SH078, KC15SH019).

REFERENCES
Paulino, A. C., Thorstad, W. L. and Fox, T. (2003) “Role of fusion in radiotherapy treatment planning”,
Seminars in Nuclear Medicine, 33, pp. 238-243.
Li, C., Ye, H., Ye, J. (2016) “Image fusion based on curvelet transform and principal component analysis”, Revista Tecnica de la Facultad de Ingenieria Universidad del Zulia, 39(1), pp. 392-396.
Singh, R., Khare, A. (2013) “Multiscale medical image fusion in wavelet domain”, The Scientific World Journal,
2013, pp. 1-10.
Bhatnagar, G., Jonathan Wu, Q.M., Liu, Z. (2013) “Directive contrast based multimodal medical image fusion in
NSCT domain”, IEEE Transactions on Multimedia, 15, pp. 1014-1024.
Easley, G., Labate, D., Lim, W. Q. (2008) “Sparse directional image representations using the discrete shearlet transform”, Applied and Computational Harmonic Analysis, 25, pp. 25-46.
Wang, Q.L. (2013) “Nonseparable shearlet transform”, IEEE Transactions on Image Process, 22(5),
pp.2056-2065.
Aharon, M., Elad, M., Bruckstein, A. (2006) “K-svd: An Algorithm for Designing Over complete Dictionaries
for Sparse Representation”, IEEE Transactions on Signal Process, 54(11), pp. 4311-4322.
Engan, K., Aase, S. O., Husoy, J. H. (2000) “Multi-frame Compression: Theory and Design”, Signal Processing, 80(10), pp. 2121-2140.
Mallat, S., Zhang, Z. (2006) “Matching Pursuits with Time-frequency Dictionaries”, IEEE Transactions on
Signal Process, 41(12), pp. 3397-3415.
Shen, R., Cheng, I., and Basu, A. (2013) “Cross-scale coefficient selection for volumetric medical image fusion”,
IEEE Transactions on Biomedical Engineering, 60(4), pp. 1069-1079.
Li, S., Kang, X., Hu, J. (2013) “Image Fusion with Guided Filtering”, IEEE Transactions on Image Processing,
22(7), pp. 2864-2875.
Das, S., Kundu, M. K. (2012) “NSCT-based Multimodal Medical Image Fusion Using Pulse-coupled Neural Network and Modified Spatial Frequency”, Medical and Biological Engineering and Computing, 50, pp. 1105-1114.
Liu, Y., Liu, S., Wang, Z. (2015) “A General Framework for Image Fusion Based on Multi-scale Transform and
Sparse Representation”, Information Fusion, 24, pp. 147-164.
Hossny, M., Nahavandi, S., and Creighton, D. (2008) “Comments on ‘Information measure for performance of image fusion’”, Electronics Letters, 44(18), pp. 1066-1067.
Ganasala, P., Kumar, V. (2014) “CT and MR Image Fusion Scheme in Nonsubsampled Contourlet Transform Domain”, Journal of Digital Imaging, 27, pp. 407-418.
