doi:10.21311/001.39.7.47
Anatomical and Functional Medical Image Fusion Using Sparse
Representation in NSST Domain
Abstract
The fusion of medical images is very useful for biomedical research and clinical diagnosis. An image fusion technique for anatomical and functional medical images using sparse representation in the NSST domain is presented. Firstly, the source images are decomposed by NSST, and the resulting sub-bands are sparsely represented with learned dictionaries. Then, the coefficients of the fused image are selected by a max-ℓ0 rule. Finally, the fused image is reconstructed by performing the inverse NSST. Visual and quantitative analysis of experimental results shows that the proposed scheme is superior to state-of-the-art schemes.
Key words: Medical Images, Clinical Diagnosis, Sparse Representation, NSST Domain
1. INTRODUCTION
Medical image fusion offers an important approach to integrating complementary features of different imaging modalities to acquire a high-quality image. Different medical imaging modalities reflect different aspects of human body information. Anatomical imaging modalities, including computed tomography (CT), magnetic resonance imaging (MRI), and ultrasonography, provide morphologic details of the human body, whereas functional imaging modalities such as single photon emission computed tomography (SPECT) and positron emission tomography (PET) provide metabolic information without anatomical context. The fusion of a functional image with an anatomical image is used in oncology for tumor segmentation and localization in radiation therapy treatment planning (Paulino et al., 2003).
In the past few years, various medical image fusion algorithms have been proposed. These approaches include principal component analysis (PCA) (Li et al., 2016) and the discrete wavelet transform (DWT) (Singh and Khare, 2013), which do not provide good fusion performance: they are shift-variant transforms, so some information is lost and blocking artifacts are introduced in the fused image. The nonsubsampled contourlet transform (NSCT) (Bhatnagar et al., 2013) is able to represent smoothness along edges and contours properly and can effectively suppress pseudo-Gibbs phenomena, but its computation is heavy. The nonsubsampled shearlet transform (NSST) (Easley et al., 2008) inherits the advantages of the shearlet transform and adds shift-invariance; it has high directional sensitivity and lower computational complexity than NSCT. Therefore, NSST is well suited to medical image fusion applications.
Recently, sparse representation (SR) has become a powerful tool for describing images and has achieved many state-of-the-art results in various image processing areas. Sparse representation exploits the natural sparsity of signals, which is consistent with the physiological characteristics of the human visual system. This inspired us to propose a new image fusion methodology based on sparse representation in the NSST domain. Firstly, the source images are decomposed into low-frequency sub-bands (LFS) and high-frequency sub-bands (HFS) in the NSST domain, which are then sparsely represented with a learned dictionary. Then, the coefficients of the fused image are selected by a max-ℓ0 rule. Finally, the inverse NSST is performed to obtain the fused image.
2. RELATED WORKS
2.1. Non-Subsampled Shearlet Transform
The NSST is a multi-scale geometric analysis tool, which has the features of shift-invariance and
excellent anisotropic direction selectivity. NSST include multi-scale partition and direction localization. It
achieves the decomposition process of the NSST by using the Nonsubsampled Laplacian pyramid filter and
shift-invariant shearing filters banks. Nonsubsampled Laplacian pyramid filter ensures the property of
shift-invariance and therefore suppresses the pseudo-Gibbs phenomenon in the fused image. Shift-invariant
shearing filters banks offer the NSST with more precise directional details information. After transformed by
Rev. Téc. Ing. Univ. Zulia. Vol. 39, Nº 7, 380 ‐ 384, 2016
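No NSST implementation ships with standard Python libraries, so as an illustration of the multi-scale, shift-invariant stage only (the directional shearing filter banks are omitted), a nonsubsampled Laplacian-pyramid-style decomposition might be sketched as follows; the function names and Gaussian parameters here are our own assumptions, not part of the proposed method:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def nslp_decompose(img, levels=3):
    """Undecimated (nonsubsampled) Laplacian pyramid: a sketch of the
    multi-scale stage of NSST. The shearing-filter direction stage is
    omitted. Returns [high_0, ..., high_{levels-1}, low]."""
    bands = []
    current = img.astype(np.float64)
    for k in range(levels):
        # Widen the Gaussian at each level instead of downsampling, so
        # every band keeps the input size (this is what gives shift-invariance).
        low = gaussian_filter(current, sigma=2.0 ** k)
        bands.append(current - low)   # high-frequency detail at scale k
        current = low
    bands.append(current)             # final low-frequency approximation
    return bands

def nslp_reconstruct(bands):
    """The detail bands telescope, so the bands simply sum back to the image."""
    return np.sum(bands, axis=0)
```

Because each detail band is the difference between consecutive smoothing levels, reconstruction is an exact sum, mirroring the perfect-reconstruction property of the nonsubsampled pyramid.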
2.2. Sparse Representation
Sparse representation models a signal v ∈ ℝⁿ as a linear combination of a few atoms from an overcomplete dictionary:

v ≈ Dx (1)

where x ∈ ℝᵐ is called the sparse coefficient vector of v with respect to the overcomplete dictionary. Let D = (d₁, d₂, …, dₘ) ∈ ℝⁿˣᵐ; each column of D is an atom, and we say that the dictionary D is redundant when m > n. The dictionary D can be constructed with many techniques, such as K-SVD (Aharon et al., 2006) and MOD (Engan et al., 2000). Learned dictionaries usually have better representation ability than preconstructed ones, so we adopt the learning-based approach in this paper. Let ‖x‖₀ denote the number of nonzero entries in x; the above discussion can then be formulated as follows:

minₓ ‖x‖₀ subject to ‖v − Dx‖₂² ≤ ε (2)
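The ℓ0-constrained problem of Eq. (2) is solved in Section 3 with the OMP algorithm (Mallat and Zhang). A minimal NumPy sketch of OMP follows; the function name, tolerance, and stopping defaults are our own illustrative choices:

```python
import numpy as np

def omp(D, v, tol=1e-6, max_atoms=None):
    """Orthogonal Matching Pursuit: a greedy solver for Eq. (2),
    min ||x||_0  subject to  ||v - D x||_2^2 <= tol.
    D is an n x m dictionary whose columns (atoms) have unit norm."""
    n, m = D.shape
    if max_atoms is None:
        max_atoms = n
    residual = v.astype(np.float64).copy()
    support = []
    coef = np.zeros(0)
    while residual @ residual > tol and len(support) < max_atoms:
        # Select the atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        support.append(idx)
        # Re-fit all selected coefficients jointly by least squares
        # (the "orthogonal" step that distinguishes OMP from MP).
        coef, *_ = np.linalg.lstsq(D[:, support], v, rcond=None)
        residual = v - D[:, support] @ coef
    x = np.zeros(m)
    x[support] = coef
    return x
```

After the orthogonal projection the residual is orthogonal to every selected atom, so each iteration picks a new atom and the residual norm is non-increasing.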
3. PROPOSED RULE
A functional image is a low-resolution pseudo-color image in which the color holds the most vital information, such as metabolic activity or blood flow, depending on the organ being imaged. An anatomical image, in contrast, is a high-resolution grayscale image that gives structural information. Fusing gray and pseudo-color medical images presents more information about biological tissues in a single image. YUV is a color space typically used as part of a color image pipeline. It is inspired by the color opponency theory in physiology, and encodes a color image or video taking into account human perception of achromatic and chromatic colors occurring in three independent dimensions (Shen et al., 2013). The schematic diagram of the proposed image fusion framework is depicted in Figure 1. The detailed fusion scheme is summarized as follows:
Firstly, the functional image is converted to a grayscale image by discarding the U and V components. The core idea is to transform the color image from the RGB color space to the YUV color space and discard the U and V components. The RGB to YUV color space conversion can be summarized as follows:

[Y]   [ 0.299   0.587   0.114] [R]
[U] = [-0.147  -0.289   0.436] [G]    (3)
[V]   [ 0.615  -0.515  -0.100] [B]
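Assuming the standard analog YUV weights, whose inverse appears in Eq. (6), the color-space step can be sketched as follows. Computing the inverse matrix numerically (rather than hard-coding the rounded values of Eq. (6)) keeps the round trip exact:

```python
import numpy as np

# Forward RGB -> YUV matrix (standard analog YUV weights, as in Eq. (3)).
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])
# Eq. (6) is approximately this numerical inverse.
YUV2RGB = np.linalg.inv(RGB2YUV)

def rgb_to_yuv(img):
    """img: H x W x 3 float array; returns the YUV channels.
    The Y plane (index 0 of the last axis) is the grayscale image
    kept after discarding U and V."""
    return img @ RGB2YUV.T

def yuv_to_rgb(yuv):
    """Inverse conversion, as in Eq. (6)."""
    return yuv @ YUV2RGB.T
```

Taking `rgb_to_yuv(img)[..., 0]` then gives the grayscale version of the functional image that enters the NSST decomposition.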
Secondly, the source image A and the converted grayscale image B are decomposed into low-frequency sub-bands (LFS) {A_L, B_L} and high-frequency sub-bands (HFS) {A_H, B_H} by using NSST.
Then, the sliding window technique is applied to divide A_L, B_L, A_H, and B_H into image patches. For each patch position p, the pixel values of every patch are lexicographically ordered into column vectors {v_AL^p, v_BL^p, v_AH^p, v_BH^p}.
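The sliding-window, lexicographic-ordering step can be sketched as below; the helper name, the 8×8 patch default, and the step size are illustrative assumptions, since the text does not fix them here:

```python
import numpy as np

def extract_patches(band, patch_size=8, step=1):
    """Slide a patch_size x patch_size window over a sub-band with the
    given step, and lexicographically order each patch into a column.
    Returns an array of shape (patch_size**2, n_patches), one column
    vector v^p per patch position p."""
    H, W = band.shape
    cols = []
    for i in range(0, H - patch_size + 1, step):
        for j in range(0, W - patch_size + 1, step):
            # reshape(-1) flattens the patch row by row (lexicographic order)
            cols.append(band[i:i + patch_size, j:j + patch_size].reshape(-1))
    return np.stack(cols, axis=1)
```

Each column of the result is one v^p and can be fed directly to the OMP solver against the learned dictionary D.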
The sparse coefficient vectors {α_AL^p, α_BL^p, α_AH^p, α_BH^p} of {v_AL^p, v_BL^p, v_AH^p, v_BH^p} are calculated with the OMP algorithm by solving Eq. (2), giving the ℓ0 norms {‖α_AL^p‖₀, ‖α_BL^p‖₀, ‖α_AH^p‖₀, ‖α_BH^p‖₀}. The coefficients of the LFS and HFS are then fused by the following max-ℓ0 fusion rule:

α_FL^p = { α_AL^p,  if ‖α_AL^p‖₀ ≥ ‖α_BL^p‖₀
         { α_BL^p,  if ‖α_AL^p‖₀ < ‖α_BL^p‖₀    (4)

α_FH^p = { α_AH^p,  if ‖α_AH^p‖₀ ≥ ‖α_BH^p‖₀
         { α_BH^p,  if ‖α_AH^p‖₀ < ‖α_BH^p‖₀    (5)
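The max-ℓ0 selection of Eqs. (4)-(5) is a per-column choice and vectorizes naturally; a sketch (function name ours), with one coefficient vector per column as produced by patch extraction:

```python
import numpy as np

def fuse_max_l0(alpha_A, alpha_B):
    """Max-l0 fusion rule of Eqs. (4)-(5): for each patch position p
    (one column per p), keep the coefficient vector with the larger
    number of nonzero entries."""
    l0_A = np.count_nonzero(alpha_A, axis=0)
    l0_B = np.count_nonzero(alpha_B, axis=0)
    # Ties go to source A, matching the ">=" branch of the rule;
    # the (n_patches,) condition broadcasts across coefficient rows.
    return np.where(l0_A >= l0_B, alpha_A, alpha_B)
```

The same function is applied once to the low-frequency coefficients (Eq. (4)) and once to the high-frequency coefficients (Eq. (5)).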
Finally, the fused sub-bands are reconstructed from the fused coefficients and the inverse NSST is performed to obtain the fused Y channel F_Y, followed by the inverse YUV to RGB conversion to get the final fused image F. The YUV to RGB conversion is done by the following inverse operation:

[R]   [1   0      1.14] [Y]
[G] = [1  -0.39  -0.58] [U]    (6)
[B]   [1   2.03   0   ] [V]
5. CONCLUSIONS
In this paper, we have presented a new image fusion methodology based on sparse representation (SR) in
NSST domain. The multi-scale and multi-directional properties of NSST along with SR are used to preserve
more useful information and improve the quality of the fused images. Experimental results clearly demonstrate that the proposed algorithm outperforms state-of-the-art fusion methods in terms of both subjective and objective performance evaluation.
ACKNOWLEDGMENT
This work was supported by the National Natural Science Foundation of China (61379143) and the Xuzhou Science and Technology Program (KC14SH078, KC15SH019).
REFERENCES
Paulino, A. C., Thorstad, W. L. and Fox, T. (2003) “Role of fusion in radiotherapy treatment planning”,
Seminars in Nuclear Medicine, 33, pp. 238-243.
Li, C., Ye, H., Ye, J. (2016) “Image fusion based on curvelet transform and principal component analysis”, Revista Tecnica de la Facultad de Ingenieria Universidad del Zulia, 39(1), pp. 392-396.
Singh, R., Khare, A. (2013) “Multiscale medical image fusion in wavelet domain”, The Scientific World Journal,
2013, pp. 1-10.
Bhatnagar, G., Jonathan Wu, Q.M., Liu, Z. (2013) “Directive contrast based multimodal medical image fusion in
NSCT domain”, IEEE Transactions on Multimedia, 15, pp. 1014-1024.
Easley, G., Labate, D., Lim, W. Q. (2008) “Sparse directional image representations using the discrete shearlet transform”, Applied and Computational Harmonic Analysis, 25, pp. 25-46.
Wang, Q.L. (2013) “Nonseparable shearlet transform”, IEEE Transactions on Image Process, 22(5),
pp.2056-2065.
Aharon, M., Elad, M., Bruckstein, A. (2006) “K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation”, IEEE Transactions on Signal Processing, 54(11), pp. 4311-4322.
Engan, K., Aase, S. O., Husoy, J. H.(2000) “Multi-frame Compression: Theory and Design”, Signal Process,
80(10), pp. 2121-2140.
Mallat, S., Zhang, Z. (1993) “Matching Pursuits with Time-frequency Dictionaries”, IEEE Transactions on Signal Processing, 41(12), pp. 3397-3415.
Shen, R., Cheng, I., and Basu, A. (2013) “Cross-scale coefficient selection for volumetric medical image fusion”,
IEEE Transactions on Biomedical Engineering, 60(4), pp. 1069-1079.
Li, S., Kang, X., Hu, J. (2013) “Image Fusion with Guided Filtering”, IEEE Transactions on Image Processing,
22(7), pp. 2864-2875.
Das, S., Kundu, M. K. (2012) “NSCT-based Multimodal Medical Image Fusion Using Pulse-coupled Neural Network and Modified Spatial Frequency”, Medical and Biological Engineering and Computing, 50, pp. 1105-1114.
Liu, Y., Liu, S., Wang, Z. (2015) “A General Framework for Image Fusion Based on Multi-scale Transform and
Sparse Representation”, Information Fusion, 24, pp. 147-164.
Hossny, M., Nahavandi, S., and Creighton, D. (2008) “Comments on ‘information measure for performance of
image fusion”, Electronics Letters, 44(18), pp. 1066-1067.
Ganasala, P., Kumar, V. (2014) “CT and MR Image Fusion Scheme in Nonsubsampled Contourlet Transform Domain”, Journal of Digital Imaging, 27, pp. 407-418.