Fast-FMI: non-reference image fusion metric

Mohammad Haghighat
Department of Electrical and Computer Engineering
University of Miami
Coral Gables, FL, USA
Email: [email protected]

Masoud Amirkabiri Razian
Department of Electrical and Computer Engineering
University of Tabriz
Tabriz, Iran
Email: [email protected]

Abstract—In this paper, we present a non-reference image fusion metric based on the mutual information of image features. Although a recent metric proposed by the authors, called FMI, achieves this goal, the algorithm is complex and has high memory requirements. This paper shows how to modify the FMI model and proposes a faster algorithm that achieves similar results with a significant reduction in complexity compared to the previous model. Various experiments demonstrate the efficiency of the algorithm and its consistency with subjective criteria. Matlab source code for this metric is provided at http://www.mathworks.com/matlabcentral/fileexchange/45926.

I. INTRODUCTION

Image fusion is the process of combining multiple source images into a single image that contains a more informative and descriptive scene than any of the source images individually. Image fusion not only decreases the amount of stored or transmitted data, but also helps to create new images that are more informative and suitable for both visual perception and further computer processing [1], [2]. The widespread applications of image fusion have led computer vision scientists to propose different fusion methods; therefore, there is a need for a metric to evaluate these algorithms.

The problem with quality assessment of fused images is that there is usually no ground truth against which to compare the fused images using reference-based image quality measures. Therefore, a metric is needed that can measure the amount of information conducted from the source images to the fused image.

Mutual information (MI) measures the amount of information that the fused image contains about the source images, which has motivated several MI-based fusion metrics. Previously proposed MI-based metrics [3], [4] use the normalized histograms of the images as probability densities. However, histograms do not carry enough information about the region-based details and the quality of the images.

The FMI metric [5], on the other hand, calculates the mutual information of the image features. This method outperforms the other metrics in consistency with subjective quality measures. However, the algorithm is complex and requires a large amount of memory. In this paper, we propose a new method of FMI calculation that significantly decreases the complexity and makes the metric more suitable for real-time applications.

The rest of this paper is organized as follows: Section II describes the steps of the Fast-FMI algorithm in detail. Section III verifies the proposed algorithm and compares it with other well-known metrics. Finally, Section IV concludes the paper.

II. PROPOSED METHOD

The proposed algorithm uses mutual information to measure the similarity of two variables. MI is a measure of the amount of information one random variable contains about another. It measures the degree of dependency between two variables X and Y by computing the Kullback-Leibler divergence between the joint distribution, p(x, y), and the distribution associated with complete independence, p(x)·p(y) [6]. It is defined as:

    I(X;Y) = Σ_{x,y} p(x, y) log( p(x, y) / (p(x)·p(y)) )    (1)

Fig. 1 illustrates the overall framework of the FMI calculation. Denoting the source images as A and B, and the fused image as F, the FMI metric [5] extracts the feature images of the source and fused images using a feature extraction method, e.g., the gradient. After feature extraction, the feature images are normalized to create the marginal probability density functions (MPDFs), p(a), p(b), and p(f). However, the calculation of the joint probability density functions (JPDFs), p(a, f) and p(b, f), is not as straightforward as the joint histogram used in histogram-based MI metrics. Nelsen's JPDF estimation method [7] is employed to estimate p(a, f) from p(a) and p(f). This algorithm is described in detail in [5].

The complexity of the JPDF estimation is quadratically related to the size of the MPDFs. Assuming the image size is M × N, in the previous FMI method [5] the size of the MPDF equals the number of elements in the feature image, n = M × N. Consequently, the size of the JPDF is n², which takes on the order of n² time to calculate. In Fast-FMI, we propose instead to use small corresponding windows between the source and fused images and to measure regional FMIs. The window slides over the image, calculating regional FMIs over the whole image.

Fast-FMI calculates the regional mutual information between corresponding windows in the fused image and the two source images, Ii(A; F) and Ii(B; F). The mutual information Ii(A; F) is normalized by dividing it by

    (Hi(A) + Hi(F)) / 2    (2)
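The MI definition in Eq. (1) above can be checked numerically. The snippet below is an illustrative sketch (not the paper's Matlab code): it computes I(X;Y) in bits from a small discrete joint distribution by forming the marginals and summing over nonzero entries.

```python
import numpy as np

def mutual_information(p_xy):
    """Mutual information I(X;Y) in bits from a discrete joint PDF."""
    p_xy = np.asarray(p_xy, dtype=float)
    p_x = p_xy.sum(axis=1, keepdims=True)    # marginal p(x)
    p_y = p_xy.sum(axis=0, keepdims=True)    # marginal p(y)
    mask = p_xy > 0                          # treat 0*log(0) as 0
    return float(np.sum(p_xy[mask] * np.log2(p_xy[mask] / (p_x * p_y)[mask])))

# Perfectly dependent binary variables: I(X;Y) = H(X) = 1 bit
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))   # 1.0

# Independent variables: I(X;Y) = 0
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # 0.0
```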
[Fig. 1: block diagram — the source images A and B and the fused image F each pass through feature extraction; the resulting MPDFs p(a), p(b), and p(f) feed the JPDF estimation of p(f,a) and p(f,b), from which the mutual informations I(F;A) and I(F;B) yield the overall FMI of F with respect to A and B.]

Fig. 1. Overall framework of the FMI metric
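As a rough illustration of the first two stages of the framework in Fig. 1, the sketch below extracts a gradient-magnitude feature image and normalizes it into an MPDF. The function name is ours, the gradient is just one possible feature per the text, and Nelsen's JPDF estimator is deliberately not reproduced here.

```python
import numpy as np

def feature_mpdf(image):
    """Gradient-magnitude feature image, normalized to a marginal PDF."""
    gy, gx = np.gradient(image.astype(float))  # derivatives along rows, cols
    feat = np.hypot(gx, gy)                    # gradient magnitude feature
    total = feat.sum()
    if total == 0:                             # flat image: fall back to uniform
        return np.full(feat.size, 1.0 / feat.size)
    return (feat / total).ravel()              # sums to 1 -> a valid MPDF

img = np.arange(16.0).reshape(4, 4)
p = feature_mpdf(img)
print(p.shape, round(p.sum(), 6))  # (16,) 1.0
```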

where Hi(A) and Hi(F) are the entropies of the corresponding windows from the two images [8]. The average of all the normalized regional FMIs is then taken as the mutual information between the two images. The MIs between the two source images, A and B, and the fused image, F, are calculated as:

    I(A; F) = (2/n) Σ_{i=1}^{n} Ii(A; F) / (Hi(A) + Hi(F))    (3)

    I(B; F) = (2/n) Σ_{i=1}^{n} Ii(B; F) / (Hi(B) + Hi(F))    (4)

Finally, the average of the above MIs is taken as the image fusion metric that measures the amount of information conducted from the source images to the fused image:

    FMI_F^{AB} = (1/2) (I(A; F) + I(B; F))    (5)

By substituting Equations 3 and 4 into 5, we obtain the overall equation for FMI:

    FMI_F^{AB} = (1/n) Σ_{i=1}^{n} [ Ii(A; F) / (Hi(A) + Hi(F)) + Ii(B; F) / (Hi(B) + Hi(F)) ]    (6)

Fig. 2. "TNO UN Camp" image set, source images and fusion results: (a) IR image, (b) Visual image, (c) RP, (d) DWT, (e) SIDWT, (f) LP

TABLE I. OBJECTIVE PERFORMANCE OF THE IMAGE FUSION ALGORITHMS ON THE "UN CAMP" SET

    Metric    Piella      Petrovic    FMI         Fast-FMI
    RP        0.652567    0.520748    0.398804    0.394462
    DWT       0.647161    0.499502    0.402932    0.402678
    SIDWT     0.675360    0.544596    0.433452    0.419321
    LP        0.697776    0.570685    0.434329    0.435068
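Equation (6) lends itself to a direct sliding-window sketch. The toy implementation below is not the paper's algorithm verbatim: it substitutes a simple joint histogram for Nelsen's JPDF estimator (a deliberate simplification for clarity) and averages the entropy-normalized regional MIs exactly as Eq. (6) prescribes.

```python
import numpy as np

def _regional_mi_and_entropies(wa, wf, bins=8):
    """Histogram-based I(A;F), H(A), H(F) for one pair of windows."""
    joint, _, _ = np.histogram2d(wa.ravel(), wf.ravel(), bins=bins)
    p = joint / joint.sum()
    pa, pf = p.sum(axis=1), p.sum(axis=0)      # marginals
    nz = p > 0
    mi = np.sum(p[nz] * np.log2(p[nz] / np.outer(pa, pf)[nz]))
    ha = -np.sum(pa[pa > 0] * np.log2(pa[pa > 0]))
    hf = -np.sum(pf[pf > 0] * np.log2(pf[pf > 0]))
    return mi, ha, hf

def fast_fmi(a, b, f, w=3):
    """Toy Fast-FMI per Eq. (6): mean of entropy-normalized regional MIs."""
    vals = []
    for i in range(a.shape[0] - w + 1):
        for j in range(a.shape[1] - w + 1):
            s = np.s_[i:i + w, j:j + w]
            mi_af, ha, hf = _regional_mi_and_entropies(a[s], f[s])
            mi_bf, hb, _ = _regional_mi_and_entropies(b[s], f[s])
            term = 0.0
            if ha + hf > 0:                    # skip degenerate flat windows
                term += mi_af / (ha + hf)
            if hb + hf > 0:
                term += mi_bf / (hb + hf)
            vals.append(term)
    return float(np.mean(vals))

# Sanity check: when F is identical to both sources, each normalized
# regional MI reaches its maximum of 1/2, so the metric equals 1.
img = np.arange(16, dtype=float).reshape(4, 4)
print(fast_fmi(img, img, img))  # 1.0
```

Note that each normalized term Ii/(Hi + Hi') is at most 1/2, so the metric lies in [0, 1], matching the score ranges reported in the tables.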

III. EXPERIMENTAL RESULTS

Numerous experiments show that Fast-FMI is consistent with the previously proposed FMI metric, which already has a proven coherence with subjective evaluations. In this section, we provide the experimental results of the proposed Fast-FMI metric along with the FMI method proposed in [5] and the Piella [9] and Petrovic [10] metrics.

We use several well-known fusion methods whose performances have been subjectively evaluated in the literature. In order of better performance, these methods are: Laplacian pyramid (LP) [11], shift-invariant discrete wavelet transform (SIDWT) [12], discrete wavelet transform (DWT) [13], and ratio pyramid (RP) [14]. Perceptual comparisons in Figures 2, 3 and 4, as well as the subjective evaluations, verify this performance order [5], [12].

The abovementioned fusion methods are applied to three pairs of images from different applications: the "TNO UN Camp" set, including visual and infrared images, as an example of a multimodal surveillance application; the "Pepsi" set, including multi-focus images, as an example from visual sensor networks or digital camera vision; and the "Medical" set, containing MRI and CT images, as an example from a medical imaging application. All these images are publicly available at the Image Fusion web site [15]. Figures 2, 3 and 4 show these images along with the fusion results of the aforementioned algorithms. Objective evaluation results are presented in Tables I, II and III. The proposed Fast-FMI metric outperforms the well-known Piella [9] and Petrovic [10] metrics in consistency with the subjective results: it gives higher values in the order of better efficiency, consistent with the subjective evaluations and the FMI metric proposed in [5].
Fig. 3. "Pepsi" multi-focus image set, source images and fusion results: (a) Focus on right, (b) Focus on left, (c) RP, (d) DWT, (e) SIDWT, (f) LP

Fig. 4. "Medical" image set, source images and fusion results: (a) CT image, (b) MRI image, (c) RP, (d) DWT, (e) SIDWT, (f) LP

TABLE II. OBJECTIVE PERFORMANCE OF THE IMAGE FUSION ALGORITHMS ON THE "PEPSI" SET

    Metric    Piella      Petrovic    FMI         Fast-FMI
    RP        0.801862    0.605995    0.807054    0.627455
    DWT       0.939113    0.732229    0.868535    0.638274
    SIDWT     0.943491    0.749176    0.870496    0.649482
    LP        0.944896    0.767344    0.874230    0.650393

TABLE III. OBJECTIVE PERFORMANCE OF THE IMAGE FUSION ALGORITHMS ON THE "MEDICAL" SET

    Metric    Piella      Petrovic    FMI         Fast-FMI
    RP        0.493197    0.481256    0.319465    0.352471
    DWT       0.615847    0.473003    0.321802    0.372243
    SIDWT     0.653120    0.600056    0.349273    0.398720
    LP        0.680152    0.624309    0.350625    0.454737

From the complexity point of view, using a 3 × 3 window, the size of the JPDF in each block is (3 × 3)². With a sliding window, the number of regional FMIs is n = M × N. Therefore, the total number of calculations is only 81 × n, i.e., 81/n times that of the previous method, and the complexity drops from O(n²) to O(n), a significant reduction.

IV. CONCLUSION

In this paper, a new model for calculating the feature mutual information metric is presented. The new Fast-FMI method is not only consistent with the previous FMI metric and with subjective results, but also reduces the time complexity of the algorithm from quadratic, i.e., O(n²), to linear, i.e., O(n). Since the new algorithm runs in real time, it can also be used for optimization purposes. Various experiments verify the performance of the proposed algorithm in fusion quality assessment, and the reduction in complexity is mathematically proven.

REFERENCES

[1] Haghighat, M.B.A., Aghagolzadeh, A., and Seyedarabi, H.: 'Multi-focus image fusion for visual sensor networks in DCT domain', Computers & Electrical Engineering, 2011, 37, pp. 789-797
[2] Haghighat, M.B.A., Aghagolzadeh, A., and Seyedarabi, H.: 'Real-time fusion of multi-focus images for visual sensor networks', 6th Iranian Machine Vision and Image Processing (MVIP), 2010, pp. 1-6
[3] Qu, G., Zhang, D., and Yan, P.: 'Information measure for performance of image fusion', Electronics Letters, 2002, 38, pp. 313-315
[4] Cvejic, N., Canagarajah, C.N., and Bull, D.R.: 'Image fusion metric based on mutual information and Tsallis entropy', Electronics Letters, 2006, 42, pp. 626-627
[5] Haghighat, M.B.A., Aghagolzadeh, A., and Seyedarabi, H.: 'A non-reference image fusion metric based on mutual information of image features', Computers & Electrical Engineering, 2011, 37, pp. 744-756
[6] Cover, T.M., and Thomas, J.A.: 'Elements of Information Theory', Wiley Series in Telecommunications and Signal Processing, 2006
[7] Nelsen, R.B.: 'Discrete bivariate distributions with given marginals and correlation', Communications in Statistics - Simulation and Computation, 1987, 16, pp. 199-208
[8] Hossny, M., Nahavandi, S., and Creighton, D.: 'Comments on "Information measure for performance of image fusion"', Electronics Letters, 2008, 44, pp. 1066-1067
[9] Piella, G., and Heijmans, H.: 'A new quality metric for image fusion', Proceedings of the International Conference on Image Processing (ICIP), 2003, 3, pp. 173-176
[10] Xydeas, C.S., and Petrovic, V.: 'Objective image fusion performance measure', Electronics Letters, 2000, 36, pp. 308-309
[11] Zhang, Z., and Blum, R.S.: 'A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application', Proceedings of the IEEE, 1999, 87(8), pp. 1315-1326
[12] Rockinger, O.: 'Image sequence fusion using a shift-invariant wavelet transform', International Conference on Image Processing, 1997, 3, pp. 288-291
[13] Li, H., Manjunath, B.S., and Mitra, S.K.: 'Multisensor image fusion using the wavelet transform', Graphical Models and Image Processing, 1995, 57, pp. 235-245
[14] Toet, A.: 'Image fusion by a ratio of low-pass pyramid', Pattern Recognition Letters, 1989, 9, pp. 245-253
[15] http://www.imagefusion.org
