Fast-FMI

Abstract—In this paper, we present a non-reference image fusion metric based on the mutual information of image features. Whereas a recent metric proposed by the author, called FMI, achieves such a goal, its algorithm is complex and has high memory requirements. This paper shows how to modify the model of FMI and proposes a faster algorithm that achieves similar results. The new algorithm achieves a significant complexity reduction in comparison with the previous model. Various experiments prove the efficiency of the algorithm and its consistency with subjective criteria. Matlab source code for this metric is provided at http://www.mathworks.com/matlabcentral/fileexchange/45926.

I. INTRODUCTION

Image fusion is the process of combining multiple source images into a single image that contains a more informative and descriptive scene than any of the source images individually. Image fusion not only decreases the amount of stored or transmitted data, but also helps to create new images that are more informative and suitable for both visual perception and further computer processing [1], [2]. The widespread applications of image fusion have led computer vision scientists to propose different fusion methods. Therefore, a metric is needed to evaluate these algorithms.

The problem with quality assessment of fused images is that there is usually no ground truth to compare the fused images against, so reference-based image quality measures cannot be used. Therefore, a metric is needed that can measure the amount of information conveyed from the source images to the fused image.

Mutual information (MI) helps us measure the amount of information that the fused image contains about the source images. This is the motivation for several MI-based fusion metrics. Previously proposed MI-based metrics [3], [4] use the normalized histograms of the images as probability densities. However, histograms do not carry enough information about the region-based details and the quality of the images.

The FMI metric [5], on the other hand, calculates the mutual information of the image features. This method outperforms the other metrics in consistency with subjective quality measures. However, the algorithm is complex and requires a large amount of memory. In this paper, we propose a new method of FMI calculation that significantly decreases the complexity and makes the metric more suitable for real-time applications.

The rest of this paper is organised as follows: Section II describes the steps of the Fast-FMI algorithm in detail. Section III verifies the proposed algorithm and compares it with other well-known metrics. Finally, Section IV concludes the paper.

II. PROPOSED METHOD

The proposed algorithm utilizes mutual information to measure the similarity of two variables. MI is a measure of the amount of information one random variable contains about another. It measures the degree of dependency between two variables X and Y by measuring the Kullback-Leibler divergence between the joint distribution, p(x, y), and the distribution associated with the case of complete independence, p(x)p(y) [6]. It is defined as:

    I(X;Y) = Σ_{x,y} p(x,y) log [ p(x,y) / (p(x) p(y)) ]    (1)

Fig. 1 illustrates the overall framework of the FMI calculation. Denoting the source images as A and B, and the fused image as F, the FMI metric [5] extracts feature images of the source and fused images using a feature extraction method, e.g., gradient. After feature extraction, the feature images are normalized to create the marginal probability density functions (MPDFs), p(a), p(b), and p(f). However, the calculation of the joint probability density functions (JPDFs), p(a,f) and p(b,f), is not as straightforward as the joint histogram used in histogram-based MI metrics. Nelsen's JPDF estimation method [7] is employed to estimate p(a,f) from p(a) and p(f). This algorithm is described in detail in [5].

[Fig. 1. Overall framework of the FMI calculation: feature extraction (e.g., p(a) from source image A), JPDF estimation (e.g., p(f,a)), and mutual information (e.g., I(F;A)).]

The complexity of the JPDF estimation is quadratic in the size of the MPDFs. Assuming the image size is M × N, in the previous FMI method [5] the size of the MPDF equals the number of elements in the feature image, which is n = M × N. Consequently, the JPDF has n² entries, which takes on the order of n² operations to calculate. In Fast-FMI, we propose to use small corresponding windows between the source and fused images and to measure regional FMIs. The window slides over the image, and the regional FMIs are calculated all over the image.

Fast-FMI calculates the regional mutual information between corresponding windows in the fused image and the two source images, I_i(A;F) and I_i(B;F). The mutual information I_i(A;F) is normalized by dividing it by

    [H_i(A) + H_i(F)] / 2    (2)

where H_i(·) denotes the entropy of the corresponding window. Averaging the normalized regional values over all n windows gives

    I(A;F) = (2/n) Σ_{i=1}^{n} I_i(A;F) / [H_i(A) + H_i(F)]    (3)

and, similarly,

    I(B;F) = (2/n) Σ_{i=1}^{n} I_i(B;F) / [H_i(B) + H_i(F)].    (4)
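The sliding-window calculation of Eqs. (2)-(4) can be sketched in a few lines. This is a simplified illustration rather than the authors' implementation: it estimates each window's JPDF with a plain joint histogram instead of Nelsen's method [7], assumes feature values already normalized to [0, 1], and handles one source image at a time; the function names are ours.

```python
import numpy as np

def regional_nmi(a_win, f_win, bins=8):
    """Normalized regional MI: I_i(A;F) divided by (H_i(A) + H_i(F)) / 2,
    estimated from a joint histogram of the two windows."""
    joint, _, _ = np.histogram2d(a_win.ravel(), f_win.ravel(),
                                 bins=bins, range=[[0, 1], [0, 1]])
    p_af = joint / joint.sum()          # joint PDF p(a, f)
    p_a = p_af.sum(axis=1)              # marginal p(a)
    p_f = p_af.sum(axis=0)              # marginal p(f)
    nz = p_af > 0                       # skip empty histogram cells
    mi = np.sum(p_af[nz] * np.log2(p_af[nz] / np.outer(p_a, p_f)[nz]))
    h_a = -np.sum(p_a[p_a > 0] * np.log2(p_a[p_a > 0]))
    h_f = -np.sum(p_f[p_f > 0] * np.log2(p_f[p_f > 0]))
    if h_a + h_f == 0:                  # flat windows carry no information
        return 0.0
    return 2.0 * mi / (h_a + h_f)

def fast_fmi(src, fused, w=3):
    """Average the normalized regional MI over all sliding w-by-w windows."""
    M, N = src.shape
    vals = [regional_nmi(src[i:i + w, j:j + w], fused[i:i + w, j:j + w])
            for i in range(M - w + 1) for j in range(N - w + 1)]
    return float(np.mean(vals))
```

A fused image identical to its source yields the maximum score of 1 in every non-flat window; in practice the two scores fast_fmi(A, F) and fast_fmi(B, F) would then be combined, e.g., averaged.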
TABLE II. OBJECTIVE PERFORMANCE OF THE IMAGE FUSION ALGORITHMS ON "PEPSI" SET

Metric    Piella      Petrovic    FMI         Fast-FMI
RP        0.801862    0.605995    0.807054    0.627455
DWT       0.939113    0.732229    0.868535    0.638274
SIDWT     0.943491    0.749176    0.870496    0.649482
LP        0.944896    0.767344    0.874230    0.650393

TABLE III. OBJECTIVE PERFORMANCE OF THE IMAGE FUSION ALGORITHMS ON "MEDICAL" SET

Metric    Piella      Petrovic    FMI         Fast-FMI
RP        0.493197    0.481256    0.319465    0.352471
DWT       0.615847    0.473003    0.321802    0.372243
SIDWT     0.653120    0.600056    0.349273    0.398720
LP        0.680152    0.624309    0.350625    0.454737
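The operation counts behind the complexity comparison can be reproduced in a few lines. This is a sketch under the paper's own bookkeeping: only JPDF entries are counted, for an M × N image and the 3 × 3 window used in the experiments; the function names are ours.

```python
def jpdf_size_fmi(M, N):
    """Previous FMI [5]: one global JPDF over n = M*N feature samples."""
    n = M * N
    return n * n               # n^2 entries, i.e. O(n^2)

def jpdf_size_fast_fmi(M, N, w=3):
    """Fast-FMI: one (w*w)-by-(w*w) JPDF per window position, n = M*N windows."""
    n = M * N
    return (w * w) ** 2 * n    # 81 * n entries for w = 3, i.e. O(n)

# For a 256 x 256 image:
#   jpdf_size_fmi(256, 256)       -> 4294967296 (~4.3e9)
#   jpdf_size_fast_fmi(256, 256)  -> 5308416    (~5.3e6)
```

As in the text, the ratio of the two counts is 81·n / n² = 81/n, so the advantage grows with the image size.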
From the complexity point of view, using a 3 × 3 window, the size of the JPDF in each block is (3 × 3)² = 81. With a sliding window, the number of regional FMIs is n = M × N. Therefore, the total number of calculations is only 81 × n, which is 81/n times that of the previous method. The complexity has thus changed from O(n²) to O(n), which is a significant reduction.

IV. CONCLUSION

In this paper, a new model for calculating the feature mutual information metric is presented. The new Fast-FMI method is not only consistent with the previous FMI metric and with subjective results, but also significantly reduces the time complexity of the algorithm from quadratic, i.e., O(n²), to linear, i.e., O(n). Since the new algorithm runs in real time, it can be used for optimization purposes. Various experiments verify the performance of the proposed algorithm in fusion quality assessment. Furthermore, the reduction in complexity is mathematically proven.

REFERENCES

[1] Haghighat, M.B.A., Aghagolzadeh, A., and Seyedarabi, H.: 'Multi-focus image fusion for visual sensor networks in DCT domain', Computers & Electrical Engineering, 2011, 37, pp. 789-797
[2] Haghighat, M.B.A., Aghagolzadeh, A., and Seyedarabi, H.: 'Real-time fusion of multi-focus images for visual sensor networks', 6th Iranian Machine Vision and Image Processing (MVIP), 2010, pp. 1-6
[3] Qu, G., Zhang, D., and Yan, P.: 'Information measure for performance of image fusion', Electronics Letters, 2002, 38, pp. 313-315
[4] Cvejic, N., Canagarajah, C.N., and Bull, D.R.: 'Image fusion metric based on mutual information and Tsallis entropy', Electronics Letters, 2006, 42, pp. 626-627
[5] Haghighat, M.B.A., Aghagolzadeh, A., and Seyedarabi, H.: 'A non-reference image fusion metric based on mutual information of image features', Computers & Electrical Engineering, 2011, 37, pp. 744-756
[6] Cover, T.M. and Thomas, J.A.: 'Elements of Information Theory', Wiley Series in Telecommunications and Signal Processing, 2006
[7] Nelsen, R.B.: 'Discrete bivariate distributions with given marginals and correlation', Communications in Statistics - Simulation and Computation, 1987, 16, pp. 199-208
[8] Hossny, M., Nahavandi, S., and Creighton, D.: 'Comments on "Information measure for performance of image fusion"', Electronics Letters, 2008, 44, pp. 1066-1067
[9] Piella, G. and Heijmans, H.: 'A new quality metric for image fusion', Proceedings of the International Conference on Image Processing (ICIP), 2003, 3, pp. 173-176
[10] Xydeas, C.S. and Petrovic, V.: 'Objective image fusion performance measure', Electronics Letters, 2000, 36, pp. 308-309
[11] Zhang, Z. and Blum, R.S.: 'A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application', Proceedings of the IEEE, 1999, 87(8), pp. 1315-1326
[12] Rockinger, O.: 'Image sequence fusion using a shift-invariant wavelet transform', International Conference on Image Processing, 1997, 3, pp. 288-291
[13] Li, H., Manjunath, B.S., and Mitra, S.K.: 'Multisensor image fusion using the wavelet transform', Graphical Models and Image Processing, 1995, 57, pp. 235-245
[14] Toet, A.: 'Image fusion by a ratio of low-pass pyramid', Pattern Recognition Letters, 1989, 9, pp. 245-253
[15] http://www.imagefusion.org