Fusion of Infrared and Visible Images Based on Image Enhancement and Feature Extraction
Abstract—In order to obtain a night vision fusion image that is more suitable for human visual perception, this paper proposes an infrared and visible image fusion method based on visible image enhancement and infrared feature extraction. Firstly, guided filtering and dynamic range compression are used to enhance the visible image adaptively. At the same time, the infrared pixel value is used as the index factor to transform the visible image exponentially, extracting the infrared feature information. Finally, the hybrid multi-scale decomposition method based on the guided filter is used to fuse the images. Experimental results show that the fused image has clear background details and outstanding thermal targets, and is superior to the compared methods in both visual quality and objective evaluation.

Keywords- image fusion; image enhancement; infrared feature extraction; multi-scale image decomposition

I. INTRODUCTION

Night vision imaging technology is a photoelectric technology that realizes night observation by means of photoelectric imaging devices, and it includes low-light-level night vision and infrared night vision. Because the imaging of low-light-level night vision and infrared night vision is complementary, and neither alone is well suited to human visual perception, a new image fusion technique has been derived that integrates the visible image with the infrared image [1]. The fused image can provide more information than either single image and realizes the enhancement of the night vision scene. Therefore, in order to enhance night vision and improve the effectiveness of weapon equipment in obtaining information, situation perception, cooperative mobility and strategic support at night and under bad conditions, it is necessary to study more effective fusion algorithms.

In recent years, much research has been done on pixel-level image fusion algorithms at home and abroad, such as fusion methods based on multi-scale image decomposition [2]. These algorithms can preserve the information of the different source images well, but the detail of a visible image captured at night is poor. If fusion is conducted directly, the overall clarity of the fusion result is not ideal, so it is necessary to enhance the contrast of the visible image before fusion. At the same time, in a low thermal contrast environment, background details such as plants or land are clearly displayed in the visible light band, while the infrared image provides redundant information; fusing the two images then degrades the original information of the visible image, and the resulting fused image is not conducive to human visual perception.

In order to solve the problems of the above algorithms, we propose an infrared and visible image fusion method based on image enhancement and feature extraction. Before fusing, we perform dynamic range compression to improve the contrast of the visible image. At the same time, we use the pixel values of the infrared image to transform the visible image exponentially, extracting the infrared features and highlighting the hot targets. This process transforms the task of infrared and visible image fusion into the fusion of similar images. After that, the exponentially transformed image is fused with the enhanced visible light image, which further highlights the characteristics obtained in the visible light band. Furthermore, the hybrid multi-scale image decomposition method (HMSD) based on guided filtering is adopted to decompose the source images into the base layer, the edge layer and the texture layer, and fusion rules designed for the three image layers at the different scale levels are applied separately. Finally, an image that is suitable for human visual perception is obtained. Experimental results show that the algorithm can not only highlight the infrared targets, but also better preserve the visible image information.

II. ADAPTIVE ENHANCEMENT ALGORITHM FOR NIGHT VISIBLE IMAGE BASED ON GUIDED FILTERING

In order to improve image visibility and assist observers under poor lighting conditions, Zhou et al. propose a dynamic range compression algorithm based on guided filtering [3]. In this paper, the adaptive enhancement of the nocturnal visible image is carried out using this algorithm. Guided filtering is an image filter based on a local linear model. It keeps the edge information of the image and is widely used for image smoothing and dynamic range compression.

The adaptive enhancement algorithm based on guided filtering [4] consists of three steps: two-scale image decomposition, dynamic range compression and contrast correction. Firstly, assuming I is an input image normalized to [0, 255], let I_b = GF_{r,\varepsilon}(I) be the filtered image produced by the guided filter, where r and \varepsilon are the radius and regularization parameters of the guided filter.
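As a rough illustration of this pipeline, the following is a minimal sketch of a self-guided filter in the spirit of He et al. [4] together with a simple logarithmic compression of the base layer. The box-filter construction is the standard one, but the compression curve and the parameter values (r = 16, eps = 0.01, gain = 30) are illustrative assumptions rather than the exact settings of [3].

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r, eps):
    """Guided filter GF_{r,eps} (after He et al. [4]); I is the guide,
    p the input, both float arrays scaled to [0, 1]."""
    win = 2 * r + 1                        # square window of radius r
    mean_I = uniform_filter(I, win)
    mean_p = uniform_filter(p, win)
    corr_Ip = uniform_filter(I * p, win)
    corr_II = uniform_filter(I * I, win)
    var_I = corr_II - mean_I ** 2
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)             # local linear coefficients
    b = mean_p - a * mean_I
    return uniform_filter(a, win) * I + uniform_filter(b, win)

def enhance_visible(V, r=16, eps=0.01, gain=30.0):
    """Two-scale enhancement sketch: base/detail split with the guided
    filter, then an illustrative log compression of the base layer."""
    V = V.astype(np.float64) / 255.0       # the paper normalizes to [0, 255]
    base = guided_filter(V, V, r, eps)     # edge-preserving base layer I_b
    detail = V - base                      # detail layer
    base_c = np.log1p(gain * base) / np.log1p(gain)  # lifts dark regions
    out = np.clip(base_c + detail, 0.0, 1.0)
    return (out * 255.0).astype(np.uint8)
```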
III. INFRARED FEATURE EXTRACTION

The infrared pixel value is used as the index factor to transform the visible image exponentially, which extracts the infrared feature information and highlights the features obtained in the visible light band. This lays a foundation for improving the contrast of the fused result and for human visual perception in the subsequent image fusion.
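As an illustration of the kind of mapping described above, with the normalized infrared intensity acting as the index factor of the visible image, a minimal sketch is given below. The specific exponent form V^(1 - IR) is an assumption chosen so that hot targets are lifted toward white; it is not the paper's exact formula.

```python
import numpy as np

def ir_exponential_transform(V, IR):
    """Illustrative exponential transform: the normalized infrared
    intensity serves as the index factor, so hot targets (IR -> 1)
    lift the visible intensity toward 1 while cold regions stay
    nearly unchanged. The exponent form (1 - IR) is an assumption."""
    V = V.astype(np.float64) / 255.0
    IR = IR.astype(np.float64) / 255.0
    out = np.power(V, 1.0 - IR)        # exponent shrinks where IR is hot
    return (np.clip(out, 0.0, 1.0) * 255.0).astype(np.uint8)
```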
IV. IMAGE FUSION

The dark details of the original visible image and the high-temperature targets of the infrared image are enhanced by dynamic range compression and exponential transformation, respectively. Zhou et al. [3] propose an HMSD image fusion method based on guided filtering, which consists of three steps: guided filtering is used to decompose the source image into multi-scale base layers, edge layers and texture layers; different rules are used to fuse the different layers; finally, the fused image is reconstructed by combining the layers. The fusion steps are as follows:
Step 1: Set GF1 as a guided filter with \sigma = 10^4 for maintaining the smoothing characteristics of the image;

Step 2: Set GF2 as a guided filter with \sigma_{i+1} = \sigma_i / 4 for ensuring that edge information can be preserved while texture details are progressively removed by the guided filtering cascade;

Step 3: Set r_{i+1} = 2 r_i for both filters to ensure that the filter size is doubled at each level;

Step 4: Input the grayscale image I and filter it with GF1 and GF2 to obtain I^1 and \hat{I}^1;

Step 5: Perform the following calculations:

D^{j,0} = GF_{r_{j-1}, \hat{\varepsilon}}(\hat{I}^{j-1}) - GF_{r_j, \varepsilon_j}(I^j)    (8)

D^{j,1} = GF_{r_j, \varepsilon_j}(I^j) - GF_{r_j, \hat{\varepsilon}}(\hat{I}^j)    (9)

where j = 1, 2, ..., n denotes the level of decomposition, D^{j,0} denotes texture information, D^{j,1} denotes edge information, the original image is I, and the base information of the multi-scale decomposition is B.
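The sketch below gives one self-consistent reading of this two-filter cascade: a strongly edge-preserving guided filter (small epsilon) and a strongly smoothing one (large epsilon) are run at each level, their differences yielding the texture and edge layers, with the residual as the base. The epsilon values, their per-level schedule and the level count are illustrative assumptions; guided_filter refers to the sketch in Section II. By construction the layers telescope, so B plus the sum of all layers reconstructs I exactly.

```python
import numpy as np

def hmsd_decompose(I, levels=4, r1=2, eps_edge=1e-4, eps_smooth=0.16):
    """Hybrid multi-scale decomposition sketch in the spirit of Eqs. (8)-(9)."""
    s = I.astype(np.float64)               # running image
    r = r1
    D = []
    for j in range(1, levels + 1):
        edge_pres = guided_filter(s, s, r, eps_edge)   # keeps edges (GF2 role)
        smooth = guided_filter(s, s, r, eps_smooth)    # smooths edges too (GF1 role)
        texture = s - edge_pres            # D^{j,0}: fine texture
        edge = edge_pres - smooth          # D^{j,1}: edges removed only by smoothing
        D.append((texture, edge))
        s = smooth                         # pass the smoothed image down a level
        r *= 2                             # Step 3: filter size doubles each level
        eps_edge /= 4                      # Step 2-style schedule (assumption)
    return D, s                            # s is now the base layer B
```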
The infrared image source and the visible image source are decomposed in this way, and the decomposed information is obtained. Different combination rules are adopted for the three scale levels, that is, the small-scale level (texture layer), the large-scale level (edge layer) and the base level (base layer).

Typically, for the small-scale level, only the top-level decomposition information (j = 1) is selected. Fusion weights at this level are determined by absolute-maximum selection:

C^{1,i} = \begin{cases} 1, & |D_r^{1,i}| > |D_v^{1,i}| \\ 0, & \text{otherwise} \end{cases} \quad (i = 0, 1)    (10)
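A direct transcription of this rule, assuming D_r and D_v are level-1 infrared and visible layers laid out as in the decomposition sketch above:

```python
import numpy as np

def fuse_small_scale(Dr, Dv):
    """Absolute-maximum selection of Eq. (10): take the infrared coefficient
    wherever its magnitude exceeds the visible one (i = 0: texture, i = 1: edge)."""
    C = (np.abs(Dr) > np.abs(Dv)).astype(np.float64)   # binary weight map C^{1,i}
    return C * Dr + (1.0 - C) * Dv
```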
The large-scale level includes the decomposition levels from j = 2 to j = n. Since prominent infrared spectral information is usually a large-scale feature with prominent edges, the decomposed edge features of infrared images usually correspond to the important infrared spectral features and can therefore be used to determine the fusion weights of the infrared image information. First, the important infrared spectral features are identified at each level:

R^j = \begin{cases} |D_r^{j,1}| - |D_v^{j,1}|, & |D_r^{j,1}| - |D_v^{j,1}| > 0 \\ 0, & \text{otherwise} \end{cases}    (11)

Next, R^j is normalized:

P^j = R^j / \max_{x \in \Omega} R^j(x)    (12)

Then, the fusion weights are obtained by adjusting the following non-linear functions:

C^j = g_{\sigma_c} * S_\tau(P^j)    (13)

S_\tau(x) = \arctan(\tau x) / \arctan(\tau)    (14)

g_{\sigma_c}(x) = \exp(-0.5 x^2 / \sigma_c^2) / (2 \pi \sigma_c^2)    (15)

where * denotes convolution and g_{\sigma_c} provides local smoothing and noise reduction. The parameter \tau controls the relative amount of infrared spectral information injected into the visible image through these equations: the larger the value of \tau, the more spectral information is fused.
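A compact sketch of Eqs. (11)-(15) follows, for one large-scale level j; the Gaussian convolution stands in for g_sigma_c, and the values of tau and sigma_c are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def large_scale_weights(Dr_edge, Dv_edge, tau=10.0, sigma_c=2.0):
    """Fusion weight map C^j of Eqs. (11)-(15) for one large-scale level."""
    R = np.maximum(np.abs(Dr_edge) - np.abs(Dv_edge), 0.0)   # Eq. (11)
    peak = R.max()
    P = R / peak if peak > 0 else R                          # Eq. (12)
    S = np.arctan(tau * P) / np.arctan(tau)                  # Eq. (14): saturating boost
    return gaussian_filter(S, sigma_c)                       # Eq. (13): g_sigma_c * S_tau(P)
```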
The decomposition information at the large-scale levels is then combined by using the following formula:

D_F^{j,i} = C^j D_r^{j,i} + (1 - C^j) D_v^{j,i} \quad (j = 2, \ldots, n)(i = 0, 1)    (16)

Finally, the base information is merged:

B_F = C^b B_r + (1 - C^b) B_v    (17)

C^b = g_{\sigma_b} * S_\tau(P^n)    (18)

Usually \sigma_b = 4 r_n is used for the smoothing, which makes it easy to combine the information of the base images. After all the decomposed information has been combined at the different scale levels, it is synthesized to produce the final fused image.
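Putting Eqs. (10) and (16)-(18) together, a sketch of the complete combination and synthesis step might look as follows; fuse_small_scale, large_scale_weights and the (D, B) layout follow the earlier sketches, and deriving C^b from the level-n edge layers with sigma_b = 4 r_n is an assumption based on Eq. (18).

```python
def fuse_hmsd(D_r, B_r, D_v, B_v, tau=10.0, r_n=16):
    """Combine decomposed layers per Eqs. (10), (16)-(18), then reconstruct."""
    fused_layers = []
    for j, ((tr, er), (tv, ev)) in enumerate(zip(D_r, D_v), start=1):
        if j == 1:                               # small-scale level: Eq. (10)
            fused_layers += [fuse_small_scale(tr, tv), fuse_small_scale(er, ev)]
        else:                                    # large-scale levels: Eq. (16)
            C = large_scale_weights(er, ev, tau)
            fused_layers += [C * tr + (1 - C) * tv, C * er + (1 - C) * ev]
    # Eqs. (17)-(18): base weights from the top-level edge layers, sigma_b = 4 r_n
    Cb = large_scale_weights(D_r[-1][1], D_v[-1][1], tau, sigma_c=4 * r_n)
    B_f = Cb * B_r + (1 - Cb) * B_v
    return B_f + sum(fused_layers)               # synthesize the final fused image
```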
V. EXPERIMENTAL RESULTS AND ANALYSIS

In order to verify the effectiveness of the proposed algorithm, we choose the DWT [2], DTCWT [2], NSCT [8], GFF [9] and HMSD algorithms for comparison. Firstly, we compare the visual effects of the fusion results by subjective observation, and then we use objective evaluation indexes to assess the fused images. We select four widely accepted evaluation indexes, QMI, QY, QP and QCB, which are based on image mutual information, image structural similarity, image features and human-visual-perception-inspired assessment, respectively, to evaluate the fusion effect objectively [7]. For all of these metrics, a higher score indicates better performance.

The fusion results of the different methods are shown in Figs. 3 and 4. Fig. 3 shows the fusion results for the "Road" source images; Fig. 3(c) is the result of the DWT method. To some extent, the information of the two image sources is fused.
However, the overall brightness is low, the visual information provided is limited, and there is visual distortion. The results of DTCWT (Fig. 3(d)) and NSCT (Fig. 3(e)) show that the image is too smooth because of the over-fusion of infrared information, resulting in a loss of detail. GFF (Fig. 3(f)) preserves the edge information better but neglects the base information, so the fused image lies between the visible image and the infrared image and appears blurred. The overall effect of HMSD (Fig. 3(g)) is better, but its processing of bright areas is much more effective than that of low-light areas, and the fusion of low-light areas still provides little visual information. Fig. 3(h) is the result of our method; because the scene is rendered in the visible light band, it is more suitable for human observation, and the fused image has outstanding thermal targets and almost no artifacts.

In Fig. 4, the DWT and DTCWT methods incorporate too much dark infrared spectral information, and some of the details are unclear; NSCT and GFF exhibit edge blurring. HMSD eliminates the edge artifacts, but it cannot provide complete information about the scene. The proposed algorithm can not only keep the details of the background but also highlight the hot targets. From the above analysis, we can see that the fused image obtained by the proposed method is more suitable for human visual perception.
The objective performance indexes for the above two groups of source-image fusion results are shown in Table 1. It can be seen that, for the "Road" source images, the proposed method obtains first place on the metrics QMI, QP and QCB, and second place on the metric QY. For the "Kayak" source images, the proposed method scores higher than the other five methods on all of QMI, QY, QP and QCB, which shows that the performance of the fusion rule is better.
TABLE 1. OBJECTIVE CRITERIA OF THE FUSED IMAGES BASED ON DIFFERENT FUSION METHODS

Source images   Metric   DWT      DTCWT    NSCT     GFF      HMSD     Proposed
Road            QMI      0.3706   0.2835   0.3836   0.2994   0.2604   0.3932
                QY       0.6891   0.7394   0.7074   0.8906   0.7820   0.8441
                QP       0.3453   0.3443   0.3824   0.4886   0.3767   0.5187
                QCB      0.3287   0.4141   0.4195   0.4444   0.4806   0.6046
Kayak           QMI      0.4644   0.4168   0.4469   0.4218   0.4534   0.5850
                QY       0.7258   0.7459   0.7571   0.8015   0.7788   0.8723
                QP       0.4583   0.4319   0.3779   0.4373   0.4024   0.6144
                QCB      0.4255   0.4597   0.4286   0.5640   0.4048   0.5992
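For reference, the mutual-information index can be computed as below. This follows one commonly used normalized form, QMI = 2 [ I(F,A)/(H(F)+H(A)) + I(F,B)/(H(F)+H(B)) ], which is an assumption about the exact variant evaluated in [7].

```python
import numpy as np

def _entropy_and_mi(a, b, bins=256):
    """Histogram estimates of H(a), H(b) and the mutual information I(a, b)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    h_a = -np.sum(px[px > 0] * np.log2(px[px > 0]))
    h_b = -np.sum(py[py > 0] * np.log2(py[py > 0]))
    mi = np.sum(pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz]))
    return h_a, h_b, mi

def q_mi(A, B, F, bins=256):
    """Normalized mutual-information fusion metric (one common QMI variant);
    A and B are the source images, F the fused image."""
    h_a, h_f1, mi_af = _entropy_and_mi(A, F, bins)
    h_b, h_f2, mi_bf = _entropy_and_mi(B, F, bins)
    return 2.0 * (mi_af / (h_a + h_f1) + mi_bf / (h_b + h_f2))
```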
VI. CONCLUSION

In this paper, we propose an improved night vision image fusion method. Before fusion, we first perform dynamic range compression on the visible image on the basis of guided filtering to achieve contrast enhancement and highlight the details of dark areas. At the same time, we use an exponential transform to extract infrared image features, keeping details while highlighting high-temperature targets. After that, we use the HMSD algorithm to decompose the processed images into the base layer, the edge layer and the texture layer, and adopt different fusion weights at the different scale levels. In this way we further highlight the characteristics of the two image sources and effectively retain the important visual information of the source images. Experimental results show that this method is superior to the other fusion methods in visual quality and objective evaluation.

REFERENCES

[1] Jin X, Jiang Q, Yao S, et al., "A survey of infrared and visual image fusion methods", Infrared Physics & Technology, vol. 85, pp. 478-501, 2017.
[2] Li S, Yang B, Hu J, "Performance comparison of different multi-resolution transforms for image fusion", Information Fusion, vol. 12(2), pp. 74-84, 2011.
[3] Zhou Z, Dong M, Xie X, et al., "Fusion of infrared and visible images for night-vision context enhancement", Applied Optics, vol. 55(23), pp. 1-10, 2016.
[4] He K, Sun J, Tang X, "Guided image filtering", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35(6), pp. 1397-1409, 2013.
[5] Zhu H, Liu Y, Zhang W, "Infrared and visible image fusion based on contrast enhancement and multi-scale edge-preserving decomposition", Journal of Electronics & Information Technology, vol. 40(6), pp. 1294-1300, 2018. (in Chinese)
[6] Zhu H, Liu Y, Zhang W, "Night-vision image fusion based on intensity transformation", Journal of Electronics & Information Technology, vol. 41(3), pp. 640-648, 2019. (in Chinese)
[7] Liu Z, Blasch E, Xue Z, et al., "Objective assessment of multiresolution image fusion algorithms for context enhancement in night vision: a comparative study", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34(1), pp. 94-109, 2012.
[8] Chen Z, Yang X, Zhang C, et al., "Infrared and visible image fusion based on the compensation mechanism in NSCT domain", Chinese Journal of Scientific Instrument, vol. 37(4), pp. 860-870, 2016. (in Chinese)
[9] Li S, Kang X, Hu J, "Image fusion with guided filtering", IEEE Transactions on Image Processing, vol. 22(7), pp. 2864-2875, 2013.