$$\begin{pmatrix} I \\ v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 1/3 & 1/3 & 1/3 \\ -\sqrt{2}/6 & -\sqrt{2}/6 & 2\sqrt{2}/6 \\ 1/\sqrt{2} & -1/\sqrt{2} & 0 \end{pmatrix} \begin{pmatrix} R \\ G \\ B \end{pmatrix} \quad (1)$$
The hue H and saturation S are then given by

$$H = \arctan\left(\frac{v_2}{v_1}\right), \qquad S = \sqrt{v_1^2 + v_2^2} \quad (2)$$
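As an illustration, here is a minimal NumPy sketch of the forward transform of Eqs. (1)-(2). The function name and the float RGB array convention are our own assumptions, and `arctan2` is used in place of `arctan` so the hue stays well defined when $v_1 = 0$:

```python
import numpy as np

def ihs_forward(rgb):
    """Forward IHS transform, Eqs. (1)-(2); rgb is a (rows, cols, 3) float array."""
    M = np.array([[1/3,            1/3,            1/3],
                  [-np.sqrt(2)/6,  -np.sqrt(2)/6,  2*np.sqrt(2)/6],
                  [1/np.sqrt(2),   -1/np.sqrt(2),  0.0]])
    # Apply the 3x3 matrix of Eq. (1) to every pixel.
    I, v1, v2 = np.einsum('kc,hwc->khw', M, rgb)
    H = np.arctan2(v2, v1)          # hue, Eq. (2)
    S = np.sqrt(v1**2 + v2**2)      # saturation, Eq. (2)
    return I, v1, v2, H, S
```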
B. Inverse IHS transform
$$\begin{pmatrix} R_{new} \\ G_{new} \\ B_{new} \end{pmatrix} = \begin{pmatrix} 1 & -1/\sqrt{2} & 1/\sqrt{2} \\ 1 & -1/\sqrt{2} & -1/\sqrt{2} \\ 1 & \sqrt{2} & 0 \end{pmatrix} \begin{pmatrix} I_{new} \\ v_1 \\ v_2 \end{pmatrix} \quad (3)$$
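A matching sketch of the inverse transform of Eq. (3), under the same assumed conventions:

```python
import numpy as np

def ihs_inverse(I_new, v1, v2):
    """Inverse IHS transform, Eq. (3): rebuild RGB from the fused intensity."""
    s2 = np.sqrt(2)
    M_inv = np.array([[1.0, -1/s2,  1/s2],
                      [1.0, -1/s2, -1/s2],
                      [1.0,  s2,    0.0]])
    ihs = np.stack([I_new, v1, v2])               # (3, rows, cols)
    # Apply the inverse matrix of Eq. (3) to every pixel.
    return np.einsum('ck,khw->hwc', M_inv, ihs)   # (rows, cols, 3)
```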
III. NSCT TRANSFORM
The NSCT is built from a nonsubsampled pyramid filter bank (NSPFB) [6] and a nonsubsampled directional filter bank (NSDFB) [7]. The NSPFB first decomposes the image into different scales; the NSDFB then decomposes each sub-band image (except the low-frequency sub-band) into different directions, yielding the sub-band coefficients. The NSCT is a fully shift-invariant, multi-scale, and multi-direction expansion with a fast implementation. The structure of the NSCT is shown in Fig. 1.
Figure 1. The structure of NSCT
A. NSPFB
The nonsubsampled Laplacian pyramid consists of two-channel nonsubsampled 2-D filter banks. Because the filter bank performs no subsampling, it is translation invariant. The ideal passband support of the low-pass filter at the $j$-th stage is the region $[-\pi/2^j, \pi/2^j]^2$. Accordingly, the ideal support of the equivalent high-pass filter is the complement of the low-pass region, i.e., $[-\pi/2^{j-1}, \pi/2^{j-1}]^2 \setminus [-\pi/2^j, \pi/2^j]^2$ [1].
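As a rough illustration of the idea, the sketch below implements only the à trous principle behind the NSPFB: the analysis filter is dilated at each stage instead of the image being downsampled, so every sub-band keeps the input size and the decomposition is shift invariant. It is a simplified difference-pyramid analogue, not the exact two-channel filter bank of [1]; the kernel `h0` and all names are assumptions:

```python
import numpy as np
from scipy.ndimage import convolve

def nspfb_decompose(img, h0, levels):
    """A-trous-style nonsubsampled pyramid: filter at every scale, never downsample.

    At stage j the kernel h0 is dilated by inserting 2^j - 1 zeros between taps,
    shrinking its passband toward [-pi/2^j, pi/2^j]^2 while every sub-band stays
    the same size as the input. h0 could be, e.g., the separable kernel built
    from [1, 4, 6, 4, 1] / 16.
    """
    subbands, low = [], img.astype(float)
    for j in range(levels):
        k = 2 ** j
        h = np.zeros((np.array(h0.shape) - 1) * k + 1)
        h[::k, ::k] = h0                      # upsample the filter, not the image
        smooth = convolve(low, h, mode='mirror')
        subbands.append(low - smooth)         # band-pass detail at scale j
        low = smooth
    subbands.append(low)                      # coarsest approximation
    return subbands
```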
B. NSDFB
Nonsubsampled directional filter banks are built on the fan-shaped directional filter bank constructed by Bamberger and Smith [8]. They are obtained by eliminating the downsamplers and upsamplers and interpolating the filters instead, which preserves translation invariance. Using the NSDFB to decompose a sub-band image at a given scale into $n$ levels of directions yields $2^n$ directional sub-band images (e.g., four for $n = 2$), each of the same size as the original image.
IV. PCNN TRANSFORM
PCNN is a feedback network consisting of a number of interconnected neurons, each composed of three parts: a receiving portion, a modulation portion, and a pulse generator. For image processing, PCNN is a partially connected, single-layer, two-dimensional network in which the number of neurons equals the number of input image pixels [9]. The structure of PCNN is shown in Fig. 2, and its expression is given in Eq. (4).
Figure 2. The structure of PCNN
$$\begin{aligned} F_k(m,n) &= S(m,n) \\ L_k(m,n) &= e^{-\alpha_L} L_{k-1}(m,n) + V_L \sum_{a,b} W_{mn,ab}\, Y_{k-1}(a,b) \\ U_k(m,n) &= F_k(m,n)\left[1 + \beta L_k(m,n)\right] \\ \theta_k(m,n) &= e^{-\alpha_\theta} \theta_{k-1}(m,n) + V_\theta Y_{k-1}(m,n) \\ Y_k(m,n) &= \operatorname{step}\left(U_k(m,n) - \theta_k(m,n)\right) \end{aligned} \quad (4)$$

Here $S(m,n)$ is the external input of the neuron, i.e., the pixel value at $(m,n)$; $L(m,n)$ is the total linking input; $U(m,n)$ is the internal activity; $Y(m,n)$ is the output; $\alpha_L$ and $\alpha_\theta$ are time constants; $V_L$ and $V_\theta$ are amplification factors; $\beta$ is the linking strength; $W$ is the linking weight matrix; and $\theta(m,n)$ is the threshold, which starts from a pre-set value and then decays exponentially.
If $U_k(m,n) > \theta_k(m,n)$, the neuron generates a pulse after the $k$-th iteration; this is called an ignition. The number of ignitions at $(m,n)$ represents the information content of that point.
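A compact sketch of the iteration of Eq. (4), returning per-pixel firing (ignition) counts. The number of iterations, the 3x3 linking kernel `W`, the initial threshold, and the linking strength `beta` are assumptions not fixed by the paper; the remaining defaults follow the parameter values reported in Section VI:

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_fire_counts(S, iterations=200, beta=0.2, V_L=0.2, V_theta=20.0,
                     alpha_L=1.0, alpha_theta=2.0):
    """Iterate the PCNN of Eq. (4) and accumulate per-pixel ignition counts."""
    W = np.array([[0.5, 1.0, 0.5],     # assumed linking weights; the paper
                  [1.0, 0.0, 1.0],     # does not specify W
                  [0.5, 1.0, 0.5]])
    F = S.astype(float)                # feeding input: F_k(m,n) = S(m,n)
    L = np.zeros_like(F)
    Y = np.zeros_like(F)
    theta = np.ones_like(F)            # pre-set threshold (assumed initial value)
    counts = np.zeros_like(F)
    for _ in range(iterations):
        L = np.exp(-alpha_L) * L + V_L * convolve(Y, W, mode='constant')
        U = F * (1.0 + beta * L)                           # internal activity
        theta = np.exp(-alpha_theta) * theta + V_theta * Y
        Y = (U > theta).astype(float)                      # ignition step
        counts += Y
    return counts
```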
V. FUSION RULE
In the process of image fusion, the fusion rule is a key factor in obtaining high-quality fused images. In the following, we apply an effective fusion rule.
A. Image decomposition
We obtain the luminance component I of the visible image by applying the IHS transform of Eq. (1); define the I component as image A and the infrared image as B. Then A and B are decomposed into three bands by the NSCT transform. The low-frequency sub-bands are denoted $A(0)$ and $B(0)$, and the bandpass and high-frequency sub-bands are denoted $A_{i,j}$ and $B_{i,j}$, where $A_{i,j}$ is the sub-band of the $i$-th passband in the $j$-th direction.
B. For the low frequency
The low-frequency sub-bands are fused by the variance-weighted algorithm below:

$$P_1 = \frac{V_1}{V_1 + V_2}, \qquad P_2 = \frac{V_2}{V_1 + V_2} \quad (5)$$

$$C(0) = P_1 A(0) + P_2 B(0) \quad (6)$$
where $V_1$ and $V_2$ are the variances of $A(0)$ and $B(0)$, respectively, and $C(0)$ denotes the low-frequency coefficients of the fused image C.
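A direct sketch of Eqs. (5)-(6), with assumed names:

```python
import numpy as np

def fuse_low_frequency(A0, B0):
    """Variance-weighted averaging of the low-frequency sub-bands, Eqs. (5)-(6)."""
    V1, V2 = A0.var(), B0.var()
    P1 = V1 / (V1 + V2)        # Eq. (5)
    P2 = V2 / (V1 + V2)
    return P1 * A0 + P2 * B0   # Eq. (6)
```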
C. For the bandpass and high frequency
The bandpass and high-frequency sub-bands are fused using the PCNN. After $k$ iterations we obtain the firing maps $Y_A(m,n)$ and $Y_B(m,n)$; the fusion threshold $\delta$ is an arbitrarily small number. The fusion rule is given by Eq. (7):

$$C_{i,j}(m,n) = \begin{cases} A_{i,j}(m,n), & \left|Y_A(m,n) - Y_B(m,n)\right| > \delta \ \text{and}\ Y_A(m,n) > Y_B(m,n) \\ B_{i,j}(m,n), & \left|Y_A(m,n) - Y_B(m,n)\right| > \delta \ \text{and}\ Y_B(m,n) > Y_A(m,n) \\ \left[A_{i,j}(m,n) + B_{i,j}(m,n)\right]/2, & \left|Y_A(m,n) - Y_B(m,n)\right| \le \delta \end{cases} \quad (7)$$
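A sketch of the selection rule of Eq. (7), assuming the firing-count maps come from a PCNN run such as the sketch in Section IV:

```python
import numpy as np

def fuse_band(A_ij, B_ij, Y_A, Y_B, delta=1e-6):
    """Coefficient selection by PCNN firing counts, Eq. (7)."""
    C = 0.5 * (A_ij + B_ij)                      # case |Y_A - Y_B| <= delta
    diff = np.abs(Y_A - Y_B)
    C = np.where((diff > delta) & (Y_A > Y_B), A_ij, C)
    C = np.where((diff > delta) & (Y_B > Y_A), B_ij, C)
    return C
```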
D. The inverse transform
After the steps above, the fused coefficients $C(0)$ and $C_{i,j}(m,n)$ are generated. First, the fused intensity component $I_{new}$ is obtained by the inverse NSCT transform. Then the fused image C is obtained by applying the inverse IHS transform of Eq. (3).
VI. SIMULATION
To evaluate the performance of the proposed image fusion method, two experiments have been performed on two sets of visible and infrared images: Octec, of size 224*168, and Trees, of size 800*600. For the proposed fusion rule, the experimental results show that a relatively better fused image is obtained with $V_L = 0.2$, $V_\theta = 20$, $\alpha_L = 1$, and $\alpha_\theta = 2$.
VII. EVALUATION CRITERIA
A. Entropy
Entropy measures the amount of information contained in an image:

$$E = -\sum_{i} P(i) \log_2 P(i) \quad (9)$$

in which $P(i)$ is the probability of the pixel value $i$. Higher entropy indicates more information in the fused image.
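A sketch of Eq. (9) for an 8-bit image; the 256-bin histogram is an assumption:

```python
import numpy as np

def entropy(img):
    """Shannon entropy, Eq. (9), for an 8-bit grayscale image."""
    hist = np.bincount(img.astype(np.uint8).ravel(), minlength=256)
    p = hist / hist.sum()        # P(i): probability of pixel value i
    p = p[p > 0]                 # empty bins contribute 0 to the sum
    return -np.sum(p * np.log2(p))
```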
B. Average correlation coefficient
It reflects the similarity and spectral preservation between the fused image and the two original images. A higher average correlation coefficient indicates that the fused image retains more information from the source images.
$$\bar{C} = \frac{C_{AC} + C_{BC}}{2} \quad (10)$$

$$C_{XY} = \frac{\sum_{i=1}^{m}\sum_{j=1}^{n}\left(X_{ij} - \bar{X}\right)\left(Y_{ij} - \bar{Y}\right)}{\sqrt{\sum_{i=1}^{m}\sum_{j=1}^{n}\left(X_{ij} - \bar{X}\right)^2 \sum_{i=1}^{m}\sum_{j=1}^{n}\left(Y_{ij} - \bar{Y}\right)^2}} \quad (11)$$
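A sketch of Eqs. (10)-(11), with assumed function names:

```python
import numpy as np

def correlation(X, Y):
    """Correlation coefficient C_XY, Eq. (11)."""
    x = X - X.mean()
    y = Y - Y.mean()
    return (x * y).sum() / np.sqrt((x**2).sum() * (y**2).sum())

def average_correlation(A, B, C):
    """Average correlation coefficient, Eq. (10)."""
    return 0.5 * (correlation(A, C) + correlation(B, C))
```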
C. Spatial Frequency
Spatial frequency evaluates image clarity and reflects the overall activity level of an image. It is defined as:
$$SF = \sqrt{\frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \left( \left[F(i,j) - F(i,j-1)\right]^2 + \left[F(i,j) - F(i-1,j)\right]^2 \right)} \quad (12)$$
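A sketch of Eq. (12); `np.diff` computes the horizontal and vertical first differences:

```python
import numpy as np

def spatial_frequency(F):
    """Spatial frequency, Eq. (12)."""
    F = F.astype(float)
    rf = np.diff(F, axis=1) ** 2     # [F(i,j) - F(i,j-1)]^2, row frequency
    cf = np.diff(F, axis=0) ** 2     # [F(i,j) - F(i-1,j)]^2, column frequency
    return np.sqrt((rf.sum() + cf.sum()) / F.size)
```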
The figures shown in Table 1 and Table 2 indicate that the fused image obtained by the method proposed in this paper achieves the highest entropy, average correlation coefficient, and spatial frequency.
VIII. CONCLUSIONS
From the above analysis and experimental results, we can conclude that the proposed method preserves more valuable information and more of the outstanding detail characteristics in the fused images, and yields better visual effects. However, the method takes a long time and is not suitable for real-time processing. In order to balance fusion time and fusion effect, the high-frequency sub-bands were decomposed into only four directions. This is just one more direction than the DWT and lifting-DWT methods, so the improvement in fusion effect is not particularly large. The NSCT can decompose an image into any number of directions, and the more directions are used, the better the fusion effect; however, combined with the PCNN integration, this requires more time.
ACKNOWLEDGMENT
This study was partially supported by the National
Natural Science Foundation of China(Grant No. 61077079),
and by the Ph.D. Programs Foundation of Ministry of
Education of China(Grant No. 20102304110013)and by
the key program of Heilongjiang Nature Science foundation,
and by the International Exchange Program of Harbin
Engineering University forInnovation-oriented Talents
Cultivation. .
REFERENCES
[1] A. L. da Cunha, J. Zhou and M. N. Do, "The nonsubsampled contourlet transform: Theory, design, and applications," IEEE Trans. Image Processing, vol. 15, no. 10, pp. 3089–3101, October 2006.
[2] R. Eslami and H. Radha, "Translation-invariant contourlet transform and its application to image denoising," IEEE Trans. Image Processing, vol. 15, no. 11, pp. 3362–3374, November 2006.
[3] R. Eckhorn, H. J. Reitboeck, M. Arndt and P. Dicke, "Feature linking via synchronization among distributed assemblies," Neural Computation, vol. 2, no. 3, pp. 293–307, 1990.
[4] B. C. Xu and Z. Chen, "A multi-sensor image fusion algorithm based on PCNN," Proc. WCICA 2004, pp. 3679–3682, June 2004.
[5] C. X. Lu, Q. Pan and Y. M. Cheng, "New image fusion method based on IHS and wavelet transform," Application Research of Computers, vol. 25, no. 2, pp. 3690–3695, February 2008.
[6] L. Tang, F. Zhao and Z. G. Zhao, "The nonsubsampled contourlet transform for image fusion," Proc. ICWAPR 2007, pp. 305–310, November 2007.
[7] G. F. Xie, J. W. Yan, Z. Q. Zhu and B. G. Chen, "Image fusion algorithm based on neighbors and cousins information in nonsubsampled contourlet transform," Proc. ICWAPR 2007, pp. 1797–1802, November 2007.
[8] R. H. Bamberger and M. J. T. Smith, "A filter bank for the directional decomposition of images: Theory and design," IEEE Trans. Signal Processing, vol. 40, no. 4, pp. 882–893, 1992.
[9] J. L. Johnson and D. Ritter, "Observation of periodic waves in a pulse-coupled neural network," Optics Letters, vol. 18, no. 15, pp. 1253–1255, 1993.
[10] H. Chen and Y. Y. Liu, "An infrared image fusion algorithm based on lifting wavelet transform," Laser and Infrared, vol. 39, no. 1, January 2009.