Complex Function
Y. Khaustov et al., Results in Optics 2 (2021) 100038
Keywords: Visible and infrared imaging; Image processing; Image fusion; Image quality assessment

Abstract: A complex function is proposed as a template for the fusion of visual and infrared images such that one of the partial images is chosen as the real part and the other one as the imaginary part of the complex function. The amplitude and the phase of the complex function are two generic algorithms for the synthesis of the amplitude and phase images. Several no–reference quality indices are measured for the fused amplitude and phase images and compared to those obtained for the partial images and for the images fused with the average fusion, Laplace pyramid transform, and modified discrete wavelet transform methods for four observation scenes.
1. Introduction

The invention of pixel image acquisition opened wide possibilities for image processing. One of them is the fusion of either mono–modal images, collected by a single camera at different acquisition conditions, or multi–modal images, obtained in different ranges of the electromagnetic spectrum.

The simplest fusion method is based on the addition of pixel data from partial images by the algorithms of simple addition, weighted fusion, and averaged fusion (AF). Though many advanced fusion techniques have been developed (Malviya and Bhirud, 2009; Stathaki, 2008; Mitchell, 2010; Liu et al., 2015; Cui et al., 2015; Miao et al., 2011; Ma et al., 2019), these three algorithms of the addition fusion (AdF) method remain popular. The simplicity and robustness of their algorithms, based on the mathematical operation of addition, make the fusion results easy to predict. Usually, the AdF method is used as a test method with which the fusion of newly collected images starts. The permanent interest of researchers in this method is confirmed by the fact that the results obtained with the AdF method are often used as reference data to demonstrate the performance of a new method; the AdF method is mentioned in practically all earlier and recent review papers. The same concerns the Laplace Pyramid Transform (LPT) and Discrete Wavelet Transform (DWT) methods, which are also quite popular, and for this reason we use these three methods as reference methods.

Recently proposed techniques for the fusion of visible and infrared images are based on sophisticated image processing procedures, such as multi–scale transform with guided filtering (Li et al., 2013), visual attention guided image fusion with sparse representation (Yang and Li, 2014), a combination of multi–scale transform and sparse representation (Liu et al., 2015), a subspace approach based on fourth–order partial differential equations (Bavirisetti et al., 2017), saliency detection (Bavirisetti and Dhuli, 2016), and a pulse coupled neural network combined with the non–subsampled shearlet transform and a spatial frequency metric (NSST–SF–PCNN) (Kong et al., 2014). All these modern image fusion methods are by default designed for office work and require high–level skills in programming and, in particular, knowledge of image processing. By contrast, the AdF method is an express method capable of real–time fusion, being non–demanding in processing time, computer power, and the programming skills of the operator, and thus it is suitable for special needs for which the quality of some places on the image can be sacrificed if the visibility of a target is enhanced. The latter concerns imaging for military target sightseeing systems, where the high contrast of the target is of key importance while the background details might be of no interest; space surveillance; geodesic and marine reconnaissance, when tracking moving objects; microscopy tracking of particles, including living cells in soft matter physics; and UV microscopy and optical confocal microscopy of biological objects with fluorescent markers. However, lowered or even vanished contrast (Khaustov et al., 2019) is the main drawback of the AdF method.
In this paper, we propose the method of complex function image fusion (CFIF), briefly announced in (Khaustov et al., 2020), which is a generalization of the AdF method in the same way as the addition of complex functions is a generalization of the addition of real functions. Below we demonstrate that the requirements for an express fusion method can be achieved with the phase algorithms of the CFIF method, which also provide enhanced visibility of a target intensively emitting in the IR spectrum. One of the features which determine the place of the proposed CFIF method among other existing image fusion methods is the simplicity of its amplitude and phase algorithms. This is a fusion method processing data at the lowest, pixel level. The simplicity and speed of the fusion algorithm are important for the development of compact cameras in which the partial images are collected in different spectral regions and then a single fused image of improved quality is shown on the camera display. Such compact smart cameras are preferable when bulky registration systems cannot be used, for example, outside a hospital for a non–transportable patient, or for military needs in target sightseeing systems of motor vehicles (tanks, aircraft, ships, submarines), where the size limits for the devices are of crucial importance. The simplicity of the algorithm is also important when the operator who uses the fused image (for military targeting, for example) has no special skills in programming.

Since the proposed CFIF method does not pretend to be an alternative to the image fusion methods designed for office work, nor to those for fine–art photography, we compare the proposed CFIF method with the simple fusion methods AdF, LPT, and DWT, which can be considered express fusion methods. In this paper, by quantitative assessment of the images obtained using the CFIF method, we demonstrate that the CFIF method is an advanced alternative to the AdF method as an express fusion method, retaining its simplicity while providing image quality comparable to that obtained with other fusion techniques.

In Section 2, we explain the motivation to choose a complex function as a template for image fusion and present the principles of the CFIF method. Section 3 introduces the quality indices used for the quantitative no–reference characterization of the partial and fused images. Section 4 presents results of quality assessment for four pairs of Vis– and IR–images, which are fused by the amplitude and phase algorithms of the CFIF method as well as by three popular fusion methods: Average Fusion (AF) (Malviya and Bhirud, 2009; Stathaki, 2008), Laplace Pyramid Transform (LPT), and Discrete Wavelet Transform (DWT) (Mitchell, 2010; Ma et al., 2019; Hryvachevskyi, 2018). The advantages and limitations of the CFIF method in comparison with AF, LPT, and DWT are discussed in Section 5. Section 6 concludes our results.

2. Complex function as a template for image fusion

2.1. Why a complex function?

In most cases, the images collected from different detection channels are fused in pairs for the same scene of observation. If multiple images of the same scene are available from two different channels (say, visible (Vis–) and infrared (IR–) images), one is faced with uncertainty in the order of selection of pairs for image fusion. In such a case, intuitively one would decide first to pre–fuse the images separately within the set of images from each channel, and then to fuse the pair of two images resulting from the pre–fusion within the channels. It is worth noting that such a fusion operation resembles the mathematical operation of summation of complex functions, for which the real (imaginary) part of the sum of complex functions is the sum of the real (imaginary) parts of the summed complex functions. In other words, the real and imaginary parts of the summed functions are summed separately, and the resulting real and imaginary parts of the complex function can be further treated according to the rules of complex function calculus. According to the CFIF method, one of the two pre–fused images from the two channels is chosen as the real part and the other one as the imaginary part of the complex function. It should be noted that in the analogy between the complex functions and the CFIF method, the pre–fusion of the images within the set of images from each channel is not necessarily the simple addition of images. The pre–fusion can be done with any of the known image fusion methods or a combination of different methods. A recent survey on image fusion methods for infrared and visible images can be found in (Ma et al., 2019). To be explicit, when introducing the principles of the CFIF method in this paper, we assume that the multiple images have been pre–fused within each channel, and we deal with the two resulting Vis– and IR–images, u and υ, respectively.

The employment of a complex function as a template for the fusion of multimodal (here Vis– and IR–) images is also inspired by the fact that multimodal images carry different information on the same scene, picturing the scene in quite different ways. For example, images obtained with a conventional digital camera in visible light are positive images with a contrast distribution similar to that of the real scene observed by the naked eye. With a thermal camera, usually, the image of the same scene looks rather like a negative image. The origin of this difference is rooted in the different principles of registration of the visual and thermal cameras. The visual image is obtained due to the interplay between light transmission and reflection, whereas the thermal camera registers the IR light emitted and reflected by the objects. As a result, with the conventional visual camera an object of interest (a target) in most cases looks dark on a bright background (positive contrast), whereas with the thermal camera the target usually looks bright on a dark background (negative contrast). Because of the opposite signs of the contrasts in Vis and IR imaging, their simple addition might result in significant lowering or even vanishing of the contrast of the fused image (Khaustov et al., 2019). Intuitively, it is understood that the simple addition of the Vis and IR images should not be performed, because the Vis and IR images represent two different sets of images that carry information on different optical properties, and because they cannot be transformed into each other by changing the conditions of observation (lighting, focusing, exposure, etc.).

Instead, the relation between the Vis and IR images resembles such physical properties as the complex index of refraction n + iκ, with n and κ being respectively the refractive and absorption indices; the complex elliptical birefringence Δn_l + iΔn_c, with Δn_l and Δn_c being respectively the linear and circular birefringence in optics; the complex viscoelastic moduli in hydrodynamics; the complex frequency and complex resistance in electronics; and many other properties described by complex functions. In all these cases, the different pairs of physical properties are different reactions to the same action. For example, the pairs of material optical parameters (n, κ) and (Δn_l, Δn_c) describe different phenomena of the light–matter interaction, and for this reason they are not added directly. Instead, the value √(n² + κ²) is the refractive index of a light–absorbing material, and √(Δn_l² + Δn_c²) is the elliptical birefringence of a birefringent gyrotropic optical material. Analogously, the Vis–image is formed mostly due to light reflection, which is related to the real relative refractive index n, whereas the IR image is formed due to the irradiation of IR light, which is treated in optics as negative light absorption (the convention of irradiation as negative absorption is popular in laser optics, but not only there).

The three arguments expressed above, namely (1) the pre–fusion of multiple images in two channels, (2) the opposite signs of the contrast in the Vis and IR images, and (3) the different optical phenomena behind the formation of Vis and IR images, inspire us to choose a complex function as a template for the fusion of Vis and IR images. Taking into account that a complex function can be uniquely defined by its amplitude and phase, one can construct one amplitude algorithm √(u² + υ²) and, at least, four different phase algorithms to fuse the two partial images.
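Anticipating the algorithms formalized in Section 2.2, the construction can be sketched in a few lines of code. The following is a minimal illustration of ours (not code from the paper), assuming u and v are co–registered grayscale brightness tables stored as floating–point NumPy arrays scaled to [0, 1]:

```python
import numpy as np

def cfif_fuse(u, v, eps=1e-5):
    """Sketch of the CFIF template: u (Vis) as real part and v (IR) as
    imaginary part, or vice versa; returns the amplitude image and the
    four raw phase images."""
    amp = np.sqrt(u**2 + v**2)      # amplitude algorithm, |psi|
    t_neg = v / (u + eps)           # tangent-of-phase image for psi_neg = u + i*v
    t_pos = u / (v + eps)           # tangent-of-phase image for psi_pos = v + i*u
    phi_neg = np.arctan(t_neg)      # phase image for psi_neg
    phi_pos = np.arctan(t_pos)      # phase image for psi_pos
    return amp, t_neg, t_pos, phi_neg, phi_pos
```

The small bias eps guards against division by zero; its role is discussed in Section 2.2.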
2.2. CFIF algorithms

In the computer format, a digital image is presented in the form of a brightness table. The pair of coordinates (x, y) of a pixel in a row and column of the brightness table corresponds one–to–one to the in–plane coordinates of a point of the computer image, and thereby the brightness tables of the images from the visual (Vis) and infrared (IR) channels can be considered as functions u(x, y) and υ(x, y), respectively. This pair of functions corresponds to the same real scene imaged by two different channels and thus can be used to form a complex function ψ(x, y), in which either u(x, y) or υ(x, y) is chosen as the real part and the other one as the imaginary part. Then we say that the complex function ψ(x, y) describes a virtual complex fused image. Since there is no restriction on which of the two functions (u or υ) has to be chosen as the real part and which one has to be the imaginary part, the complex function ψ can be constructed in two forms (Khaustov et al., 2020)

ψ_neg = u + iυ   (1)
ψ_pos = υ + iu   (2)

where i = √(−1) is the imaginary unit, and the subscript indices neg and pos correspond to the negative and positive image formats in the conventional sense of positive and negative photography. We will return to the notions of positive and negative fused images in Section 4.1.2.

In the framework of complex function calculus, a complex function can be expressed either in the rectangular form, Eqs. (1) and (2), or in the trigonometric or exponential forms

ψ_neg = |ψ|(cos φ_neg + i sin φ_neg) = |ψ| e^{iφ_neg}   (3)
ψ_pos = |ψ|(cos φ_pos + i sin φ_pos) = |ψ| e^{iφ_pos}   (4)

where

|ψ| = √(u² + υ²)   (5)

is the amplitude of the complex function,

tan φ_neg = υ/u = t_neg   (6)
tan φ_pos = u/υ = t_pos   (7)

with

φ_neg = arctan(υ/u)   (8)
φ_pos = arctan(u/υ)   (9)

being the phases of the complex functions ψ_neg, Eq. (1), and ψ_pos, Eq. (2). Therefore, using the template of a complex function, the partial Vis– and IR–images can be fused either as an amplitude (Amp–) image, Eq. (5), or as four phase images, namely two positive (t_pos, Eq. (7); φ_pos, Eq. (9)) and two negative (t_neg, Eq. (6); φ_neg, Eq. (8)) phase t– and φ–images.

It should be noted that the mathematical operation of division in Eqs. (6)–(9) implies divergence to infinity if the brightness value of the function standing in the denominator falls to zero. Such a situation is quite plausible, since in many cases the images are taken at low illumination conditions. To overcome this computational problem, we add a small parameter ε ≪ 1 to the denominators in Eqs. (6)–(9):

t^ε_neg = υ/(u + ε)   (10)
t^ε_pos = u/(υ + ε)   (11)
φ^ε_neg = arctan[υ/(u + ε)]   (12)
φ^ε_pos = arctan[u/(υ + ε)]   (13)

In the negative algorithms, Eqs. (10) and (12), one takes ε ≪ u, and in the positive algorithms, Eqs. (11) and (13), the corresponding condition is ε ≪ υ. It is worth noting that, due to the operation of division, the maximal brightness calculated from Eqs. (10)–(13) can become higher than 1, while for the partial images the highest possible brightness is equal to 1. For the t–images, Eqs. (10) and (11), the maximal calculated brightness can be as high as 1/ε, which, for example, for ε = 10⁻⁵ is 10⁵. For the φ–images, the highest possible calculated brightness approaches π/2 ≈ 1.57.

On the one hand, the extension of the range of possible brightness values potentially enhances the contrast of the fused image in comparison with the partial images (Khaustov et al., 2019, 2020). On the other hand, all brightness values higher than 1 will be clipped by a computer program when plotting the image; such pixels will be imaged as points of the maximal brightness equal to 1. As a result, some of the image areas may become blown out, such that white objects will appear invisible on the white background. To avoid the effect of clipping, the brightness values should be normalized. For the t– and φ–images there are at least two possibilities for normalization. The first possibility is to increase the value of the parameter ε in Eqs. (10)–(13). The second possibility is to divide the calculated brightness values by their maximal value. For the amplitude algorithm, the normalization can be done either by dividing by √2 or by dividing by the maximal brightness value of the given fused image. Further, throughout the text, we call the non–normalized algorithms given by Eqs. (5) and (10)–(13) the raw algorithms, to distinguish them from the normalized algorithms. The normalization of the image brightness will be discussed in Section 4.

3. Image quality indices

To perform a quality assessment of the partial and fused images, we employ quantitative objective no–reference metrics measuring Contrast (C), brightness Gradient (G), Standard Deviation (SD), Number of Brightness Levels (Nb), and image Entropy, as well as combinations of the indices, such as the Integral Index (InI) (Hryvachevskyi, 2018; Bogdanov and Romanov, 2012; Bondarenko et al., 2017), the Index of Gradient Transfer Qg (Xydeas and Petrović, 2000), and the Index of Block Similarity Qb (Cvejic et al., 2005; Wang and Bovik, 2002). These eight indices can be classified into three groups. The first group includes the image quality indices C, SD, G, Nb, and the InI index, which by their physical sense are measures of the coordinate variation of brightness (CVB group). The second group includes Qb and Qg, which are measures of the efficiency of transfer (ET) of the CVB characteristics of the partial images to the fused image (ET–CVB group). The image entropy is a measure of the amount of information in the image and represents the third group.

4. Examples of image fusion

The pairs of Vis and IR partial images and the results of their fusion by the amplitude and phase CFIF algorithms, as well as by the algorithms of Average Fusion (AF), Laplace Pyramid Transform (LPT), and Discrete Wavelet Transform (DWT), for four different scenes, 'NAA Campus', 'Guanabara Bay', 'Hangar' and 'Camouflage', are presented in Table 1.

The partial Vis (full–color) and IR (monochrome) images for the scene 'NAA Campus' were obtained using, respectively, the visual digital camera Nikon D3300 and the thermal imaging sight ARCHER TSA–9/75–640. Before fusion, the IR 'NAA Campus' image was coordinate–registered with respect to its Vis counterpart via overlapping of their key points. For all other scenes, the partial Vis and IR coordinate–registered images were downloaded from the Visible–Infrared Database (Visual and Infrared Database image Fusion, 2020).
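Before turning to the examples, here is a concrete sketch of how a few of the CVB–group measures of Section 3 can be computed. These are simplified definitions of ours, not necessarily those of the cited sources, which should be consulted for the exact formulations; a grayscale brightness table in [0, 1] is assumed:

```python
import numpy as np

def std_deviation(img):
    """Standard deviation (SD) of the brightness table."""
    return float(np.std(img))

def mean_gradient(img):
    """Mean magnitude of the brightness gradient (G)."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def entropy(img, levels=256):
    """Shannon entropy of the brightness histogram, in bits."""
    hist, _ = np.histogram(img, bins=levels, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))
```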
Table 1. Partial and fused images.
4.1. 'NAA Campus'

The partial and fused images of the scene 'NAA Campus' are shown in the first column of Table 1, and the quality indices measured for the corresponding images are shown in Fig. 1 by squares.

Fig. 1. Quality indices of the partial images (1 – Vis, 2 – IR), of the images fused by the literature methods (3 – Average Fusion, 4 – LPT, 5 – DWT), and of the images fused by the CFIF algorithms (6 – amplitude image, 7 – t–image at ε = 10⁻⁵, 8 – t–image at ε = 0.2 or ε = 0.6 as indicated in Table 1, 9 – positive φ–image) for four scenes: 'NAA Campus' (squares), 'Hangar' (circles), 'Guanabara Bay' (triangles) and 'Camouflage' (stars).

In the framework of the CFIF method, one can fuse images by the raw (non–normalized) and normalized amplitude (Amp–) and phase (t– and φ–) algorithms. The normalization lowers the brightness values, thereby lowering the CVB indices. The latter statement will be illustrated below with the experimental data measured for the raw and normalized amplitude algorithms. Since the normalization lowers the CVB quality indices, the normalization is needed only if the brightness values of the given raw fused image exceed the nominal maximum of the brightness value prescribed by the image format (255 on the absolute scale or 1 on the normalized scale).

The normalization can be performed by dividing the calculated brightness values by their maximal value measured for the given image (floating normalization) or by the nominal maximum defined by the algorithm (nominal normalization). For example, the nominally possible maximum for the raw amplitude algorithm, Eq. (5), on the normalized brightness scale is √2; for the raw φ–algorithms, Eqs. (12) and (13), it is arctan(∞) = π/2; for the t–algorithms, Eqs. (10) and (11), it diverges to ∞. Normalization of the images fused by the CFIF algorithms will be discussed in detail and illustrated on the example images below in this section.

4.1.1. Amplitude algorithm

The nominal–normalized amplitude (nAmp–) algorithm

|ψ|_n = (1/√2) √(u² + υ²)   (14)

is nothing else but the root mean square (RMS) of the partial images, which is an analog of the arithmetic average employed in the AF method. Fig. 2 shows the CVB indices measured for the raw (filled symbols) and nominal–normalized (open symbols) amplitude algorithms, where the numbers on the horizontal axis correspond respectively to the scenes 1 – 'NAA Campus', 2 – 'Hangar', 3 – 'Guanabara Bay' and 4 – 'Camouflage'.

Open symbols in Fig. 2, which correspond to the nominal–normalized amplitude (nAmp) algorithm, Eq. (14), appear below the filled symbols, which correspond to the raw amplitude (rAmp) algorithm, Eq. (5). The latter indicates that the raw image is of higher quality than the nominal–normalized amplitude image.

Analysis of the data shown in Fig. 1 reveals that the rAmp algorithm applied to the partial images of the 'NAA Campus' scene gives a considerably higher quality of the fused image in comparison with the AF method (a counterpart of the rAmp algorithm in terms of averaging) and a quality comparable to that of the LPT and DWT methods.

4.1.2. Phase t–algorithms

The positive raw t–image (rt–image), fused from the partial Vis (Table 1 and Fig. 3a) and IR (Table 1 and Fig. 3b) images using Eq. (11) for the 'NAA Campus' scene, is shown in the 8th row of the 1st column in Table 1 (also in Fig. 3c). It should be noted that the prevailing blue color of the image is the result of the division of the {R,G,B} brightness values of the full–color Vis–image by the {R,G,B} values of the monochrome IR–image. Consequently, the colors of the rt–image can be considered pseudo–colors, appearing due to the mathematical fusion algorithm. It is worth noting that fusion by the algorithm given by Eq. (10) gives the negative rt–image (Fig. 3d) with a prevailing yellow color, which is the complementary color in a negative image to the blue color in a positive image (Fig. 3c). By increasing the value of the ε parameter in Eqs. (11) and (10) from 10⁻⁵ (Fig. 3c,d) to 0.2 (Fig. 3e,f) one can transform the colors in the
positive rt–image towards the natural colors (Fig. 3e), which are sim-
ilar to those observed in the partial Vis–image (Fig. 3a, also the 1–st
row of the 1–st column in Table 1) or the colors of the negative rt–im-
age to the monochrome appearance (Fig. 3f) similar to that in the par-
tial IR–image (Fig. 3b, also the 2–nd row of the 1–st column in
Table 1). Data for the quality indices (shown by squares) for the pos-
itive rt–image fused by Eq. (11) with ε = 10⁻⁵ correspond to the num-
ber 7 on the horizontal axes in Fig. 1. Contrast C of the rt–image
appears to be, at least, twice higher in comparison with the corre-
sponding values for the partial images as well as in comparison with
other fused images. Such an extraordinary result is a consequence of
the property of the rt–algorithm, which is based on the mathematical
operation of division. It is shown in (Khaustov et al., 2020) that if an
object is imaged on the Vis and IR partial images with the local con-
trasts equal by absolute values but opposite by sign, then the local con-
trast of the fused rt–image is doubled, whereas for the AF–image the
local contrast becomes zero (Khaustov et al., 2019, 2020). It is under-
stood that in real images such an ideal situation might be met in only a few places (if any); but definitely, the opposite sign of the local contrasts in the partial images enhances the contrast of the rt–image while lowering it for the AF–image. Indeed, the observed enhancement of the contrast for the rt–image of the 'NAA Campus' scene corresponds to the lowering of the contrast of the AF–image in comparison with the contrasts of both partial images (Fig. 1).

Fig. 2. CVB indices C (squares), Nb (circles), SD (triangles) and G (rhombs), calculated for the images fused by the raw (filled symbols) and normalized (open symbols) amplitude algorithms for the scenes 1 – 'NAA Campus', 2 – 'Hangar', 3 – 'Guanabara Bay' and 4 – 'Camouflage'.

Fig. 3. (Color online) 'NAA Campus' images: partial (a) Vis– and (b) IR–images; (c) positive rt–image fused by Eq. (11) and (d) negative rt–image fused by Eq. (10), both with ε = 10⁻⁵; (e) positive and (f) negative nt–images fused respectively by Eqs. (11) and (10), both with ε = 0.2.

Since the rt–algorithm is not normalized, according to Eq. (11) the calculated brightness values can be higher than 1. Indeed, the measured maximal {R,G,B} brightness values for the rt–image of the 'NAA Campus' scene are respectively {98039.2, 109804.0, 874510.0}, which means that some places on the image are clipped. We find that the division of the brightness values by the corresponding maximal values in the RGB channels makes the given rt–image black everywhere except in a few places. Therefore, the floating normalization should not be applied to the given rt–image.

An alternative possibility for the normalization of the rt–image is the variation of the ε parameter in Eq. (11). The normalized nt–image of the 'NAA Campus' scene for the value ε = 0.2 is shown below the rt–image (obtained with ε = 10⁻⁵) in Table 1 (also in Fig. 3e). The quality indices measured for the nt–image with ε = 0.2 correspond to the number 8 on the horizontal axes in the graphs shown in Fig. 1. The data presented in Fig. 1 show that the ε–normalization considerably improves the quality of the image in terms of the SD, Nb, G, and InI CVB indices without lowering the informativity. Visual inspection also confirms that the quality of the nt–image (Fig. 3e) is improved in terms of colors, which are closer to the natural colors of the partial Vis–image (Fig. 3a), while preserving the details of both images.

Nevertheless, although the overall quality (by quality indices and visual appearance) of the nt–image is better in comparison with that of the rt–image, in some applications the non–normalized rt–image still might be preferable when high contrast is demanded, for example in cases of target tracking for military sightseeing systems (Khaustov et al., 2019; Khaustov et al., 2019b) or particle and living–cell tracking in scientific experiments (Peng et al., 2016).

4.1.3. Phase φ–algorithms

The positive raw φ–image (rφ–image), obtained using Eq. (13) for the 'NAA Campus' scene, is shown in the last row of the 1st column in Table 1. The data for the quality indices (shown by squares in Fig. 1) for the positive rφ–image correspond to the number 9 on the horizontal axes.

As expected from the form of the rφ–algorithm, given by Eq. (13), the maximal brightness values for the {R,G,B} channels are measured to be {1.57069, 1.57071, 1.57078}, which for all three channels are close to the value π/2. The nominally normalized φ–image can be built by the nφ–algorithm of the form

φ^{εn}_pos = (2/π) arctan[u/(υ + ε)]   (15)

Concluding this subsection, in which we dealt with the images of the scene 'NAA Campus', we state that the images fused by the CFIF amplitude and phase algorithms are of better quality, or at least not worse, by most of the quantitative indices and by subjective visual inspection, in comparison with the images fused using the AF, LPT, and DWT methods. The amplitude CFIF algorithm shows better quality than the AF algorithm, of which the amplitude CFIF algorithm is an RMS analog. Therefore, the amplitude CFIF algorithm is a good alternative to the AF algorithm, bearing the algorithmic handiness and simplicity of the AF method but producing considerably better quality of fusion. We intentionally do not oppose the quality of the amplitude and phase fused images. They are independent, mutually complementary images of the CFIF method, similarly as the amplitude and phase are mutually complementary characteristics of a complex function. This statement will be detailed in Section 5.

4.2. 'Guanabara Bay'

The addresses of the synchronized and registered partial Vis– and IR–images in the Database (Visual and Infrared Database image Fusion, 2020) are respectively 'Guanabara Bay Outdoor/take_1/VIS/VIS_4052.jpg' and 'Guanabara Bay Outdoor/take_1/IR/IR_4052.jpg'. The partial Vis– and IR–images and the results of their fusion by different methods are shown in the 2nd column of Table 1. Quality indices for the partial and fused images are shown in Fig. 1 by upward triangles.

The positive rt–image fused by Eq. (11) with ε = 10⁻⁵ appears to be clipped everywhere on the image, except in a few places. The mean intensity measured for this image is 5.36875, which is significantly higher than 1. This is the consequence of the operation of division by the small brightness values of the IR–image. Indeed, the mean brightness values for the Vis– and IR–images of the scene 'Guanabara Bay' are 0.566084 and 0.166875, respectively. Their ratio 0.566084/0.166875 ≈ 3.4 is considerably higher than 1, which indicates that a significant part of the positive rt–image will be clipped, which is indeed observed in Table 1. It is interesting to compare these intensity data to the corresponding data for the images of the scene 'NAA Campus', for which the clipping effect is not pronounced. The mean brightnesses for the Vis– and IR–images of the scene 'NAA Campus', measured over the RGB channels, are 0.105165 and 0.175175, respectively. Their ratio is approximately 0.6, which is lower than 1 (compare to the corresponding value 3.4 for the scene 'Guanabara Bay'), and thus the clipping effect is not expected, which is indeed confirmed by the mean intensity value 0.68 measured for the positive rt–image of the scene 'NAA Campus' (compare to the corresponding value 5.36875 > 1 for the scene 'Guanabara Bay'). Visual inspection also confirms this statement. Since the positive rt–image of the scene 'Guanabara Bay' is clipped over a large area, most of its indices are close to zero.

To avoid the clipping effect one has to perform the ε–normalization. Indeed, at ε = 0.6 the clipping effect is not observed for the nt–image of the scene 'Guanabara Bay' (Table 1). The mean intensity measured for this nt–image is 0.755231 < 1 (compare to the corresponding value 5.36875 > 1 before the normalization, for the rt–image). The CVB indices (except C) and the entropy for the nt–image are considerably higher than those for the partial images. In comparison with the other fused images, most of the CVB indices and the entropy are higher. Therefore, the ε–normalization successfully transforms the clipped rt–image into a good–quality image.

It is interesting that for the negative rt–image (Fig. 4) the clipping effect is not as highly pronounced as for the positive rt–image, though it is still present in some places: the ships, the cars on the bridge, and the aircraft engine, for example. The clipping effect for the positive rt–image obtained by the algorithm given by Eq. (11) is the consequence of the fact that u/(υ + ε) ≫ 1 at ε ≪ 1, which in turn results from the fact that u ≫ υ for the partial images of the scene 'Guanabara Bay'. Indeed, for the partial Vis– and IR–images we find the mean brightness values u = 0.566084 > υ = 0.166875. As a result, the measured mean brightness value for the positive rt–image at ε = 10⁻⁵ is 5.37 > 1, whereas for the negative rt–image it is 0.31 < 1. Therefore, the synthesis of a negative rt–image (Fig. 4) instead of a clipped positive rt–image (Table 1, 2nd column, 8th row) is an alternative to the procedure of brightness normalization for avoiding the clipping effect in the phase t–image. However, it should be noted that this possibility is not a universal solution to the clipping problem for the rt–images. We will show below that for the scene 'Hangar' both the positive and the negative rt–images appear to be affected by clipping.

4.3. 'Hangar'

The addresses of the synchronized and registered partial Vis– and IR–images in the Database (Visual and Infrared Database image Fusion, 2020) are respectively 'Hangar/take_3/VIS/VIS_4984.jpg' and 'Hangar/take_3/IR/IR_4984.jpg'. The partial Vis– and IR–images and the results of fusion by different methods are shown in the 3rd column of Table 1.
Fig. 5. Clipping in the positive (a) and negative (b) rt–images of the scene 'Hangar'.
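The clipping behaviour illustrated in Fig. 5 can be checked numerically before display. A minimal sketch of ours (same [0, 1] array conventions as above) measures the fraction of pixels of a raw t–image whose calculated brightness exceeds the displayable maximum:

```python
import numpy as np

def clipped_fraction(u, v, eps=1e-5, positive=True):
    """Fraction of pixels of a raw t-image with calculated brightness
    above 1, i.e. pixels that would be clipped on display."""
    t = u / (v + eps) if positive else v / (u + eps)
    return float(np.mean(t > 1.0))
```

A mean–brightness ratio of the partial images well above 1 (about 3.4 for 'Guanabara Bay' versus about 0.6 for 'NAA Campus') is the simple predictor of heavy clipping of the positive rt–image used in Section 4.2.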
…partial Vis– and IR–images were acquired. For example, the ratio υ/u ≫ 1 signals that in a given area of the phase image one has an object which intensively emits infrared waves. From a military point of view, such an object might be considered a potential target. The relative brightness information can be important for the problem of the identification of the target and its tracking. Second, the phase images also can be considered a source of content information about the targets and their environment, in addition to the amplitude image. In some cases, by some indices or even by the overall quality (by quantitative indices and visual inspection), the quality of the phase images is considerably better than that of the amplitude image. Such an example was presented above for the nt–image at ε = 0.2 of the scene 'NAA Campus' (Fig. 3e).

Interestingly, the amplitude algorithm, given by Eq. (5), and the phase φ–algorithm, given either by Eq. (8) or (9), resemble the so–called indirect HSI (hue–saturation–intensity) transformation for a colored RGB image, [I, V_1, V_2]^tr = M [R, G, B]^tr, where the superscript index tr denotes the transpose operation, M is a 3 × 3 matrix [(Mitchell, 2010), pp. 200–201], I is the intensity, and the hue (H) and saturation (S) are calculated as

S = √(V_1² + V_2²)   (16)
H = arctan(V_1/V_2)   (17)

The comparison of Eq. (16) to Eq. (5) and of Eq. (17) to Eqs. (8) and (9) shows that they are of the same form. However, in Eqs. (16) and (17) the variables V_1 and V_2 are the components of the HSI luminance scheme applied to a Vis color image, whereas in Eqs. (5), (8) and (9) the variables u and υ are the Vis and IR images, respectively. The HSI luminance scheme is known to enhance the contrast of color images. The contrast enhancement is achieved due to the operation of division in Eq. (17) [(Mitchell, 2010), p. 80]. The operation of division was used for shadow detection in color aerial images (Chung et al., 2009; Tu et al., 2001). A contrast–enhanced method for the fusion of a Vis color image and a chromatic IR image, based on the HSI transform of the Vis image, was proposed by Li and Wang (Li and Wang, 2007) and claimed to be promising for target tracking and surveillance tasks when the corresponding color visible images are available. The formal similarity between the HSI luminance scheme and the CFIF fusion method explains the enhanced contrast of the amplitude and phase CFIF images. Below we discuss the physical sense and main properties of the amplitude and phase algorithms and then consider the advantages and limitations of the images fused by these algorithms.

5.1. Amplitude algorithms

The raw amplitude (rAmp–) algorithm, Eq. (5), is an analog of the simple addition algorithm

s = u + υ   (18)

It is clear from Eq. (18) that the maximal possible brightness value, obtained at the maximal values of the partial images u_max = 1 and υ_max = 1, is s_max = 2 for the s–algorithm, whereas for the rAmp–algorithm it is |ψ|_max = √2, which implies that with the s–algorithm one reaches the clipping threshold s = 1 at lower brightness values of the partial images than for the rAmp–image. Indeed, for example at u = 0.8, υ = 0.6 one has |ψ| = 1 while s = 1.4, which means that at these brightness values the raw s–image will be clipped and thus should be normalized, while at the same partial brightness values the rAmp–image just reaches the maximal possible brightness |ψ| = 1 and, therefore, does not yet need the normalization.

The normalization of the raw s–image usually is achieved by replacing the simple addition algorithm by the AF or the weighted fusion (WF) algorithm. However, the normalization inevitably results in the lowering of the brightness, which in turn is accompanied by lowering of the CVB indices, and thus it should be applied only if the brightness values exceed the clipping threshold, i.e. when the calculated brightness becomes higher than 1. As long as the normalization can be avoided, the raw image is preferable. Therefore the rAmp–image, which is the RMS counterpart of the s–algorithm, can be used without normalization for a broader range of brightness values of the partial images in comparison with the s–algorithm. The advantage of the normalized nAmp–algorithm over the AF–algorithm is that the normalization in the nAmp–algorithm is achieved by dividing the brightness values of the rAmp–image by √2, while for the AF–algorithm the raw fused brightness of the s–image is divided by 2, which means that at the same brightness values of the partial images the nAmp–image will show higher brightness, and thereby better CVB indices, than those obtained with the AF–algorithm. Therefore, the nAmp–algorithm of the CFIF method is an advanced alternative to the AF–algorithm.

The amplitude rAmp– and nAmp–algorithms are of the same simplicity and convenience as the AF–algorithm, but the quality indices of the raw (rAmp–) and normalized (nAmp–) amplitude images are considerably higher than those of the AF–image. Although the AF–algorithm is the simplest fusion algorithm and its capability is often considered rather primitive, it remains popular despite the availability of other advanced, sophisticated fusion methods in the literature. The AF–method is an express method, which is preferable when quick, simple in use, non–demanding, real–time image fusion is needed. The amplitude rAmp– and nAmp–algorithms, in essence, are advanced upgrades of the AF–method with improved quality indices and thus can serve as an express, real–time fusion method of the AF type with improved performance.

5.2. Phase algorithms

In Section 2 we introduced two forms of the raw (non–normalized) phase algorithms: the t–algorithms (Eqs. (6) and (7)) and the φ–algorithms (Eqs. (8) and (9)). To avoid division by 0 we added a small parameter ε in Eqs. (10)–(13) for the t– and φ–algorithms. We have checked that the visual appearance of the t– and φ–images does not change under variation of small ε, at least up to 10⁻³. Therefore, with ε < 10⁻³ the t– and φ–algorithms can still be considered raw algorithms. Throughout the paper, we set ε = 10⁻⁵ for the rt– and rφ–algorithms.

5.2.1. The t–algorithms

The brightness values which can be obtained with the rt–algorithm fall in the range [0, ∞[. All the brightness values exceeding the brightness limit 1 (forbidden brightness values) will be clipped by the image software when displaying the image on the computer display, and thus the information in the clipped area will be lost. Forbidden brightness values are obtained when the brightness of the image standing in the numerator in Eqs. (11)–(13) is higher than that of the image in the denominator. The positive (negative) rt– and rφ–images, Eqs. (11) and (13), will not be affected by the clipping if the brightness of the IR– (Vis–) image is higher than that of the Vis– (IR–) image. It is understood that the places of the image which are not clipped in the positive (negative) rt–image will be clipped in the negative (positive) rt–image, and vice versa. The latter statement is illustrated in Fig. 5 for the scene 'Hangar'. In many cases the clipping worsens the image, as it does for example for the scene 'Hangar', whereas in some cases the clipping effect might be useful, since it highlights potential targets, such as the ships, the cars on the bridge, and the aircraft engine in the negative rt–image of the scene 'Guanabara Bay' (Fig. 4). Such clipped places on the image can serve as markers for the targets. Computer lock–on, recognition, and tracking of the potential targets will be facilitated by the enhanced contrast in such images, where clipping is unambiguously related to the activity of the target accompanied by the
infrared emission; Fig. 4 is an example. In some sense, Fig. 4 is akin to a typical radar picture, where the bright spots imaging the targets are observed on a dark background. Such rt–images, with targets being marked due to the clipping effect, might be useful in the triple fusion of Vis, IR, and radar images, especially at the stage of image alignment (registration), since the highly contrasted targets will serve as key points which can be superimposed with those on the radar image.

It is worth noting that the clipping effect in the rt–images is the result of the operation of division in the rt–algorithms, Eqs. (10) and (11), which provides high local contrast. The clipping, which can be considered an undesirable effect, is accompanied by high contrast, which is desirable for many tasks, target tracking for example. A balance between these two properties of the rt–algorithm can be achieved via the ε–normalization in Eqs. (10) and (11).

5.2.2. The φ–algorithm

Interestingly, the rφ–algorithm can be considered a normalized version of the rt–algorithm. Indeed, the inverse function arctan, applied to the rt–algorithm in Eqs. (12) and (13), reduces the range of the brightness values achievable with the φ–algorithm to [0, π/2[ (compare to [0, ∞[ for the rt–algorithm). The rφ–images appear to be less affected by clipping. A smooth increase of the parameter ε in Eqs. (12) and (13) diminishes the undesirable clipping. If a full–color Vis–image is used as a partial image for the fusion with a monochrome IR–image, the ε–normalization transforms the pseudo–color rφ–image of the scene 'NAA Campus' into the full–color nφ–image, still containing all key features of both partial images. Fig. 1 shows that almost all quality indices of the nφ–algorithm are higher in comparison with those obtained with the LPT, DWT, and other CFIF methods.

6. Concluding remarks

We propose a new fusion method using a complex function as a template for the fusion of partial Vis– and IR–images (u and υ, respectively) and perform a quality assessment of the fused images. In the framework of the proposed fusion method, one of the two partial images is chosen as the real part and the other one as the imaginary part of the complex function ψ, which thereby plays the role of a complex form of the fused image. Since any complex function can be uniquely defined by its amplitude |ψ| and phase φ, one correspondingly arrives at two groups of algorithms (amplitude and phase) for the fusion of the partial images. The amplitude image is calculated as the square root of the sum of squares of the partial images, |ψ| = √(u² + υ²). The phase images can be synthesized using either the t– or the φ–algorithms, where t = tan φ is the ratio of the partial images and consequently φ = arctan[t]. Since there is no restriction on which of the two functions (u or υ) has to be chosen as the real part and which one has to be the imaginary part, the complex function ψ can be built in two forms, ψ_pos = υ + iu or ψ_neg = u + iυ. It turns out that, since the Vis–image u is a positive image whereas in most cases the IR–image υ looks rather like a negative image, the phase images t^ε_pos = u/(υ + ε) and φ^ε_pos = arctan[u/(υ + ε)] appear to be positive images, whereas the images calculated by the algorithms t^ε_neg = υ/(u + ε) and φ^ε_neg = arctan[υ/(u + ε)] look similar to the IR–image, resembling a negative image and bearing the correspondence between the complementary brightness contrasts and complementary colors (if the Vis–image is a full–color image) of conventional positive and negative images. The same fully applies to the normalized versions of the phase algorithms, φ^{εn}_pos = (2/π) arctan[u/(υ + ε)] and φ^{εn}_neg = (2/π) arctan[υ/(u + ε)].

The robustness of the proposed method is provided by the simplicity of the algorithms, the result of which can be analytically predicted by examining the behavior of the mathematical functions for the amplitude and phase algorithms. The operation of division in the phase t– and φ–algorithms implies the mathematical singularity of the division by zero. The singularity is escaped by biasing the denominator in the t– and φ–algorithms with a small parameter ε ≪ u, υ. By a continuous increase of ε one reduces the range of the brightness values for the phase algorithms. In Fig. 3e we demonstrated that, for a full–color partial Vis image, the increase of ε transforms the pseudo–colors of the positive t– and φ–images, resulting from the mathematical operation of division, towards the natural colors of the partial full–color Vis–image, still preserving the details from both partial images. However, it should be noted that the transformation towards the natural colors is not always needed. Some fusion methods introduce pseudo–colors on purpose to enhance the visibility of a target (Huang et al., 2007). The pseudo–colors obtained with the phase algorithms at small ε might be preferable if they provide better visibility of the target.

The amplitude and phase images should not be considered as alternatives to each other. They are complementary forms of the same fused image, each of which carries specific information. The nAmp–image is the RMS analog of the AF–image. In terms of set theory, both the AF– and nAmp–images are unions of the sets of objects which belong to the partial Vis– and IR–images. Both the AF– and nAmp–algorithms are based on the operation of summation and thus carry the composition information about the sets of the objects combined in the fused image from the partial images. The same concerns other fusion methods, such as LPT, DWT, and other methods based on the combination of the elements of the partial images into the fused image.

On the contrary, the phase algorithms are based on the operation of division and thus carry the relative information about the ability of the objects to reflect or emit IR light waves. The operation of division in the phase algorithms broadens the brightness range, enhancing the contrast of objects which intensively emit IR waves and thereby visualizing the distribution of the temperature. For military sightseeing systems, the latter can serve as a marking of potential targets (such as the ships, the cars on the bridge, and the aircraft engine in Fig. 4, which appear as bright spots on a dark background). In medical imaging, high contrast in the phase images can signal a temperature anomaly. In microscopy observations, the enormously high contrast obtained in the raw phase images can be used for particle (including living cell) tracking. Importantly, in addition to the relative information, the phase images also carry the composition information, and in some cases the phase images might serve as alternatives to the images fused by other methods.

The enhancement of the quality of the images fused with the phase and amplitude algorithms is obtained due to the mathematical forms of the algorithms, contrary to the LPT, DWT, and other advanced fusion methods, which are based on sophisticated processing procedures. Therefore, the CFIF method qualifies as an express, real–time fusion method with advanced possibilities comparable to those obtained with other modern fusion techniques.

Quality indices of the phase images are governed by the operation of division, which broadens the brightness range and thereby enhances the quality indices. However, when the brightness table of the fused image is represented as an image on a computer display, the brightness values which are higher than 1 are clipped. If an image is analyzed, treated, or assessed by a computer program without the participation of a human operator, there is no need to display the brightness table, and thus there will be no clipping of data in the brightness table. Quality indices calculated from the brightness tables before imaging, and thereby before clipping, are much higher than those measured after image clipping. Since known fusion methods usually do not exceed the brightness range [0, 1], the quality indices measured from the brightness tables calculated by the raw phase algorithms are higher than those measured for images fused by the conventional fusion methods. Processing of the brightness tables of the raw amplitude and phase images before imaging might be important for target–tracking problems.
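The last point can be made concrete with a small sketch of ours: the same no–reference index takes different values on the raw brightness table and on its displayed, clipped counterpart.

```python
import numpy as np

def sd_before_after_clipping(raw_table):
    """Standard deviation of a raw fused brightness table versus the
    same table after display clipping to [0, 1]."""
    displayed = np.clip(raw_table, 0.0, 1.0)
    return float(np.std(raw_table)), float(np.std(displayed))
```

For a raw t–image with many values above 1, the first number can exceed the second considerably, which is the effect exploited when the fused table is processed by a machine rather than shown to an operator.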
The proposed CFIF method is based on simple mathematical expressions: the amplitude image is calculated as the square root of the sum of squares of the partial images, and the phase image is calculated as the ratio (or the arctan of the ratio) of the partial images. Running times for such simple algorithms usually are short, being on scales at the limit of the resolution. Nevertheless, we have checked the running time for the amplitude and phase algorithms. For all the fused images calculated by the amplitude and phase algorithms shown in Table 1, we find the running time to be on the order of 10⁻⁶ s. Such a short running time confirms that the CFIF method is indeed an express fusion method. For the images fused by the LPT and DWT methods, we find the running times to be on the orders of 0.05 s and 0.5 s, respectively. According to (Ma et al., 2019), for the methods based on complicated image processing procedures the typical running time ranges from 10⁻³ s for the LPT method and 10⁻¹ s for the DWT method up to several tens of seconds for the pulse coupled neural network method (Kong et al., 2014) and almost one hundred seconds for the adaptive sparse representation (Liu and Wang, 2014).

In this paper, a complex function is used as a template for the fusion of two single Vis– and IR–images. If multiple Vis and IR images are available, then the partial images have to be sorted into two sets, Vis and IR, respectively. The sorted multiple partial images have to be pre–fused within the given set using any desirable fusion methods available in the literature or their combinations. The two functions u and υ resulting from the pre–fusion within the Vis and IR sets are then used to form a complex fused image ψ, which then becomes a template for the formation of the amplitude and phase images.

Finally, it is important to note that the proposed CFIF method is not simply one more method among many other existing fusion methods. The importance of the introduction of the CFIF method is at least three–fold. First, the transition from image fusion based on the calculus rules within the field of real numbers to that within the field of complex numbers expands the possibilities for the formation of fusion algorithms. For example, within the field of real numbers, the operation of addition implies only one principal fusion result, namely that of the weighted fusion (WF) algorithm. The two other algorithms (average fusion and simple addition) are particular cases of the WF algorithm. Within the field of complex numbers, the operation of addition provides at least two principally different algorithms, namely the amplitude algorithm and the phase algorithms (positive and negative). In turn, the two latter (positive and negative phase algorithms) split into four algorithms, calculated either as the phase or as the tangent of the phase. The tangent of the phase is the ratio of the partial images. Therefore, the complex form of the fused image provides algorithms based on the root mean square (RMS) operation and on the ratio of the two partial images. As we have mentioned above in this section, these two principally different algorithms (amplitude and phase) bring to the fused image information of two different independent types: the composition information, via the RMS amplitude algorithm, and the relative information, via the operation of division in the phase algorithms.

Second, the choice of one of the partial images as the real part and the other one as the imaginary part of the complex function accounts for the fact that the two partial images (Vis and IR) are of physically different origin and should not simply be added, because they belong to two different independent sets of images, though they describe the same observation scene. In our opinion, because of the different origins of the Vis and IR images, their fusion based on mathematical operations is more correct within the field of complex numbers.

Third, in this paper we have designed the fused image in the form of a scalar complex function. In principle, one can also form a complex vector ψ_0 = [u, iυ]^tr of the two independent Vis and IR images (the superscript tr denotes the transpose operation, which implies that the expression on the right–hand side is a column vector). In such a case, there is a formal analogy of the ψ_0 vector with the representation of the complex electric field vector of an elliptically polarized light wave. Consequently, a transformed complex vector ψ = Jψ_0 (where J is a 2 × 2 matrix of complex elements) can be obtained using the analogy with the Jones matrix formalism (Jones, 1948; Azzam and Bashara, 1977; Nastyshyn et al., 2018, 2019) for the description of the light wave transformation. Therefore, the approach of the complex function presents not only new fusion algorithms; it opens a way for the development of a new matrix fusion formalism in analogy with the Jones matrix formalism for light wave transformations. The corresponding studies are in progress and show promising preliminary results.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgement

We are grateful to the team of the SMT/COPPE/Poli/UFRJ and IME–Instituto Militar de Engenharia within the CAPES/Pró–Defesa Program, in a partnership with IPqM–Instituto de Pesquisa da Marinha, for the development of and kind permission for free downloading of the partial Vis– and IR–images from the Visual and Infrared Database image Fusion (Visual and Infrared Database image Fusion, 2020).

References

Malviya, A., Bhirud, S.G., 2009. Image fusion of digital images. Int. J. Rec. Trends Eng. 2 (3), 146–148.
Stathaki, T. (Ed.), 2008. Image Fusion: Algorithms and Applications. Academic Press, Amsterdam.
Mitchell, H.B., 2010. Image Fusion: Theories, Techniques and Applications. Springer-Verlag, Berlin.
Liu, Z., Yin, H., Fang, B., Chai, Y., 2015. A novel fusion scheme for visible and infrared images based on compressive sensing. Opt. Commun. 335, 168–177. https://doi.org/10.1016/j.optcom.2014.07.093.
Cui, G., Feng, H., Xu, Z., Li, Q., Chen, Y., 2015. Detail preserved fusion of visible and infrared images using regional saliency extraction and multi-scale image decomposition. Opt. Commun. 341, 199–209. https://doi.org/10.1016/j.optcom.2014.12.032.
Miao, Q.-G., Shi, C., Xu, P.-F., Yang, M., Shi, Y.-B., 2011. A novel algorithm of image fusion using shearlets. Opt. Commun. 284 (6), 1540–1547. https://doi.org/10.1016/j.optcom.2010.11.048.
Ma, J., Ma, Y., Li, C., 2019. Infrared and visible image fusion methods and applications: a survey. Inf. Fusion 45, 153–178.
Li, S., Kang, X., Hu, J., 2013. Image fusion with guided filtering. IEEE Trans. Image Process. 22 (7), 2864–2875.
Yang, B., Li, S., 2014. Visual attention guided image fusion with sparse representation. Optik 125 (17), 4881–4888.
Liu, Y., Liu, S., Wang, Z., 2015. A general framework for image fusion based on multi-scale transform and sparse representation. Inf. Fusion 24, 147–164.
Bavirisetti, D.P., Xiao, G., Liu, G., 2017. Multi-sensor image fusion based on fourth order partial differential equations. In: International Conference on Information Fusion, pp. 1–9.
Bavirisetti, D.P., Dhuli, R., 2016. Two-scale image fusion of visible and infrared images using saliency detection. Infrared Phys. Technol. 76, 52–64.
Khaustov, Ya.Ye., Khaustov, D.Ye., Lychkovskyy, E., et al., 2019. Image fusion for a target sightseeing system of armored vehicles. Military Technical Collection 21, 28–37. https://doi.org/10.33577/2312-4458.21.2019.28-37.
Khaustov, Ya.Ye., Khaustov, D.Ye., Nastishin, Yu.A., et al., 2019b. Current state and prospects of development of sightseeing complexes of armored armament. Military Technical Collection 20, 48–57. https://doi.org/10.33577/2312-4458.20.2019.48-57.
Khaustov, Ya.Ye., Khaustov, D.Ye., Lychkovskyy, E., et al., 2020. Fusion of visible and infrared images via complex function. Military Technical Collection 22, 20–31. https://doi.org/10.33577/2312-4458.22.2020.20-31.
Kong, W., Zhang, L., Lei, Y., 2014. Novel fusion method for visible light and infrared images based on NSST-SF-PCNN. Infrared Phys. Technol. 65, 103–112.
Hryvachevskyi, A.P., 2018. Improvement of the informativity of multispectral monitoring systems by fusion of images from visible and infrared ranges. Dissertation. Lviv Polytechnic National University, Lviv, Ukraine. https://lpnu.ua/research/disscoun/d-3505210/gryvachevskyy-andriy-petrovych.
Bogdanov, P., Romanov, Yu.N., 2012. Quality assessment of digital images. Mechanics, Control and Informatics 9, 218–226. https://www.elibrary.ru/item.asp?id=20901410.
Bondarenko, M.A., Drynkin, V.N., Nabokov, C.A., Pavlov, Yu.V., 2017. Adaptive algorithm for selecting informative channels in onboard multispectral video systems. Prog. Syst. Comput. Meth. 1, 46–52. https://doi.org/10.7256/2454-0714.2017.1.21952.
Xydeas, C.S., Petrović, V., 2000. Objective image fusion performance measure. Electron. Lett. 36 (4), 308–309.
Cvejic, N., Loza, A., Bull, D., Canagarajah, N., 2005. A similarity metric for assessment of image fusion algorithms. Int. J. Signal Process. 2 (3), 178–182.
Wang, Z., Bovik, A.C., 2002. A universal image quality index. IEEE Signal Process. Lett. 9 (3), 81–84.
Visual and Infrared Database image Fusion, 2020. Provided for free downloading by the SMT/COPPE/Poli/UFRJ and IME-Instituto Militar de Engenharia within the CAPES/Pró-Defesa Program, in a partnership with IPqM-Instituto de Pesquisa da Marinha. http://www02.smt.ufrj.br/~fusion/ (accessed 31 July 2020).
Peng, C., Turiv, T., Guo, Y., Wei, Q.-H., Lavrentovich, O.D., 2016. Command of active matter by topological defects and patterns. Science 354 (6314), 882–885.
Chung, K.-L., Lin, Y.-R., Huang, Y., 2009. Efficient shadow detection of color aerial images based on successive thresholding scheme. IEEE Trans. Geosci. Remote Sens. 47, 671–682.
Tu, T.-M., Su, S.-C., Shyu, H.-C., Huang, P.S., 2001. A new look at IHS-like image fusion methods. Inf. Fusion 2, 177–186.
Li, G., Wang, K., 2007. Merging infrared and color visible images with a contrast enhanced fusion method. Proc. SPIE 6571, Multisensor, Multisource Information Fusion: Architectures, Algorithms, and Applications, 657108, 1–12. https://doi.org/10.1117/12.720792.
Huang, G., Ni, G., Zhang, B., 2007. Visual and infrared dual-band false color image fusion method motivated by Land's experiment. Opt. Eng. 46 (2), 027001. https://doi.org/10.1117/1.2709851.
Liu, Y., Wang, Z., 2014. Simultaneous image fusion and denoising with adaptive sparse representation. IET Image Process. 9 (5), 347–357.
Jones, R.C., 1948. A new calculus for the treatment of optical systems. VII. Properties of the N-matrices. J. Opt. Soc. Am. 38 (8), 671.
Azzam, R.M.A., Bashara, N.M., 1977. Ellipsometry and Polarized Light. North-Holland, Amsterdam.
Nastyshyn, S.Yu., Bolesta, I.M., Tsybulia, S.A., Lychkovskyy, E., Yakovlev, M.Yu., Ryzhov, Ye., Vankevych, P.I., Nastishin, Yu.A., 2018. Differential and integral Jones matrices for a cholesteric. Phys. Rev. A 97, 053804.
Nastyshyn, S.Yu., Bolesta, I.M., Tsybulia, S.A., Lychkovskyy, E., Fedorovych, Z.Ya., Khaustov, D.Ye., Ryzhov, Ye., Vankevych, P.I., Nastishin, Yu.A., 2019. Optical spatial dispersion in terms of Jones calculus. Phys. Rev. A 100, 013806. https://doi.org/10.1103/PhysRevA.100.013806.