Image Transform and Image Fusion Techniques
❖ This pixel distribution is determined by the absorption/reflection spectra of the imaged material. This clustering of
the pixels is termed the data structure (Crist and Kauth, 1986).
❖ The principal axes of this data structure are not necessarily aligned with the axes of the data space (defined as the
bands of the input image). They are more directly related to the absorption spectra.
❖ For viewing purposes, it is advantageous to rotate the N-dimensional space such that one or two of the data
structure axes are aligned with the Viewer X and Y axes.
❖ In particular, you could view along the largest data-structure axes, i.e., those produced by the absorption peaks
of special interest for the application.
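Aligning the data-structure axes with the viewing axes is exactly what a principal-component rotation does. The sketch below illustrates the idea on synthetic pixel values; the band count, random data, and variable names are assumptions for demonstration, not real sensor values.

```python
import numpy as np

# Hypothetical 4-band pixel cloud with correlated bands (the "data structure").
rng = np.random.default_rng(0)
pixels = rng.normal(size=(1000, 4))        # 1000 pixels x 4 bands
pixels[:, 1] += 0.8 * pixels[:, 0]         # correlate bands to create structure

centered = pixels - pixels.mean(axis=0)
cov = np.cov(centered, rowvar=False)       # band-to-band covariance
eigvals, eigvecs = np.linalg.eigh(cov)     # principal axes of the data structure
order = np.argsort(eigvals)[::-1]          # largest-variance axis first
rotated = centered @ eigvecs[:, order]     # align structure axes with viewer X, Y, ...

# After rotation the components are uncorrelated: covariance is ~diagonal,
# with variances sorted in decreasing order.
rot_cov = np.cov(rotated, rowvar=False)
```

Displaying the first two rotated components as Viewer X and Y then shows the widest spread of the data structure.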
Colour space Transform
• To describe the visually perceived colour of an image, instead of RGB components we sometimes use hue,
saturation, and intensity (HSI or IHS), which correspond to the subjective sensations of colour, colour purity,
and brightness, respectively.
• Images in red–green–blue (RGB) color space need to be transformed to other color spaces for image
processing or analysis.
• For example, the well-known hue-saturation-intensity (HSI) color space, which separates hue from saturation
and intensity and is similar to the color perception of humans, can aid many computer vision applications.
• For high-dimensional images, such as multispectral or hyperspectral images, transforming the image to a
color space that separates hue from saturation and intensity is similarly useful.
• The literature proposes many IHS transformation algorithms for converting RGB values; some are also
named HSV (hue, saturation, value) or HLS (hue, luminance/lightness, saturation). While the complexity of
the models varies, they produce similar values for hue and saturation.
[Figure: (a) RGB color cube and (b) HSI hexagon model]
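The RGB-to-HSI conversion can be sketched with the classic geometric formulas (intensity as the band mean, saturation from the minimum band, hue as an angle). This is an illustrative implementation, not a library routine; the `eps` guard and degree convention are assumptions.

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert RGB values in [0, 1] to HSI (hue in degrees, S and I in [0, 1])."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-12                                   # avoid division by zero
    i = (r + g + b) / 3.0                         # intensity: mean of the bands
    s = 1.0 - np.minimum(np.minimum(r, g), b) / (i + eps)  # saturation
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    h = np.where(b > g, 360.0 - theta, theta)     # hue angle in degrees
    return np.stack([h, s, i], axis=-1)
```

For example, pure red maps to hue 0°, green to 120°, and blue to 240°, matching the hexagon model's arrangement.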
Fourier transform
❖ The Fourier transform is a linear transformation that calculates the coefficients of the sine and cosine
terms needed to adequately represent the image.
❖ Fourier transformations are typically used for the removal of noise such as striping, spots, or vibration in
imagery by identifying periodicities (areas of high spatial frequency)
❖ Fast Fourier Transform (FFT), a classical image filtering technique, is used to convert a raster image from
the spatial domain into a frequency domain image
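The destriping workflow described above can be sketched as: transform to the frequency domain, zero (notch) the isolated peaks that the periodic stripes produce, and invert. The synthetic scene and stripe frequency below are assumptions chosen so the effect is easy to verify.

```python
import numpy as np

size = 128
y = np.arange(size)
scene = np.outer(np.ones(size), np.linspace(0.0, 1.0, size))  # smooth ramp "scene"
stripes = 0.5 * np.sin(2 * np.pi * 16 * y / size)[:, None]    # horizontal striping
noisy = scene + stripes

spectrum = np.fft.fftshift(np.fft.fft2(noisy))  # spatial -> frequency domain
center = size // 2
notch = spectrum.copy()
notch[center - 16, :] = 0                       # suppress the stripe frequency...
notch[center + 16, :] = 0                       # ...and its conjugate peak
cleaned = np.real(np.fft.ifft2(np.fft.ifftshift(notch)))
```

Because the stripes are a pure sinusoid at an integer frequency, all their energy sits in the two notched rows, so the cleaned image recovers the scene almost exactly; real striping is rarely this clean, but the same notch principle applies.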
Image Fusion
✓ Image fusion is the process of combining relevant information from two or more images into a single image.
✓ Image fusion is a method for combining the spectral information of a coarse-resolution image with a finer
spatial resolution image. The resulting merged image is a product that synergistically integrates the
information provided by various sensors or by the same sensor (Simone et al., 2002); it aids human visual
perception, enables faster interpretation, and can help in extracting more features (Wen and Chen, 2004).
✓ Further, it also helps in sharpening the images, improving geometric corrections, enhancing certain features
that are not visible in either of the images, replacing the defective data, and complementing the data sets for
better decision-making.
✓ The advantages of image fusion include image sharpening, feature enhancement, improved classification,
and the creation of stereo data sets. Multisensor image fusion provides benefits in terms of the range of
operation, spatial and temporal characteristics, system performance, reduced ambiguity, and improved
reliability
✓ Based on the processing level, image fusion techniques can be divided into three categories: pixel level,
feature level, and symbol level/decision level.
✓ The fusion of two data sets can be done in order to obtain one single data set with the qualities of both (Saraf,
1999). For example, the low-resolution multispectral satellite imagery can be combined with the higher resolution
radar imagery by fusion technique to improve the interpretability of fused/merged images.
✓ The resultant data product has the advantages of high spatial resolution, structural information (from radar
images), and spectral resolution (from optical and infrared bands). Thus, with the help of all this cumulative
information, the analyst can explore most of the linear and anomalous features as well as lithologies.
▪ IHS Transform Fusion
➢ The R, G, and B bands of the multispectral image are transformed into IHS components; the intensity
component is replaced with the pan image, and the inverse transformation is performed to obtain a high
spatial resolution multispectral image.
➢ IHS fusion can enhance the spatial details of the multispectral image and improve the textural characteristics of
the fused product, but the fused image exhibits serious spectral distortion.
➢ The IHS transform is used for geologic mapping because it allows diverse forms of spectral and spatial
landscape information to be combined into a single data set for analysis.
Although the IHS method has been widely used, it cannot decompose an
image into different frequency components, such as higher or lower
frequencies, in frequency space. Hence the IHS method cannot be used to
enhance such frequency-specific image characteristics. The color distortion
of the IHS technique is often significant. To reduce the color distortion, the
PAN image is histogram-matched to the intensity component and stretched
before the reverse transform.
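The IHS substitution step can be sketched under two common simplifying assumptions: intensity is taken as the band mean (the linear, "fast IHS" variant), and the pan band is matched to the intensity by mean/standard-deviation adjustment before substitution. The function name and inputs are illustrative.

```python
import numpy as np

def ihs_pansharpen(ms, pan):
    """ms: (H, W, 3) upsampled multispectral image; pan: (H, W) panchromatic band."""
    intensity = ms.mean(axis=-1)              # linear intensity component
    # match pan statistics to intensity to limit spectral distortion
    matched = (pan - pan.mean()) / (pan.std() + 1e-12) * intensity.std() \
        + intensity.mean()
    # inverse-transform shortcut: add the intensity difference to every band,
    # which replaces I with the matched pan while preserving hue and saturation
    return ms + (matched - intensity)[..., None]
```

As a sanity check, if the pan band already equals the intensity component, the fusion leaves the multispectral image unchanged.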
▪ Brovey Transform Fusion
➢ It is a simple method for combining data from multiple sensors with the limitation that only three bands are
involved.
➢ Its purpose is to normalize the three multispectral bands used for RGB display and to multiply the result by any
other desired data to add the intensity or brightness component to the image.
➢ Consequently, the Brovey Transform should not be used if preserving the original scene radiometry is important.
However, it is good for producing RGB images with a higher degree of contrast in the low and high ends of the
image histogram and for producing visually appealing images.
➢ Since the Brovey Transform is intended to produce RGB images, only three bands at a time should be merged
from the input multispectral scene, such as bands 3, 2, 1 from a SPOT or Landsat TM image or 4, 3, 2 from a
Landsat TM image.
➢ The resulting merged image should then be displayed with bands 1, 2, 3 to RGB.
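The Brovey normalization described above can be sketched in a few lines: each of the three display bands is divided by the band sum and multiplied by the high-resolution pan band. The function name and the `eps` guard are assumptions; real use requires co-registered, resampled imagery.

```python
import numpy as np

def brovey_fuse(ms, pan):
    """ms: (H, W, 3) multispectral bands chosen for RGB display; pan: (H, W) band."""
    total = ms.sum(axis=-1, keepdims=True) + 1e-12   # avoid division by zero
    return ms / total * pan[..., None]               # normalized bands x brightness
```

A consequence of the normalization is that the fused bands sum to the pan value at every pixel, which is why the original scene radiometry is not preserved.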