International Journal of Information Sciences and Techniques (IJIST) Vol.2, No.6, November 2012
DOI : 10.5121/ijist.2012.2607 73
ADOPTION AND IMPLEMENTATION OF
SELF-ORGANIZING FEATURE MAPS FOR
IMAGE FUSION
Dr. Anna Saro Vijendran1 and G. Paramasivam2
1 Director, SNR Institute of Computer Applications, SNR Sons College, Coimbatore, Tamilnadu, INDIA. saroviji@rediffmail.com
2 Asst. Professor, Department of Computer Applications, SNR Sons College, Coimbatore, Tamilnadu, INDIA. pvgparam@yahoo.co.in
ABSTRACT
A novel image fusion algorithm based on the Self-organizing Feature Map is proposed in this paper, aiming to produce quality images. Image fusion integrates complementary and redundant information from multiple images of the same scene to create a single composite image that contains all the important features of the original images. The resulting fused image is thus more suitable for human and machine perception and for further image processing tasks. Existing fusion techniques based on direct operation on either pixels or segments fail to produce fused images of the required quality and are mostly application specific. Existing segmentation algorithms become complicated and time-consuming when multiple images are to be fused. A new method of segmenting and fusing gray scale images using Self-organizing Feature Maps (SOM) is proposed in this paper. The Self-organizing Feature Map is used to produce multiple slices of the source and reference images based on various combinations of gray levels, and these can be fused dynamically depending on the application. The proposed technique is applied to and analyzed for the fusion of multiple images. The technique is robust in the sense that there is no loss of information, owing to the properties of Self-organizing Feature Maps; noise in the source images is removed during the processing stage, and the fusion of multiple images is performed dynamically to obtain the desired results. Experimental results demonstrate that, for quality multifocus image fusion, the proposed method performs better than some popular image fusion methods in both subjective and objective quality.
KEYWORDS
Image Fusion, Image Segmentation, Self Organizing Feature Maps, Code Book Generation, Multifocus
Images, Gray Scale Images
1. INTRODUCTION
Nowadays, image fusion has become an important subarea of image processing. For one object or
scene, multiple images can be taken from one or multiple sensors. These images usually contain
complementary information. Image fusion is the process of combining information from two or
more images of a scene into a single composite image that is more informative and is more
suitable for visual perception or computer processing. The objective in image fusion is to reduce
uncertainty and minimize redundancy in the output while maximizing relevant information
particular to an application or task. Image fusion has become a common term used within medical
diagnostics and treatment. Given the same set of input images, different fused images may be
created depending on the specific application and what is considered relevant information. There
are several benefits in using image fusion like wider spatial and temporal coverage, decreased
uncertainty, improved reliability and increased robustness of system performance. Often a single
sensor cannot produce a complete representation of a scene. Successful image fusion significantly
reduces the amount of data to be viewed or processed without significantly reducing the amount
of relevant information.
Image fusion algorithms can be categorized into pixel, feature and symbolic levels. Pixel-level
algorithms work either in the spatial domain [1, 2] or in the transform domain [3, 4 and 5].
Although pixel-level fusion is a local operation, transform domain algorithms create the fused
image globally. By changing a single coefficient in the transformed fused image, all image values
in the spatial domain will change. As a result, in the process of enhancing properties in some
image areas, undesirable artifacts may be created in other image areas. Algorithms that work in
the spatial domain have the ability to focus on desired image areas, limiting change in other areas.
Multiresolution analysis is a popular method in pixel-level fusion. Burt [6] and Kolczynski [7]
used filters with increasing spatial extent to generate a sequence of images from each image,
separating information observed at different resolutions. Then at each position in the transform
image, the value in the pyramid showing the highest saliency was taken. An inverse transform of
the composite image was used to create the fused image. In a similar manner, various wavelet
transforms can be used to fuse images. The discrete wavelet transform (DWT) has been used in
many applications to fuse images [4]. The dual-tree complex wavelet transform (DT-CWT), first proposed by Kingsbury [8], was improved by Nikolov [9] and Lewis [10] to outperform most other gray-scale image fusion methods.
Feature-based algorithms typically segment the images into regions and fuse the regions using
their various properties [10–12]. Feature-based algorithms are usually less sensitive to signal-
level noise [13]. Toet [3] first decomposed each input image into a set of perceptually
relevant patterns. The patterns were then combined to create a composite image containing all
relevant patterns. A mid-level fusion algorithm was developed by Piella [12, 15] where the
images are first segmented and the obtained regions are then used to guide the multiresolution
analysis.
Recently, methods have been proposed to fuse multifocus source images using divided blocks or segmented regions instead of single pixels [16, 17, 18]. All segmented region-based methods depend strongly on the segmentation algorithm. Unfortunately, the segmentation algorithms, which are of vital importance to fusion quality, are complicated and time-consuming. The common transform approaches for the fusion of multifocus images include the discrete wavelet transform (DWT) [19], the curvelet transform [20] and the nonsubsampled contourlet transform (NSCT) [21]. Recently, a new multifocus image fusion and restoration algorithm based on sparse representation has been proposed by Yang and Li [22]. A new multifocus image fusion method based on homogeneity similarity and focused-region detection was proposed in 2011 by Huafeng Li, Yi Chai, Hongpeng Yin and Guoquan Liu [23].
Most traditional image fusion methods are based on the assumption that the source images are noise free, and they perform well when this assumption is satisfied. Traditional noisy-image fusion methods usually denoise the source images first and then fuse the denoised images. The multifocus image fusion and restoration algorithm proposed by Yang and Li [22] performs well with both noisy and noise-free images, and outperforms traditional fusion methods in terms of fusion quality and noise reduction in the fused output. However, this scheme is complicated and time-consuming, especially when the source images are noise free. The image fusion algorithm based on homogeneity similarity proposed by Huafeng Li, Yi Chai, Hongpeng Yin and Guoquan Liu [23] aims at solving the fusion problem for clean and noisy multifocus images. Further, in any region-based fusion algorithm, the fusion results are affected by the performance of the segmentation algorithm. The various segmentation algorithms are
based on thresholding and clustering, but the partition criteria used by these algorithms often generate undesired segmented regions.
To overcome the aforementioned problems, a new method for segmentation using Self-organizing Feature Maps, which in turn allows images to be fused dynamically to the desired degree of information retrieval depending on the application, is proposed in this paper. The proposed algorithm is applicable to any type of image, noisy or clean. The method is simple, and since the mapping of the image is carried out by Self-organizing Feature Maps, all the information in the images is preserved. The images used in image fusion are assumed to be already registered.
The outline of this paper is as follows: In Section 2, the Self-organizing Feature Maps is briefly
introduced. Section 3 describes the algorithm for Code Book generation using Self Organizing
Feature Maps. Section 4 describes the proposed method of Fusion. Section 5 details the
Experimental Analysis and Section 6 gives the conclusion of this paper.
2. SELF-ORGANIZING FEATURE MAP
The Self-organizing Feature Map (SOM) is a special class of Artificial Neural Network based on competitive learning. It is an ingenious Artificial Neural Network built around a one- or two-dimensional lattice of neurons that captures the important features contained in the input. The Kohonen technique creates a network that stores information in such a way that any topological relationships within the training set are maintained. In addition to clustering the data into distinct regions, Kohonen maps place data with similar properties into neighbouring regions.

The primary benefit is that the network learns autonomously, without requiring the system to be well defined. The system does not stop learning but continues to adapt to changing inputs; this plasticity allows it to adapt as the environment changes. A particular advantage over other artificial neural networks is that the system is well suited to parallel computation: the only global knowledge required by each neuron is the current input to the network and the position within the array of the neuron that produced the maximum output.
Kohonen networks are grids of computing elements, which allows the immediate neighbours of a unit to be identified. This is important since, during learning, the weights of computing units and their neighbours are updated. The objective of such a learning approach is that neighbouring units learn to react to closely related signals.
Unlike many other types of network, a Self-organizing Feature Map does not need a target output to be specified. Instead, where the node weights match the input vector, that area of the lattice is selectively optimized to more closely resemble the data of the class to which the input vector belongs. From an initial distribution of random weights, and over many iterations, the Self-organizing Feature Map eventually settles into a map of stable zones. Each zone is effectively a feature classifier, and the output is a type of feature map of the input space. In the trained network, blocks of similar values represent the individual zones. Any new, previously unseen input vector presented to the network will stimulate nodes in the zone with similar weight vectors. Training occurs in several steps and over many iterations.
Each node's weights are first initialized, typically to small standardized random values. A vector is chosen at random from the set of training data and presented to the lattice. Every node is examined to determine which one's weights are most like the input vector; the winning node is commonly known as the Best Matching Unit (BMU). One way to determine the Best Matching Unit is to iterate through all the nodes and calculate the Euclidean distance between each node's weight vector and the current input vector; the node with the weight vector closest to the input vector is tagged as the Best Matching Unit. The radius of the neighbourhood of the Best Matching Unit is then calculated. This value starts large, typically set to the 'radius' of the lattice, and diminishes at each time-step. Any nodes found within this radius are deemed to be inside the Best Matching Unit's neighbourhood, and each neighbouring node's weights are adjusted to make them more like the input vector. The closer a node is to the Best Matching Unit, the more its weights are altered. The procedure is repeated for all input vectors for a number of iterations. A unique feature of the Kohonen learning algorithm is that the area of the neighbourhood shrinks over time to the size of just one node.
Once the radius is known, all the nodes in the lattice are examined to determine whether they lie within it. If a node is found to be within the neighbourhood, its weight vector is adjusted; every node within the Best Matching Unit's neighbourhood (including the Best Matching Unit itself) has its weight vector adjusted.
In a Self-organizing Feature Map, the neurons are placed at the lattice nodes; the lattice may take different shapes: rectangular grid, hexagonal, or even a random topology.
Figure 1. Self Organizing Feature Map Architecture
The neurons become selectively tuned to various input patterns in the course of the competitive learning process. The locations of the neurons so tuned (i.e. the winning neurons) tend to become ordered with respect to each other in such a way that a meaningful coordinate system for different input features is created over the lattice.
SOM Neural Network Training GUI (screenshot)
3. CODE BOOK GENERATION USING SELF-ORGANIZING
FEATURE MAP
A two-dimensional input image pattern is to be mapped onto a two-dimensional spatial organization of neurons located at positions (i, j) on a rectangular lattice of size n x n. Thus, for a set of n x n points on the two-dimensional plane, there are n^2 neurons N_ij, 1 ≤ i, j ≤ n, and for each neuron N_ij there is an associated weight vector denoted W_ij. In the Self-organizing Feature Map, the neuron with minimum distance between its weight vector W_ij and the input vector X is the winner neuron (k, l), and it is identified using the following equation:

||X - W_kl|| = min_{1 ≤ i ≤ n} min_{1 ≤ j ≤ n} ||X - W_ij||   (1)
After the position of the winner neuron (k, l) is located in the two-dimensional plane, the winner neuron and its neighbourhood neurons are adjusted using the Self-organizing Feature Map learning rule:

W_ij(t+1) = W_ij(t) + α (X - W_ij(t))   (2)

where α is Kohonen's learning rate, which controls the stability and the rate of convergence. The winner weight vector reaches equilibrium when W_ij(t+1) = W_ij(t). The neighbourhood of neuron N_ij is chosen arbitrarily; it can be a square or a circular zone around N_ij of arbitrarily chosen radius.
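The winner selection of Eq. (1) and the update rule of Eq. (2) can be sketched in code. The following is a minimal NumPy illustration; the lattice size, input dimensionality, learning rate and square neighbourhood are illustrative assumptions, not the authors' Matlab implementation.

```python
import numpy as np

def som_step(W, x, alpha=0.1, radius=1):
    """One competitive-learning step on an n x n lattice of weight vectors.

    W is an (n, n, d) array of weight vectors and x a (d,) input vector.
    Implements Eq. (1) (minimum-distance winner) and Eq. (2) (move the
    winner and its neighbourhood towards the input by a fraction alpha).
    """
    n = W.shape[0]
    # Eq. (1): the winner (k, l) minimises ||X - W_ij|| over the lattice.
    dist = np.linalg.norm(W - x, axis=2)
    k, l = np.unravel_index(np.argmin(dist), dist.shape)
    # Eq. (2): update every neuron in a square neighbourhood of the winner.
    for i in range(max(0, k - radius), min(n, k + radius + 1)):
        for j in range(max(0, l - radius), min(n, l + radius + 1)):
            W[i, j] += alpha * (x - W[i, j])
    return k, l

rng = np.random.default_rng(0)
W = rng.uniform(0.0, 255.0, size=(4, 4, 2))   # 4 x 4 lattice, 2-D inputs
x = np.array([200.0, 10.0])
before = np.linalg.norm(W - x, axis=2).min()  # winner's distance before update
k, l = som_step(W, x)
after = np.linalg.norm(W[k, l] - x)           # same neuron after the update
```

Since the winner moves a fraction alpha of the way towards the input, its distance to x shrinks after every step, which is what drives the map towards equilibrium.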
Algorithm
1: The image A(i, j) of size 2^N x 2^N is divided into blocks, each of size 2^n x 2^n pixels, n < N.
2: A Self-organizing Feature Map network is created with a codebook consisting of M neurons (m_i: i = 1, 2, ..., M). The M neurons are arranged in a hexagonal lattice, and each neuron has an associated weight vector W_i = [w_i,1, w_i,2, ..., w_i,2^(2n)].
3: The weight vectors of all the neurons in the lattice are initialized with small random values.
4: The learning input patterns (image blocks) are applied to the network. Kohonen's competitive learning process identifies the winning neurons that best match the input blocks; the best-matching criterion is the minimum Euclidean distance between the vectors. Hence, the mapping process Q that identifies the neuron best matching the input block X is determined by the following condition:
Q(X) = arg min_i ||X - W_i||, i = 1, 2, ..., M   (3)
5: At equilibrium, there are m winner neurons per block, i.e. m codewords per block. Hence, the whole image is represented using m codewords.
6: The indices of the obtained codewords are stored; the set of indices of all winner neurons is stored along with the codebook.
7: The reconstructed image blocks, of the same size as the original ones, are restored from the indices of the codewords.
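A compact sketch of Steps 1-7 follows. The toy 8 x 8 image, 2 x 2 blocks, one-dimensional lattice of 16 neurons and the training schedule are illustrative assumptions, not the configuration used in the paper (which arranges the neurons hexagonally).

```python
import numpy as np

def train_codebook(image, block=2, M=16, epochs=20, alpha=0.2, seed=0):
    """Steps 1-6: split the image into blocks, train a small SOM codebook,
    and return (codebook, indices) from which the image can be restored."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    # Step 1: divide the image into block x block tiles, flattened to vectors.
    tiles = (image.reshape(h // block, block, w // block, block)
                  .swapaxes(1, 2).reshape(-1, block * block).astype(float))
    # Steps 2-3: M neurons with random weight vectors over the gray range.
    W = rng.uniform(0.0, 255.0, size=(M, block * block))
    # Step 4: competitive learning; winner = minimum Euclidean distance (Eq. 3).
    for _ in range(epochs):
        for x in tiles:
            q = np.argmin(np.linalg.norm(W - x, axis=1))
            W[q] += alpha * (x - W[q])
    # Steps 5-6: store one codeword index per block, alongside the codebook.
    indices = np.array([np.argmin(np.linalg.norm(W - x, axis=1)) for x in tiles])
    return W, indices

def restore(W, indices, shape, block=2):
    """Step 7: rebuild the image from the codeword indices."""
    h, w = shape
    tiles = W[indices].reshape(h // block, w // block, block, block)
    return tiles.swapaxes(1, 2).reshape(h, w)

img = np.arange(64, dtype=float).reshape(8, 8) * 3   # toy 8 x 8 gradient image
W, idx = train_codebook(img)
rec = restore(W, idx, img.shape)
```

The block-to-tile reshape and its inverse in restore are exact mirrors, so the reconstruction accuracy depends only on how well the trained codewords approximate the tiles.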
4. PROPOSED METHOD OF FUSION
Let us consider two pre-registered 8-bit grayscale images A and B of the same scene or object.

The first image A is decomposed into sub-images and given as input to the Self-organizing Feature Map neural network. In order to preserve all the gray values of the image, the codebook size for compressing an 8-bit image is chosen to be the maximum possible number of gray levels, i.e. 256. Since the weight values after training have to represent the input gray levels, random values ranging from 0 to 255 are assigned as initial weights. When sub-images of size, say, 4 x 4 are used as the input vectors, there are 16 nodes in the input layer and the Kohonen layer consists of 256 nodes arranged in a 16 x 16 array. The input layer takes as input the gray-level values of all 16 pixels of the gray-level block. The weights assigned between node j of the Kohonen layer and the input layer form the weight matrix; for all 256 nodes we have Wji for j = 0, 1, ..., 255 and i = 0, 1, ..., 15. Once the weights are initialized randomly, the network is ready for training.
The image block vectors are mapped against the weight vectors. The neighbourhood is initially chosen to be, say, 5 x 5 and is then reduced gradually to find the best matching node. The Self-organizing Feature Map generates the codebook according to the weight updates. The set of indices of all the winner neurons for the blocks, along with the codebook, is stored for retrieval.

The image A is retrieved by generating the weight vectors of each neuron from the index values, which give the pixel values of the image. For each index value the connected neuron is found, and the weight vector from that neuron to the input layer neurons is generated. The values of the neuron weights are the gray levels of the block, and the gray-level values thus obtained are displayed as pixels. Thus we get image A back in its original form.
Now the Image B is given as input to the Neural Network. Since the features in Images A and B
are the same, when B is given as input to the trained Network the code book for the Image B will
be generated in minimum simulation time without loss in information. Also since the images are
registered the index values which represent the position of pixels will be the same in the two
images. For the same index value, the neuron weight values of the two images are compared and the higher value is selected. The procedure is repeated for all indices until a weight vector of optimal strength is generated. The image retrieved from this optimal weight vector is the fused image, which represents all the gray values at their optimal levels. The procedure can be repeated in various combinations with multiple images until the desired result is achieved.
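The index-wise comparison described above can be sketched as follows. Interpreting "the higher value is selected" as an element-wise maximum over the two codeword weight vectors is one plausible reading, not necessarily the authors' exact rule, and the toy codebooks and indices are invented for illustration.

```python
import numpy as np

def fuse_blocks(W_a, idx_a, W_b, idx_b):
    """For each block position, look up each image's codeword weight vector
    and keep the higher gray value element-wise (hedged interpretation of
    the paper's 'higher value is selected' rule)."""
    assert len(idx_a) == len(idx_b)            # registered: same block positions
    return np.maximum(W_a[idx_a], W_b[idx_b])  # (num_blocks, block_size)

# Toy codebooks and indices for two registered two-block "images".
W_a = np.array([[10.0, 200.0], [50.0, 60.0]])
W_b = np.array([[120.0, 30.0], [40.0, 90.0]])
fused = fuse_blocks(W_a, [0, 1], W_b, [0, 1])
# fused[0] = [120, 200], fused[1] = [50, 90]
```

The fused weight vectors can then be turned back into an image exactly as in the retrieval step for image A, since they have the same shape as the per-block codewords.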
5. EXPERIMENTAL ANALYSIS AND RESULTS
The experimental analysis of the proposed algorithm has been performed using a large number of images with different content, evaluated with the objective parameters discussed in the paper. In order to evaluate the fusion performance, the first experiment is performed on one set of perfectly registered multifocus source images. The dataset consists of multifocus images. The use of different pairs of multifocus images of different scenes, covering text, text with objects and objects only, allows the proposed algorithm to be evaluated thoroughly. The proposed algorithm is simulated in Matlab 7 and is evaluated based on the quality of the fused image obtained. The robustness of the proposed algorithm, i.e. its ability to produce consistently good-quality fused images for different categories of images such as standard images, medical images and satellite images, has also been evaluated. The average computational time to generate the final fused image from source images of size 128 x 128 using the proposed algorithm is 96 seconds. The quality of the fused image relative to the source images has been compared in terms of RMSE and PSNR values. The experimental results obtained for the fusion of grayscale images using the proposed algorithm are shown in Table 1. The source multifocus images and the fused images of different types are shown in Figures 1 to 4. The histograms and image differences for the Lena and Bacteria images are shown in Figures 5 and 6 respectively.
Table 1.a RMSE values

Images          Image A   Image B   Fused Image
Lena            6.2856    6.0162    3.3205
Bacteria        6.8548    6.421     6.227
Satellite map   4.0043    5.325     3.8171
MRI-Head        4.3765    4.9824    1.5163

Table 1.b PSNR values (dB)

Images          Image A   Image B   Fused Image
Lena            30.381    31.701    38.0481
Bacteria        28.194    27.989    32.5841
Satellite map   32.077    31.582    33.2772
MRI-Head        30.787    33.901    45.3924
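The RMSE and PSNR values in Table 1 can be computed with the standard definitions for 8-bit images, with PSNR = 20 log10(255 / RMSE); the paper does not spell out its formulas, so the standard ones are assumed here.

```python
import numpy as np

def rmse(ref, img):
    """Root-mean-square error between two same-size grayscale images."""
    return float(np.sqrt(np.mean((ref.astype(float) - img.astype(float)) ** 2)))

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB, assuming an 8-bit peak of 255."""
    e = rmse(ref, img)
    return float("inf") if e == 0 else 20.0 * np.log10(peak / e)

# Toy check: a 4 x 4 image where one pixel differs by 10 gray levels.
a = np.full((4, 4), 100, dtype=np.uint8)
b = a.copy()
b[0, 0] = 110
# RMSE = sqrt(100 / 16) = 2.5; PSNR = 20 * log10(255 / 2.5) ≈ 40.17 dB
```

Lower RMSE and higher PSNR for the fused image against each source, as in Table 1, indicate that the fusion preserved the in-focus content of both inputs.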
Figure 1.a Figure 1.b Figure 1.c
Figure 1. Lena
Figure 2.a Figure 2.b Figure 2.c
Figure 2. Bacteria
Figure 3.a Figure 3.b Figure 3.c
Figure 3. Satellite Map
Figure 4.a Figure 4.b Figure 4.c
Figure 4. MRI-Head
Images (a) and (b) are the multifocus images; image (c) is the fused version of (a) and (b).
Figure 5. Histograms for the Lena and Bacteria images
Figure 6. The difference images between the fused images and the source images: (g) difference between (a) and (c), (h) difference between (b) and (c), (i) difference between (d) and (f), (j) difference between (e) and (f).
6. CONCLUSIONS
In this paper a simple method for the fusion of images has been proposed. The advantage of the Self-organizing Feature Map is that, after training, the weight vectors not only represent the image block cluster centroids but also preserve two main features: topologically neighbouring blocks in the input vectors are mapped to topologically neighbouring neurons in the codebook, and the distribution of the weight vectors of the neurons reflects the distribution of the input vectors in the input space. Hence there will be no loss of information in the process. The proposed method is dynamic in the sense that, depending on the application, the optimal weight vectors are generated; redundant values as well as noise can be ignored in this process, resulting in shorter simulation times. The method can also be extended to colour images.
REFERENCES
[1] S. Li, J.T. Kwok, Y. Wang, Using the discrete wavelet frame transform to merge Landsat
TM and SPOT panchromatic images, Information Fusion 3 (2002) 17–23.
[2] A. Goshtasby, 2-D and 3-D Image Registration for Medical, Remote Sensing, and Industrial
Applications, Wiley Press,2005.
[3] A. Toet, Hierarchical image fusion, Machine Vision and Applications 3 (1990) 1–11.
[4] H. Li, B.S. Manjunath, S.K. Mitra, Multisensor image fusion using the wavelet transform, Graphical Models and Image Processing 57 (3) (1995) 235–245.
[5] S.G. Nikolov, D.R. Bull, C.N. Canagarajah, M. Halliwell, and P.N.T. Wells. Image fusion using a 3-d
wavelet transform. In Proc. 7th International Conference on Image Processing And Its Applications,
pages 235-239, 1999.
[6] P.J. Burt, A.Rosenfeld (Ed.), Multiresolution Image Processing and Analysis, Springer-
Verlag, Berlin, 1984, pp. 6–35.
[7] P.J. Burt, R.J. Kolczynski, Enhanced image capture through fusion, in International Conference on
Computer Vision, 1993, pp. 173–182.
[8] N. Kingsbury, Image processing with complex wavelets, Silverman, J. Vassilicos (Eds.),
Wavelets: The Key to Intermittent Information, Oxford University Press, 1999, pp.165–185.
[9] S.G. Nikolov, P. Hill, D.R. Bull, C.N. Canagarajah, Wavelets for image fusion, in: A.
Petrosian, F. Meyer (Eds.), Wavelets in Signal and Image Analysis, Kluwer Academic
Publishers, The Netherlands, 2001, pp. 213–244.
[10] J.J. Lewis, R.J. O'Callaghan, S.G. Nikolov, D.R. Bull, C.N. Canagarajah, Region-based image fusion using complex wavelets, in: Proceedings of the 7th International Conference on Information Fusion, Stockholm, Sweden, June 28–July 1, 2004, pp. 555–562.
[11] Z. Zhang, R. Blum, Region-based image fusion scheme for concealed weapon detection, in:
ISIF Fusion Conference, Annapolis, MD, July 2002.
[12] G. Piella, A general framework for multiresolution image fusion: from pixels to regions,
Information Fusion 4 (2003) 259–280.
[13] G. Piella, A region-based multiresolution image fusion algorithm, Proceedings of the 5th
International Conference on Information Fusion, Annapolis, MS, July 8–11, 2002, pp. 1557–1564.
[14] S.G. Nikolov, D.R. Bull, C.N. Canagarajah, 2-D image fusion by multiscale edge graph combination, in: Proceedings of the 3rd International Conference on Information Fusion, Paris, France, July 10–13, 2000, pp. 16–22.
[15] G. Piella, A general framework for multiresolution image fusion from pixels to regions,
Information Fusion 4 (2003) 259–280.
[16] H. Wei, Z.L. Jing, Pattern Recognition Letters 28 (4) (2007) 493.
[17] S.T. Li, J.T. Kwok, Y.N. Wang, Pattern Recognition Letters 23 (8) (2002) 985.
[18] V. Aslanta, R. Kurban, Expert Systems with Applications 37 (12) (2010) 8861.
[19] Y. Chai, H.F. Li, M.Y. Guo, Optics Communications 248 (5) (2011) 1146.
[20] S.T. Li, B. Yang, Pattern Recognition Letters 29 (9) (2008) 1295
[21] Q. Zhang, B.L. Guo, Signal Processing 89 (2009) 1334.
[22] B. Yang, S.T. Li, IEEE Transactions on Instrumentation and Measurement 59 (4) (2010) 884.
[23] H. Li, Y. Chai, H. Yin, G. Liu, Multifocus image fusion and denoising scheme based on homogeneity similarity, Optics Communications, September 2011.
ACKNOWLEDGMENTS
The authors thank the Management of SNR Sons Institutions for allowing us to utilize their resources for this research work. Our sincere thanks to our Principal & Secretary, Dr. H. Balakrishnan M.Com., M.Phil., Ph.D., for his support and encouragement of our research work. G. Paramasivam gratefully thanks his guide, Dr. Anna Saro Vijendran MCA., M.Phil., Ph.D., Director of MCA, SNR Sons College, Coimbatore-6, for her valuable guidance and for providing many suggestions and solutions at critical points in this research. The authors thank the Associate Editor and reviewers for their encouragement and valued comments, which helped in improving the quality of the paper.
AUTHOR’S BIOGRAPHY
Dr. Anna Saro Vijendran received the Ph.D. degree in Computer Science from
Mother Teresa Womens University, Tamilnadu, India, in 2009. She has 20 years of
experience in teaching. She is currently working as the Director, MCA in SNR Sons
College, Coimbatore, Tamilnadu, India. She has presented and published many papers
in International and National conferences. She has authored and co-authored more than
30 refereed papers. Her professional interests are Image Processing, Image fusion,
Data mining and Artificial Neural Networks.
G. Paramasivam is a part-time Ph.D. research scholar. He is currently an Assistant Professor at SNR Sons College, Coimbatore, Tamilnadu, India. He has 10 years of teaching experience. His technical interests include Image Fusion and Artificial Neural Networks.
IJSRD
 
Optimal Coefficient Selection For Medical Image Fusion
Optimal Coefficient Selection For Medical Image Fusion
IJERA Editor
 
Multiresolution SVD based Image Fusion
Multiresolution SVD based Image Fusion
IOSRJVSP
 
Quality Assessment of Pixel-Level Image Fusion Using Fuzzy Logic
Quality Assessment of Pixel-Level Image Fusion Using Fuzzy Logic
ijsc
 
Quality Assessment of Gray and Color Images through Image Fusion Technique
Quality Assessment of Gray and Color Images through Image Fusion Technique
IJEEE
 
Review on Optimal image fusion techniques and Hybrid technique
Review on Optimal image fusion techniques and Hybrid technique
IRJET Journal
 
An Improved Image Fusion Scheme Based on Markov Random Fields with Image Enha...
An Improved Image Fusion Scheme Based on Markov Random Fields with Image Enha...
Editor IJCATR
 
RADAR Image Fusion Using Wavelet Transform
RADAR Image Fusion Using Wavelet Transform
INFOGAIN PUBLICATION
 
Property based fusion for multifocus images
Property based fusion for multifocus images
IAEME Publication
 
Multifocus_IRANIANCEE.2014.6999819
Multifocus_IRANIANCEE.2014.6999819
Iman Roosta
 
COLOUR IMAGE REPRESENTION OF MULTISPECTRAL IMAGE FUSION
COLOUR IMAGE REPRESENTION OF MULTISPECTRAL IMAGE FUSION
acijjournal
 
COLOUR IMAGE REPRESENTION OF MULTISPECTRAL IMAGE FUSION
COLOUR IMAGE REPRESENTION OF MULTISPECTRAL IMAGE FUSION
acijjournal
 
Different Image Fusion Techniques –A Critical Review
Different Image Fusion Techniques –A Critical Review
IJMER
 
PCA & CS based fusion for Medical Image Fusion
PCA & CS based fusion for Medical Image Fusion
IJMTST Journal
 
International Journal of Engineering Research and Development
International Journal of Engineering Research and Development
IJERD Editor
 
Inflammatory Conditions Mimicking Tumours In Calabar: A 30 Year Study (1978-2...
Inflammatory Conditions Mimicking Tumours In Calabar: A 30 Year Study (1978-2...
IOSR Journals
 
QUALITY ASSESSMENT OF PIXEL-LEVEL IMAGE FUSION USING FUZZY LOGIC
QUALITY ASSESSMENT OF PIXEL-LEVEL IMAGE FUSION USING FUZZY LOGIC
ijsc
 
A Novel Color Image Fusion for Multi Sensor Night Vision Images
A Novel Color Image Fusion for Multi Sensor Night Vision Images
Editor IJCATR
 
A novel approach to Image Fusion using combination of Wavelet Transform and C...
A novel approach to Image Fusion using combination of Wavelet Transform and C...
IJSRD
 
Optimal Coefficient Selection For Medical Image Fusion
Optimal Coefficient Selection For Medical Image Fusion
IJERA Editor
 
Multiresolution SVD based Image Fusion
Multiresolution SVD based Image Fusion
IOSRJVSP
 
Quality Assessment of Pixel-Level Image Fusion Using Fuzzy Logic
Quality Assessment of Pixel-Level Image Fusion Using Fuzzy Logic
ijsc
 
Quality Assessment of Gray and Color Images through Image Fusion Technique
Quality Assessment of Gray and Color Images through Image Fusion Technique
IJEEE
 
Review on Optimal image fusion techniques and Hybrid technique
Review on Optimal image fusion techniques and Hybrid technique
IRJET Journal
 
An Improved Image Fusion Scheme Based on Markov Random Fields with Image Enha...
An Improved Image Fusion Scheme Based on Markov Random Fields with Image Enha...
Editor IJCATR
 

More from ijistjournal (20)

A SURVEY OF BIG DATA ANALYTICS..........
A SURVEY OF BIG DATA ANALYTICS..........
ijistjournal
 
7th International Conference on Machine Learning & Applications (CMLA 2025)
7th International Conference on Machine Learning & Applications (CMLA 2025)
ijistjournal
 
International Journal of Information Sciences and Techniques (IJIST)
International Journal of Information Sciences and Techniques (IJIST)
ijistjournal
 
Call for Papers - International Journal of Information Sciences and Technique...
Call for Papers - International Journal of Information Sciences and Technique...
ijistjournal
 
6th International Conference on Natural Language Computing Advances (NLCA 2025)
6th International Conference on Natural Language Computing Advances (NLCA 2025)
ijistjournal
 
CLOUD COMPUTING – KEY PILLAR FOR DIGITAL INDIA
CLOUD COMPUTING – KEY PILLAR FOR DIGITAL INDIA
ijistjournal
 
Online Paper Submission - International Journal of Information Sciences and T...
Online Paper Submission - International Journal of Information Sciences and T...
ijistjournal
 
STUDY OF NAMED ENTITY RECOGNITION FOR INDIAN LANGUAGES
STUDY OF NAMED ENTITY RECOGNITION FOR INDIAN LANGUAGES
ijistjournal
 
6th International Conference on Advances in Artificial Intelligence Technique...
6th International Conference on Advances in Artificial Intelligence Technique...
ijistjournal
 
Submit Your Research Articles - International Journal of Information Sciences...
Submit Your Research Articles - International Journal of Information Sciences...
ijistjournal
 
AN OVERVIEW OF CLOUD COMPUTING FOR E-LEARNING WITH ITS KEY BENEFITS
AN OVERVIEW OF CLOUD COMPUTING FOR E-LEARNING WITH ITS KEY BENEFITS
ijistjournal
 
6th International Conference on Artificial Intelligence and Machine Learning ...
6th International Conference on Artificial Intelligence and Machine Learning ...
ijistjournal
 
Call for Papers - International Journal of Information Sciences and Technique...
Call for Papers - International Journal of Information Sciences and Technique...
ijistjournal
 
7th International Conference on Machine Learning & Applications (CMLA 2025)
7th International Conference on Machine Learning & Applications (CMLA 2025)
ijistjournal
 
PHISHING DETECTION IN IMS USING DOMAIN ONTOLOGY AND CBA – AN INNOVATIVE RULE ...
PHISHING DETECTION IN IMS USING DOMAIN ONTOLOGY AND CBA – AN INNOVATIVE RULE ...
ijistjournal
 
International Journal of Information Sciences and Techniques (IJIST)
International Journal of Information Sciences and Techniques (IJIST)
ijistjournal
 
Online Paper Submission - International Journal of Information Sciences and T...
Online Paper Submission - International Journal of Information Sciences and T...
ijistjournal
 
FUZZY BASED HYPERSPECTRAL IMAGE SEGMENTATION USING SUBPIXEL DETECTION
FUZZY BASED HYPERSPECTRAL IMAGE SEGMENTATION USING SUBPIXEL DETECTION
ijistjournal
 
Submit Your Research Articles - 6th International Conference on Natural Langu...
Submit Your Research Articles - 6th International Conference on Natural Langu...
ijistjournal
 
Call for Papers - International Journal of Information Sciences and Technique...
Call for Papers - International Journal of Information Sciences and Technique...
ijistjournal
 
A SURVEY OF BIG DATA ANALYTICS..........
A SURVEY OF BIG DATA ANALYTICS..........
ijistjournal
 
7th International Conference on Machine Learning & Applications (CMLA 2025)
7th International Conference on Machine Learning & Applications (CMLA 2025)
ijistjournal
 
International Journal of Information Sciences and Techniques (IJIST)
International Journal of Information Sciences and Techniques (IJIST)
ijistjournal
 
Call for Papers - International Journal of Information Sciences and Technique...
Call for Papers - International Journal of Information Sciences and Technique...
ijistjournal
 
6th International Conference on Natural Language Computing Advances (NLCA 2025)
6th International Conference on Natural Language Computing Advances (NLCA 2025)
ijistjournal
 
CLOUD COMPUTING – KEY PILLAR FOR DIGITAL INDIA
CLOUD COMPUTING – KEY PILLAR FOR DIGITAL INDIA
ijistjournal
 
Online Paper Submission - International Journal of Information Sciences and T...
Online Paper Submission - International Journal of Information Sciences and T...
ijistjournal
 
STUDY OF NAMED ENTITY RECOGNITION FOR INDIAN LANGUAGES
STUDY OF NAMED ENTITY RECOGNITION FOR INDIAN LANGUAGES
ijistjournal
 
6th International Conference on Advances in Artificial Intelligence Technique...
6th International Conference on Advances in Artificial Intelligence Technique...
ijistjournal
 
Submit Your Research Articles - International Journal of Information Sciences...
Submit Your Research Articles - International Journal of Information Sciences...
ijistjournal
 
AN OVERVIEW OF CLOUD COMPUTING FOR E-LEARNING WITH ITS KEY BENEFITS
AN OVERVIEW OF CLOUD COMPUTING FOR E-LEARNING WITH ITS KEY BENEFITS
ijistjournal
 
6th International Conference on Artificial Intelligence and Machine Learning ...
6th International Conference on Artificial Intelligence and Machine Learning ...
ijistjournal
 
Call for Papers - International Journal of Information Sciences and Technique...
Call for Papers - International Journal of Information Sciences and Technique...
ijistjournal
 
7th International Conference on Machine Learning & Applications (CMLA 2025)
7th International Conference on Machine Learning & Applications (CMLA 2025)
ijistjournal
 
PHISHING DETECTION IN IMS USING DOMAIN ONTOLOGY AND CBA – AN INNOVATIVE RULE ...
PHISHING DETECTION IN IMS USING DOMAIN ONTOLOGY AND CBA – AN INNOVATIVE RULE ...
ijistjournal
 
International Journal of Information Sciences and Techniques (IJIST)
International Journal of Information Sciences and Techniques (IJIST)
ijistjournal
 
Online Paper Submission - International Journal of Information Sciences and T...
Online Paper Submission - International Journal of Information Sciences and T...
ijistjournal
 
FUZZY BASED HYPERSPECTRAL IMAGE SEGMENTATION USING SUBPIXEL DETECTION
FUZZY BASED HYPERSPECTRAL IMAGE SEGMENTATION USING SUBPIXEL DETECTION
ijistjournal
 
Submit Your Research Articles - 6th International Conference on Natural Langu...
Submit Your Research Articles - 6th International Conference on Natural Langu...
ijistjournal
 
Call for Papers - International Journal of Information Sciences and Technique...
Call for Papers - International Journal of Information Sciences and Technique...
ijistjournal
 
Ad

Recently uploaded (20)

IntroSlides-June-GDG-Cloud-Munich community [email protected]
IntroSlides-June-GDG-Cloud-Munich community [email protected]
Luiz Carneiro
 
VARICELLA VACCINATION: A POTENTIAL STRATEGY FOR PREVENTING MULTIPLE SCLEROSIS
VARICELLA VACCINATION: A POTENTIAL STRATEGY FOR PREVENTING MULTIPLE SCLEROSIS
ijab2
 
Introduction to Natural Language Processing - Stages in NLP Pipeline, Challen...
Introduction to Natural Language Processing - Stages in NLP Pipeline, Challen...
resming1
 
How Binning Affects LED Performance & Consistency.pdf
How Binning Affects LED Performance & Consistency.pdf
Mina Anis
 
Stay Safe Women Security Android App Project Report.pdf
Stay Safe Women Security Android App Project Report.pdf
Kamal Acharya
 
Learning – Types of Machine Learning – Supervised Learning – Unsupervised UNI...
Learning – Types of Machine Learning – Supervised Learning – Unsupervised UNI...
23Q95A6706
 
Microwatt: Open Tiny Core, Big Possibilities
Microwatt: Open Tiny Core, Big Possibilities
IBM
 
Pavement and its types, Application of rigid and Flexible Pavements
Pavement and its types, Application of rigid and Flexible Pavements
Sakthivel M
 
Proposal for folders structure division in projects.pdf
Proposal for folders structure division in projects.pdf
Mohamed Ahmed
 
machine learning is a advance technology
machine learning is a advance technology
ynancy893
 
Complete guidance book of Asp.Net Web API
Complete guidance book of Asp.Net Web API
Shabista Imam
 
Decoding Kotlin - Your Guide to Solving the Mysterious in Kotlin - Devoxx PL ...
Decoding Kotlin - Your Guide to Solving the Mysterious in Kotlin - Devoxx PL ...
João Esperancinha
 
Quiz on EV , made fun and progressive !!!
Quiz on EV , made fun and progressive !!!
JaishreeAsokanEEE
 
Machine Learning - Classification Algorithms
Machine Learning - Classification Algorithms
resming1
 
最新版美国圣莫尼卡学院毕业证(SMC毕业证书)原版定制
最新版美国圣莫尼卡学院毕业证(SMC毕业证书)原版定制
Taqyea
 
grade 9 science q1 quiz.pptx science quiz
grade 9 science q1 quiz.pptx science quiz
norfapangolima
 
Low Power SI Class E Power Amplifier and Rf Switch for Health Care
Low Power SI Class E Power Amplifier and Rf Switch for Health Care
ieijjournal
 
Fundamentals of Digital Design_Class_21st May - Copy.pptx
Fundamentals of Digital Design_Class_21st May - Copy.pptx
drdebarshi1993
 
Cadastral Maps
Cadastral Maps
Google
 
Industry 4.o the fourth revolutionWeek-2.pptx
Industry 4.o the fourth revolutionWeek-2.pptx
KNaveenKumarECE
 
VARICELLA VACCINATION: A POTENTIAL STRATEGY FOR PREVENTING MULTIPLE SCLEROSIS
VARICELLA VACCINATION: A POTENTIAL STRATEGY FOR PREVENTING MULTIPLE SCLEROSIS
ijab2
 
Introduction to Natural Language Processing - Stages in NLP Pipeline, Challen...
Introduction to Natural Language Processing - Stages in NLP Pipeline, Challen...
resming1
 
How Binning Affects LED Performance & Consistency.pdf
How Binning Affects LED Performance & Consistency.pdf
Mina Anis
 
Stay Safe Women Security Android App Project Report.pdf
Stay Safe Women Security Android App Project Report.pdf
Kamal Acharya
 
Learning – Types of Machine Learning – Supervised Learning – Unsupervised UNI...
Learning – Types of Machine Learning – Supervised Learning – Unsupervised UNI...
23Q95A6706
 
Microwatt: Open Tiny Core, Big Possibilities
Microwatt: Open Tiny Core, Big Possibilities
IBM
 
Pavement and its types, Application of rigid and Flexible Pavements
Pavement and its types, Application of rigid and Flexible Pavements
Sakthivel M
 
Proposal for folders structure division in projects.pdf
Proposal for folders structure division in projects.pdf
Mohamed Ahmed
 
machine learning is a advance technology
machine learning is a advance technology
ynancy893
 
Complete guidance book of Asp.Net Web API
Complete guidance book of Asp.Net Web API
Shabista Imam
 
Decoding Kotlin - Your Guide to Solving the Mysterious in Kotlin - Devoxx PL ...
Decoding Kotlin - Your Guide to Solving the Mysterious in Kotlin - Devoxx PL ...
João Esperancinha
 
Quiz on EV , made fun and progressive !!!
Quiz on EV , made fun and progressive !!!
JaishreeAsokanEEE
 
Machine Learning - Classification Algorithms
Machine Learning - Classification Algorithms
resming1
 
最新版美国圣莫尼卡学院毕业证(SMC毕业证书)原版定制
最新版美国圣莫尼卡学院毕业证(SMC毕业证书)原版定制
Taqyea
 
grade 9 science q1 quiz.pptx science quiz
grade 9 science q1 quiz.pptx science quiz
norfapangolima
 
Low Power SI Class E Power Amplifier and Rf Switch for Health Care
Low Power SI Class E Power Amplifier and Rf Switch for Health Care
ieijjournal
 
Fundamentals of Digital Design_Class_21st May - Copy.pptx
Fundamentals of Digital Design_Class_21st May - Copy.pptx
drdebarshi1993
 
Cadastral Maps
Cadastral Maps
Google
 
Industry 4.o the fourth revolutionWeek-2.pptx
Industry 4.o the fourth revolutionWeek-2.pptx
KNaveenKumarECE
 
Ad

ADOPTING AND IMPLEMENTATION OF SELF ORGANIZING FEATURE MAP FOR IMAGE FUSION

Experimental results demonstrate that, for multifocus image fusion, the proposed method performs better than several popular image fusion methods in both subjective and objective quality.

KEYWORDS

Image Fusion, Image Segmentation, Self Organizing Feature Maps, Code Book Generation, Multifocus Images, Gray Scale Images

1. INTRODUCTION

Image fusion has become an important subarea of image processing. Multiple images of one object or scene can be taken from one or more sensors, and these images usually contain complementary information. Image fusion is the process of combining information from two or more images of a scene into a single composite image that is more informative and more suitable for visual perception or computer processing. The objective of image fusion is to reduce uncertainty and minimize redundancy in the output while maximizing the information relevant to a particular application or task. Image fusion has become a common term within medical diagnostics and treatment. Given the same set of input images, different fused images may be created depending on the specific application and on what is considered relevant information. There
are several benefits to image fusion: wider spatial and temporal coverage, decreased uncertainty, improved reliability and increased robustness of system performance. A single sensor often cannot produce a complete representation of a scene. Successful image fusion significantly reduces the amount of data to be viewed or processed without significantly reducing the amount of relevant information.

Image fusion algorithms can be categorized into pixel, feature and symbolic levels. Pixel-level algorithms work either in the spatial domain [1, 2] or in the transform domain [3, 4, 5]. Although pixel-level fusion is a local operation, transform-domain algorithms create the fused image globally: changing a single coefficient in the transformed fused image changes all image values in the spatial domain. As a result, while enhancing properties in some image areas, undesirable artifacts may be created in other areas. Algorithms that work in the spatial domain can focus on desired image areas while limiting change elsewhere.

Multiresolution analysis is a popular method in pixel-level fusion. Burt [6] and Kolczynski [7] used filters with increasing spatial extent to generate a sequence of images from each input image, separating information observed at different resolutions. At each position in the transform image, the value in the pyramid showing the highest saliency was taken, and an inverse transform of the composite image was used to create the fused image. In a similar manner, various wavelet transforms can be used to fuse images. The discrete wavelet transform (DWT) has been used in many applications to fuse images [4]. The dual-tree complex wavelet transform (DT-CWT), first proposed by Kingsbury [8], was improved by Nikolov [9] and Lewis [10] to outperform most other gray-scale image fusion methods.
Feature-based algorithms typically segment the images into regions and fuse the regions using their various properties [10-12]; they are usually less sensitive to signal-level noise [13]. Toet [3] first decomposed each input image into a set of perceptually relevant patterns, which were then combined to create a composite image containing all relevant patterns. A mid-level fusion algorithm was developed by Piella [12, 15], where the images are first segmented and the obtained regions are then used to guide the multiresolution analysis. Recently, methods have been proposed to fuse multifocus source images using divided blocks or segmented regions instead of single pixels [16, 17, 18]. All segmented region-based methods depend strongly on the segmentation algorithm. Unfortunately, the segmentation algorithms, which are of vital importance to fusion quality, are complicated and time-consuming.

The common transform approaches for fusion of multifocus images include the discrete wavelet transform (DWT) [19], the curvelet transform [20] and the nonsubsampled contourlet transform (NSCT) [21]. Recently, a multifocus image fusion and restoration algorithm based on sparse representation was proposed by Yang and Li [22]. A multifocus image fusion method based on homogeneity similarity and focused-region detection was proposed in 2011 by Huafeng Li, Yi Chai, Hongpeng Yin and Guoquan Liu [23]. Most traditional image fusion methods assume that the source images are noise free, and they perform well when this assumption is satisfied. Traditional noisy-image fusion methods usually denoise the source images first and then fuse the denoised images.
The multifocus image fusion and restoration algorithm proposed by Yang and Li [22] performs well with both noisy and noise-free images, and outperforms traditional fusion methods in terms of fusion quality and noise reduction in the fused output. However, this scheme is complicated and time-consuming, especially when the source images are noise free. The image fusion algorithm based on homogeneity similarity proposed by Huafeng Li, Yi Chai, Hongpeng Yin and Guoquan Liu [23] aims at solving the fusion problem for clean and noisy multifocus images. Further, in any region-based fusion algorithm the fusion results are affected by the performance of the segmentation algorithm. The various segmentation algorithms are
based on thresholding and clustering, but the partition criteria used by these algorithms often generate undesired segmented regions.

To overcome these problems, this paper proposes a new method of segmentation using Self-organizing Feature Maps, which in turn allows images to be fused dynamically to the desired degree of information retrieval depending on the application. The proposed algorithm is compatible with any type of image, noisy or clean. The method is simple, and since the mapping of the image is carried out by Self-organizing Feature Maps, all the information in the images is preserved. The images used in image fusion should already be registered.

The outline of this paper is as follows: Section 2 briefly introduces Self-organizing Feature Maps. Section 3 describes the algorithm for codebook generation using Self-organizing Feature Maps. Section 4 describes the proposed method of fusion. Section 5 details the experimental analysis, and Section 6 concludes the paper.

2. SELF-ORGANIZING FEATURE MAP

The Self-organizing Feature Map (SOM) is a special class of Artificial Neural Network based on competitive learning. It is built around a one- or two-dimensional lattice of neurons for capturing the important features contained in the input. The Kohonen technique creates a network that stores information in such a way that any topological relationships within the training set are maintained. In addition to clustering the data into distinct regions, Kohonen maps put regions of similar properties to good use. The primary benefit is that the network learns autonomously, without requiring that the system be well defined.
The system does not stop learning but continues to adapt to changing inputs; this plasticity allows it to adapt as the environment changes. A particular advantage over other artificial neural networks is that the system is well suited to parallel computation: the only global knowledge required by each neuron is the current input to the network and the position within the array of the neuron that produced the maximum output.

Kohonen networks are grids of computing elements, which allows the immediate neighbours of a unit to be identified. This is important, since during learning the weights of computing units and their neighbours are updated. The objective of such a learning approach is that neighbouring units learn to react to closely related signals.

Unlike many other types of network, a Self-organizing Feature Map does not need a target output to be specified. Instead, where the node weights match the input vector, that area of the lattice is selectively optimized to more closely resemble the data for the class of which the input vector is a member. From an initial distribution of random weights, and over many iterations, the Self-organizing Feature Map eventually settles into a map of stable zones, each of which is effectively a feature classifier. The output is a feature map of the input space. In the trained network, blocks of similar values represent the individual zones. Any new, previously unseen input vector presented to the network will stimulate nodes in the zone with similar weight vectors.

Training occurs in several steps and over many iterations. Each node's weights are initialized. A vector is chosen at random from the set of training data and presented to the lattice. Every node is examined to determine which node's weights are most like the input vector; the winning node is commonly known as the Best Matching Unit (BMU). The radius of the neighbourhood of the Best Matching Unit is then calculated. This is a value that
starts large, typically set to the 'radius' of the lattice, and diminishes at each time-step. Any nodes found within this radius are deemed to be inside the Best Matching Unit's neighbourhood, and each neighbouring node's weights are adjusted to make them more like the input vector. The closer a node is to the Best Matching Unit, the more its weights are altered. The procedure is repeated for all input vectors for a number of iterations.

Prior to training, each node's weights must be initialized, typically to small standardized random values. To determine the Best Matching Unit, one method is to iterate through all the nodes and calculate the Euclidean distance between each node's weight vector and the current input vector; the node with the weight vector closest to the input vector is tagged as the Best Matching Unit. After the Best Matching Unit has been determined, the next step is to calculate which of the other nodes lie within its neighbourhood; all these nodes will have their weight vectors altered in the next step. A unique feature of the Kohonen learning algorithm is that the area of the neighbourhood shrinks over time to the size of just one node. Once the radius is known, all the nodes in the lattice are checked to determine whether they lie within it. Every node found within the Best Matching Unit's neighbourhood (including the Best Matching Unit itself) has its weight vector adjusted.

In a Self-organizing Feature Map, the neurons are placed at the lattice nodes; the lattice may take different shapes: rectangular grid, hexagonal, or even a random topology.

Figure 1. Self Organizing Feature Map Architecture

The neurons become selectively tuned to various input patterns in the course of the competitive learning process.
The locations of the neurons so tuned (i.e. the winning neurons) tend to become ordered with respect to each other in such a way that a meaningful coordinate system for different input features is created over the lattice.
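The BMU search and neighbourhood update described above can be sketched as follows. This is a minimal illustration in Python/NumPy, not the authors' implementation; the function name, the fixed circular radius and the constant learning rate are simplifying assumptions (in full training both decay over time).

```python
import numpy as np

def train_som_step(weights, x, lr, radius):
    """One Kohonen training step on a 2-D lattice.

    weights: (n, n, d) array of weight vectors; x: (d,) input vector.
    lr: learning rate alpha; radius: current neighbourhood radius.
    Returns the lattice position (k, l) of the Best Matching Unit.
    """
    # Best Matching Unit: lattice node with minimum Euclidean distance to x
    d2 = np.sum((weights - x) ** 2, axis=2)
    k, l = np.unravel_index(np.argmin(d2), d2.shape)

    # Every node within the neighbourhood radius (including the BMU)
    n = weights.shape[0]
    ii, jj = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    mask = (ii - k) ** 2 + (jj - l) ** 2 <= radius ** 2

    # Move the selected nodes towards the input vector
    weights[mask] += lr * (x - weights[mask])
    return (int(k), int(l))
```

In full training the step would be repeated for all input blocks over many iterations while shrinking `radius` towards one node and decaying `lr`.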
SOM Neural Network Training GUI

3. CODE BOOK GENERATION USING SELF-ORGANIZING FEATURE MAP

Consider a two-dimensional input image pattern to be mapped onto a two-dimensional spatial organization of neurons located at positions (i, j) on a rectangular lattice of size n x n. Thus, for a set of n x n points on the two-dimensional plane, there are n^2 neurons Nij, 1 ≤ i, j ≤ n, and each neuron Nij has an associated weight vector Wij. In a Self-organizing Feature Map, the neuron whose weight vector has the minimum distance to the input vector X is the winner neuron (k, l), identified using the following equation:

||X - Wkl|| = min 1≤i,j≤n ||X - Wij|| (1)

After the position of the winner neuron (k, l) is located in the two-dimensional plane, the winner neuron and its neighbourhood neurons are adjusted using the Self-organizing Feature Map learning rule:

Wij(t+1) = Wij(t) + α (X - Wij(t)) (2)

where α is Kohonen's learning rate, which controls the stability and the rate of convergence. The winner weight vector reaches equilibrium when Wij(t+1) = Wij(t). The neighbourhood of neuron Nij is chosen arbitrarily; it can be a square or a circular zone around Nij of arbitrarily chosen radius.

Algorithm

1: The image A(i, j) of size 2^N x 2^N is divided into blocks, each of size 2^n x 2^n pixels, n < N.

2: A Self-organizing Feature Map network is created with a codebook consisting of M
neurons (mi : i = 1, 2, ..., M). The M neurons are arranged in a hexagonal lattice, and each neuron has an associated weight vector Wi = [wi1 wi2 ... wi,2^2n].

3: The weight vectors of all the neurons in the lattice are initialized with small random values.

4: The learning input patterns (image blocks) are applied to the network. Kohonen's competitive learning process identifies the winning neurons that best match the input blocks. The best-matching criterion is the minimum Euclidean distance between the vectors. Hence, the mapping process Q that identifies the neuron best matching the input block X is determined by applying the following condition:

Q(X) = arg min_i ||X - Wi||, i = 1, 2, ..., M (3)

5: At equilibrium, there are m winner neurons per block, or m codewords per block. Hence, the whole image is represented using m codewords.

6: The indices of the obtained codewords are stored; the set of indices of all winner neurons is kept together with the codebook.

7: The reconstructed image blocks, of the same size as the originals, are restored from the indices of the codewords.

4. PROPOSED METHOD OF FUSION

Consider two pre-registered 8-bit grayscale images A and B of the same scene or object. The first image A is decomposed into sub-images and given as input to the Self-organizing Feature Map neural network. In order to preserve all the gray values of the image, the codebook size for compressing an 8-bit image is chosen to be the maximum possible number of gray levels, namely 256. Since the weight values after training must represent the input gray levels, random values ranging from 0 to 255 are assigned as initial weights. When sub-images of size, say, 4 x 4 are taken as the input vector, there are 16 nodes in the input layer, and the Kohonen layer consists of 256 nodes arranged in a 16 x 16 array.
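Steps 4 to 7 of the codebook algorithm above amount to vector quantization of the image blocks against the trained codebook, followed by reconstruction from the stored indices. A minimal sketch with hypothetical helper names (the trained codebook is assumed given, and blocks are assumed extracted in raster order):

```python
import numpy as np

def quantize_blocks(blocks, codebook):
    """Steps 4-6: map each image block to its nearest codeword.

    blocks: (num_blocks, block_dim); codebook: (M, block_dim).
    Returns the index of the winning neuron for every block.
    """
    # Squared Euclidean distance from every block to every codeword
    d2 = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return np.argmin(d2, axis=1)

def reconstruct(indices, codebook, image_shape, block_shape):
    """Step 7: rebuild the image from the stored indices and the codebook."""
    bh, bw = block_shape
    H, W = image_shape
    out = np.empty((H, W))
    k = 0
    # Place each codeword back at its block position, in raster order
    for r in range(0, H, bh):
        for c in range(0, W, bw):
            out[r:r+bh, c:c+bw] = codebook[indices[k]].reshape(bh, bw)
            k += 1
    return out
```

With a codebook of 256 codewords per 8-bit image, as chosen above, every gray level can in principle be represented, which is why the mapping is described as lossless.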
The input layer takes as input the gray-level values of all 16 pixels of the gray-level block. The weights between node j of the Kohonen layer and the input layer form the weight matrix: for all 256 nodes we have Wji for j = 0, 1, ..., 255 and i = 0, 1, ..., 15. Once the weights are initialized randomly, the network is ready for training. The image block vectors are mapped to the weight vectors. The neighbourhood is initially chosen, say 5 x 5, and then reduced gradually to find the best-matching node. The Self-organizing Feature Map generates the codebook according to the weight updates. The set of indices of all the winner neurons for the blocks, along with the codebook, is stored for retrieval.

Image A is retrieved by generating the weight vector of each neuron from the index values, which gives the pixel values of the image. For each index value the connected neuron is found, the weight vector from that neuron to the input layer is generated, and the values of the neuron weights give the gray levels of the block. The gray-level values thus obtained are displayed as pixels, and image A is recovered in its original form. Now image B is given as input to the neural network. Since the features in images A and B are the same, when B is presented to the trained network, the codebook for image B is generated in minimum simulation time without loss of information. Also, since the images are registered, the index values, which represent the positions of pixels, will be the same in the two
images. For the same index value, the neuron weights of the two images are compared and the higher value is selected. The procedure is repeated for all the indices until a weight vector of optimal strength is generated. The image retrieved from this optimal weight vector is the fused image, which represents all the gray values at their optimal levels. The procedure can be repeated in various combinations with multiple images until the desired result is achieved.

5. EXPERIMENTAL ANALYSIS AND RESULTS

The experimental analysis of the proposed algorithm has been performed using a large number of images of different content and the different objective parameters discussed in the paper. In order to evaluate the fusion performance, the first experiment is performed on one set of perfectly registered multifocus source images. The dataset consists of multifocus images. The use of different pairs of multifocus images of different scenes, covering the categories text, text+object and objects only, allows the proposed algorithm to be evaluated in a true sense. The proposed algorithm is simulated in Matlab 7 and is evaluated based on the quality of the fused image obtained. The robustness of the proposed algorithm, that is, its ability to produce consistently good-quality fused images for different categories of images such as standard images, medical images and satellite images, has also been evaluated. The average computational time to generate the final fused image from source images of size 128 x 128 using the proposed algorithm is 96 seconds. The quality of the fused image relative to the source images has been compared in terms of RMSE and PSNR values. The experimental results obtained for the fusion of grayscale images using the proposed algorithm are shown in Table 1.
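The fusion rule of Section 4, in which the registered images share the same block indices and the higher neuron weight is kept for each index, can be sketched as follows. The function name and array shapes are illustrative assumptions; an element-wise maximum over the two codebooks' codewords stands in for the "optimal strength" selection.

```python
import numpy as np

def fuse_codebooks(idx, W_a, W_b, image_shape, block=4):
    """Fuse two images from their shared block indices and per-image
    SOM codebooks by keeping the higher weight value per index."""
    # For each block index, compare the codewords of image A and
    # image B element-wise and keep the higher grey value.
    fused_blocks = np.maximum(W_a[idx], W_b[idx])
    h, w = image_shape
    bh, bw = h // block, w // block
    # Reassemble the fused block vectors into a full image.
    return (fused_blocks.reshape(bh, bw, block, block)
            .swapaxes(1, 2)
            .reshape(h, w))
```

Repeating this pairwise step with further codebooks extends the rule to the multi-image case described above.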
The source multifocus images and the fused images of different types are shown in Figures 1 to 4. The histograms and image differences for the Lena and Bacteria images are shown in Figures 5 and 6 respectively.

Table 1.a RMSE

Images           Image A    Image B    Fused Image
Lena             6.2856     6.0162     3.3205
Bacteria         6.8548     6.421      6.227
Satellite map    4.0043     5.325      3.8171
MRI-Head         4.3765     4.9824     1.5163

Table 1.b PSNR

Images           Image A    Image B    Fused Image
Lena             30.381     31.701     38.0481
Bacteria         28.194     27.989     32.5841
Satellite map    32.077     31.582     33.2772
MRI-Head         30.787     33.901     45.3924
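The RMSE and PSNR figures in Table 1 follow the standard definitions for 8-bit images, which can be computed as below. This is a generic sketch, not the authors' evaluation script; the function names are assumptions.

```python
import numpy as np

def rmse(ref, img):
    """Root-mean-square error between two equally sized images."""
    diff = ref.astype(float) - img.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB; peak = 255 for 8-bit images."""
    return float(20 * np.log10(peak / rmse(ref, img)))
```

A lower RMSE and a higher PSNR against each source image indicate a fused image closer in intensity to the sources, which is the comparison reported in Tables 1.a and 1.b.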
[Figure 1. Lena (a, b, c)]
[Figure 2. Bacteria (a, b, c)]
[Figure 3. Satellite Map (a, b, c)]
[Figure 4. MRI-Head (a, b, c)]

In Figures 1 to 4, images (a) and (b) are the multifocus images and image (c) is the fused version of (a) and (b).

Figure 5. Histograms for the Lena and Bacteria images.

Figure 6. The difference images between the fused image and the source images: (g) difference between (a) and (c), (h) difference between (b) and (c), (i) difference between (d) and (f), (j) difference between (e) and (f).
6. CONCLUSIONS

In this paper a simple method of image fusion has been proposed. The advantage of the Self-organizing Feature Map is that, after training, the weight vectors not only represent the image block cluster centroids but also preserve two main features: topologically neighbouring blocks in the input space are mapped to topologically neighbouring neurons in the codebook, and the distribution of the weight vectors of the neurons reflects the distribution of the input vectors in the input space. Hence there will not be any loss of information in the process. The proposed method is dynamic in the sense that, depending on the application, the optimal weight vectors are generated; redundant values as well as noise can be ignored in the process, resulting in less simulation time. The method can be extended to colour images as well.

ACKNOWLEDGMENTS

The authors thank the Management of SNR Sons Institutions for allowing us to utilize their resources for our research work. Our sincere thanks to our Principal & Secretary Dr. H. Balakrishnan, M.Com., M.Phil., Ph.D., for his support and encouragement of our research work, and grateful thanks to my guide Dr. Anna Saro Vijendran, MCA., M.Phil., Ph.D., Director of MCA, SNR Sons College, Coimbatore-6, for her valuable guidance and many suggestions and solutions to critical situations in our research work. The authors thank the Associate Editor and reviewers for their encouragement and valued comments, which helped in improving the quality of the paper.

AUTHOR'S BIOGRAPHY

Dr. Anna Saro Vijendran received the Ph.D. degree in Computer Science from Mother Teresa Women's University, Tamilnadu, India, in 2009. She has 20 years of experience in teaching and is currently working as the Director, MCA, in SNR Sons College, Coimbatore, Tamilnadu, India. She has presented and published many papers in international and national conferences and has authored and co-authored more than 30 refereed papers. Her professional interests are image processing, image fusion, data mining and artificial neural networks.

G. Paramasivam is a Ph.D. (part-time) research scholar. He is currently an Assistant Professor at SNR Sons College, Coimbatore, Tamilnadu, India. He has 10 years of teaching experience. His technical interests include image fusion and artificial neural networks.