Unsupervised Learning Using Back Propagation in Neural Networks
Manish Bhatt, University of New Orleans
All content following this page was uploaded by Manish Bhatt on 31 January 2018.
Unsupervised Learning using Back-Propagation in Neural Networks

Manish Bhatt, Member, IEEE

Abstract—Data analysis is a primal skill of human beings. Cluster analysis, the primitive exploration of data based on little or no prior knowledge of its underlying structure, consists of research developed across various disciplines. Artificial Neural Networks, on the other hand, are abstractions of learning systems based on the human brain. The brain learns from experience, from associations and disassociations, and from error correction, and it clusters a new object by comparing it against known features of known objects. In this paper, we propose a modified back-propagation algorithm for neural networks that performs unsupervised learning using a set of feature vectors.

Index Terms—neural networks, unsupervised learning, back propagation, machine learning.

I. INTRODUCTION

In unsupervised learning, no output labels are available. The goal of exploratory analysis is to group a set of unlabeled input vectors into hidden "natural" groups based on the underlying data structure. It is somewhat different from unsupervised predictive algorithms such as Vector Quantization and Probability Density Function approximation, in the sense that it might not entail the accurate characterization of unobserved samples based on a certain learned or known probability distribution. In practice, however, predictive vector quantizers are also used for non-predictive clustering analysis [8][9][10].

It is important to note that clustering analysis is a subjective process in nature [14], which means that there can be no absolute judgement as to the efficiency of the clustering process. The notion of difference between clusters is also vague, which is why experts define clustering as the grouping of instances such that instances in one cluster are more similar to one another than to instances belonging to different clusters. Clustering algorithms partition data into a certain number of clusters based on a certain set of feature vectors. The fact that there is no universally agreed-upon definition is part of what makes clustering so interesting. Most researchers define clusters in terms of internal similarity and external dissimilarity, i.e., instances in the same cluster must be similar to one another, while instances in different clusters must be dissimilar.
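The internal-similarity / external-dissimilarity criterion can be illustrated with a minimal sketch (the helper name and the toy data are illustrative assumptions, not from the paper): a good partition should have a smaller mean pairwise distance within clusters than across clusters.

```python
import numpy as np

def within_between(points, labels):
    """Mean pairwise Euclidean distance inside clusters vs. across clusters."""
    points = np.asarray(points, dtype=float)
    labels = np.asarray(labels)
    within, between = [], []
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(points[i] - points[j])
            (within if labels[i] == labels[j] else between).append(d)
    return np.mean(within), np.mean(between)

# Two well-separated toy clusters in 2-D.
pts = [[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]]
lbl = [0, 0, 0, 1, 1, 1]
w, b = within_between(pts, lbl)
print(w < b)  # True: internal similarity exceeds external similarity
```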
● Divide each input vector of dimension n into two different vectors of dimensions a and b such that a + b = n.
● Initialize p different neural networks with all-zero initial weights. All neural networks need to start from the same state, and in our case, the states were chosen to be weights that were all zeros.
● Initialize other variables to save the current state of these networks.

It can be seen that the technique described has certain similarities with the traditional k-means algorithm, but the important distinction to note is that k-means looks for an exemplar point, whereas our technique looks for exemplar hypersurfaces. It is to be noted that the iterative process does not guarantee convergence to a global minimum. Stochastic methods such as simulated annealing and genetic algorithms can find the global minimum, at the cost of expensive computation.
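The initialization steps above might be sketched as follows. The concrete values of a, b, and p, the single-layer network shape, and all function names are illustrative assumptions; the paper does not fix them here.

```python
import numpy as np

def split_vectors(X, a):
    """Split each n-dimensional input vector into parts of dimension a and b = n - a."""
    X = np.asarray(X, dtype=float)
    return X[:, :a], X[:, a:]

def init_networks(p, a, b):
    """Initialize p single-layer networks mapping R^a -> R^b, all starting
    from the same all-zero weight state, as the steps above prescribe."""
    return [{"W": np.zeros((a, b)), "bias": np.zeros(b)} for _ in range(p)]

n, a = 4, 2                  # split dimensions, a + b = n
b = n - a
X = np.random.rand(10, n)    # 10 unlabeled feature vectors
U, V = split_vectors(X, a)   # step 1: split each vector
nets = init_networks(3, a, b)            # step 2: p = 3 identical zero-weight networks
states = [dict(net) for net in nets]     # step 3: save the current state of each network
```

Starting every network from the same all-zero state removes initialization randomness, so any divergence between the p networks comes from the data they are subsequently trained on.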