
NN&FL UNIT-II

Lecture Notes-12

Unsupervised Learning

Unsupervised learning, in contrast to supervised learning, does not provide the network with target output values. This is not strictly true, as often (and for the cases discussed in this section) the output is identical to the input. Unsupervised learning usually performs a mapping from input to output space, data compression or clustering.


Combining Networks

Committee machines use a well-known paradigm of computer science, the divide-and-conquer strategy. A complex task can be subdivided into smaller ones and solved individually. A group of ANNs each performs a role as part of the complex task. This combination of expert knowledge can provide a powerful method for complex problem solving.

The structure of committee machines can be static (the outputs of the expert networks are combined without involving the input signal, as in ensemble averaging and boosting) or dynamic (the input signal is involved in combining the outputs, as in mixtures of experts and hierarchical mixtures of experts).
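As an illustration, a minimal sketch of the static case, ensemble averaging, might look like the following (Python/NumPy; the "expert" predictors here are hypothetical stand-ins for trained networks, not part of the notes):

```python
import numpy as np

def ensemble_average(experts, x):
    """Static committee machine: average the outputs of several
    already-trained expert networks for the same input x.
    `experts` is any collection of callables mapping an input
    vector to an output vector (e.g. class scores)."""
    outputs = np.array([expert(x) for expert in experts])
    return outputs.mean(axis=0)

# Hypothetical usage with three toy "experts"
experts = [
    lambda x: np.array([0.7, 0.3]),
    lambda x: np.array([0.6, 0.4]),
    lambda x: np.array([0.9, 0.1]),
]
x = np.array([1.0, 2.0])
print(ensemble_average(experts, x))   # -> [0.733..., 0.266...]
```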

A committee machine structure can be built over a distributed network, with individual
experts taking advantage of multiprocessing power, creating a very powerful and effective
tool.

At this stage, it is not necessary to delve too far into combining networks, but it can be seen how this strategy can be an effective method of machine learning.

Self Organising Maps

Kohonen Self Organising Maps

This is an unsupervised learning algorithm which maps an input vector onto an output array of nodes. A reference vector, wj (with components wji), is assigned to each of these nodes. An input vector, x (with components xi), is compared with each reference vector, and the best match over all the neurons is the response. This is what is involved in competitive learning. The winner is the neuron with the best response: the neuron where the Euclidean distance between its weights and the input pattern is the smallest.

Equation 1. Euclidean distance: the winning neuron c is the one for which
||x - wc|| = min over j of ||x - wj||,   where ||x - wj|| = sqrt( sum over i of (xi - wji)^2 )

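A possible implementation of this competitive step (a sketch only, assuming the reference vectors are stored as rows of a NumPy array) is:

```python
import numpy as np

def best_matching_unit(weights, x):
    """Competitive step of the SOM: return the index of the neuron
    whose reference vector w_j is closest, in Euclidean distance,
    to the input pattern x.
    weights : array of shape (n_neurons, n_inputs)
    x       : array of shape (n_inputs,)"""
    distances = np.linalg.norm(weights - x, axis=1)
    return np.argmin(distances)
```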

The next stage is cooperation. A neighbourhood of neurons is defined around the winning neuron. The neurons in this neighbourhood all have the chance to update their weights, though not by as much as the winner, and the size of the update decreases the further a neuron is from the winner. Neighbourhoods are usually symmetrical.

The neighbourhood can be defined in a number of ways, including defining a square around the winning neuron or, more commonly, using a Gaussian function.

Equation 2. Gaussian function to define neighbourhood size:
hj,c(n) = exp( -d^2(j,c) / (2 sigma(n)^2) )
where d(j,c) is the distance on the output map between neuron j and the winning neuron c, and the width sigma(n) decreases as training proceeds.

In this way, the weight changes improve the winning neuron, that is, make it closer to the input. This is adaptation of the excited neurons, further minimising the expression in equation 1 by adjusting the synaptic weights of the neurons in the output layer.
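A minimal sketch of this cooperative and adaptive step (assuming each neuron has a coordinate on the output map; the learning rate and neighbourhood width below are illustrative values, both normally decreased over time) might look like:

```python
import numpy as np

def som_update(weights, grid_pos, x, winner, eta=0.1, sigma=1.0):
    """One cooperative/adaptive step of Kohonen SOM learning (sketch).
    weights  : (n_neurons, n_inputs) reference vectors
    grid_pos : (n_neurons, 2) coordinates of each neuron on the output map
    winner   : index of the best-matching unit for input x
    eta      : learning rate, sigma : neighbourhood width"""
    # Gaussian neighbourhood around the winner (equation 2)
    d2 = np.sum((grid_pos - grid_pos[winner]) ** 2, axis=1)
    h = np.exp(-d2 / (2.0 * sigma ** 2))
    # Move every neuron towards the input, scaled by its neighbourhood value
    weights += eta * h[:, None] * (x - weights)
    return weights
```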

Learning Vector Quantisation (LVQ)

Learning Vector Quantisation is a technique which improves the performance of SOMs for classification. SOM is an unsupervised learning algorithm, but it can be used for supervised learning; for example, each neuron of the SOM can be labelled with a class after training. By further supervised learning using LVQ, a two-stage adaptive learning scheme is used to solve a classification problem.

The algorithm for SOM learning with LVQ is as follows: execute the SOM algorithm to obtain a learned set of reference vectors; label the neurons with classes; then apply LVQ: find the weight vector wc that is closest to the input pattern x; if the classes of the winning neuron and the input are the same, move the neuron's weight vector closer to the input, otherwise move it away from the input (equation 9).

Equation 9. Learning Vector Quantisation:
wc(n+1) = wc(n) + alpha(n) [x - wc(n)]   if the winner and the input have the same class,
wc(n+1) = wc(n) - alpha(n) [x - wc(n)]   otherwise,
where alpha(n) is a learning rate that decreases over time.
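A minimal sketch of this update (an LVQ1-style rule, assuming the codebook vectors and their class labels come from the earlier SOM stage) is:

```python
import numpy as np

def lvq_step(weights, labels, x, x_class, alpha=0.05):
    """One LVQ1 update step (sketch).
    weights : (n_neurons, n_inputs) codebook vectors (e.g. from SOM training)
    labels  : (n_neurons,) class label assigned to each neuron
    x       : input pattern, x_class : its known class
    alpha   : learning rate (decreased over time in practice)"""
    # Find the codebook vector closest to the input, as in the SOM
    c = np.argmin(np.linalg.norm(weights - x, axis=1))
    if labels[c] == x_class:
        weights[c] += alpha * (x - weights[c])   # move towards the input
    else:
        weights[c] -= alpha * (x - weights[c])   # move away from the input
    return weights
```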


LVQ improves classification, especially when classes overlap. A combined SOM and LVQ approach is much better at defining decision boundaries.

SOM is used for unsupervised learning, classification and optimisation. Also, returning to the RBFN example, SOM learning can be used for the first (input to hidden layer) stage of learning.

Learning Using Optimisation Procedures

There are two approaches that can be taken when using optimisation for neural networks: 1) optimising the set of weights, thereby using an optimisation procedure as a learning rule, or 2) training the network using an existing learning rule and optimising the learning rule parameters and network topology.

Network training can be seen as finding the optimal weights for a network for a given problem. Therefore, well-known optimisation procedures have been used to do this, including Newton's method, conjugate gradient, simulated annealing and evolutionary methods (these have been discussed earlier). Outlined next is a simulated annealing approach.

Simulated Annealing

This optimisation technique has a basis in a physical analogy: the slow cooling of a material increases the order of its molecules as it goes from a hot state (high disorder) to a cold solid (high order). Gradually minimising the energy of the system in this way increases its order and allows it to find an optimal state. Often, successively heating and cooling the system can
achieve better performance, as energy barriers can then be overcome, giving the chance of escaping from local optima. This technique has been successful in many NP-hard optimisation problems (such as the travelling salesman problem) and has been applied to optimising the topology and learning rule of a neural network acting on a given data set.
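As a sketch only, a simulated annealing loop over a network's weight vector might look like the following (the error function, cooling schedule and step size are illustrative assumptions, not part of the notes):

```python
import numpy as np

def anneal_weights(error_fn, w0, t_start=1.0, t_end=1e-3, cooling=0.95,
                   steps_per_t=50, step_size=0.1, seed=0):
    """Simulated annealing over a network's weight vector (sketch).
    error_fn : callable returning the training error for a weight vector
    w0       : initial weight vector
    A random perturbation is always accepted if it lowers the error, and
    accepted with probability exp(-dE / T) otherwise, so the search can
    climb over energy barriers and escape local optima while T is high."""
    rng = np.random.default_rng(seed)
    w, e = w0.copy(), error_fn(w0)
    t = t_start
    while t > t_end:
        for _ in range(steps_per_t):
            w_new = w + rng.normal(scale=step_size, size=w.shape)
            e_new = error_fn(w_new)
            de = e_new - e
            if de < 0 or rng.random() < np.exp(-de / t):
                w, e = w_new, e_new
        t *= cooling   # slow cooling schedule
    return w, e
```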

T.N.Shankar,CSE,GMRIT
