
Submitted By: Alka Gupta 2011emcs01

This document discusses supervised and unsupervised learning methods. It provides details about self-organizing maps (SOMs), including that they are an unsupervised neural network developed by Teuvo Kohonen in the 1980s that produces a low-dimensional representation of input data called a map. SOMs use competition and cooperation between neurons as well as synaptic adaptation during training to organize the map.


Submitted by Alka Gupta 2011emcs01

Supervised

Show the computer something and tell it what it is. Classification occurs partly before system training, to select the training information. Additional classification and verification occur after system training, performed by a human problem-domain expert.

Unsupervised

Show the computer something and let it sense what it is. Classification occurs after system training, by a problem-domain expert, usually by identifying the data drawn to each cluster center.

Unsupervised methods often use data clustering.

Developed by Teuvo Kohonen of Helsinki in the early 1980s.

It is a type of artificial neural network that is trained using unsupervised learning to produce a low-dimensional (typically 2-D), discretized representation of the input space of the training samples, called a map.

SOMs differ from other ANNs in that they use a neighbourhood function to preserve the topological properties of the input space.

SOMs are useful for visualizing low-dimensional views of high-dimensional data.

The formation of a SOM involves three processes:

Competition

Cooperation

Synaptic adaptation
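Cooperation means that the winning neuron excites the neurons around it on the map lattice, usually through a neighborhood function that decays with grid distance. A minimal sketch in Python/NumPy, assuming a Gaussian neighborhood (the function name and signature are illustrative, not from the slides):

```python
import numpy as np

def gaussian_neighborhood(grid_coords, winner_coord, sigma):
    """Cooperation: neurons close to the winner on the 2-D map lattice
    get values near 1; distant neurons get values near 0."""
    # Squared Euclidean distance measured on the map grid, not in input space
    d2 = np.sum((grid_coords - winner_coord) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))
```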

Let an input vector be

X = [X1, X2, ..., Xm]

and the synaptic weight vector of neuron j be

Wj = [Wj1, Wj2, ..., Wjm], j = 1, 2, ..., l

(where l is the total number of neurons in the network)

Competition aims at finding the best match of the input vector X with the synaptic weight vectors Wj, i.e. minimising the Euclidean distance between the vectors X and Wj. Let i(X) be the index denoting the neuron that best matches X; then

i(X) = arg min_j ||X - Wj||, j = 1, 2, ..., l

The neuron i(X) that satisfies this condition is called the winner neuron.
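A minimal sketch of this competition step in Python/NumPy (the function name is illustrative): it returns the index i(X) of the weight vector closest to X in Euclidean distance.

```python
import numpy as np

def best_matching_unit(X, W):
    """Competition: index of the neuron whose weight vector is closest to X.
    X has shape (m,); W has shape (l, m), one row per neuron."""
    distances = np.linalg.norm(W - X, axis=1)  # ||X - Wj|| for j = 1..l
    return int(np.argmin(distances))           # i(X) = arg min_j ||X - Wj||
```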

Make a two-dimensional array, or map, and randomize it. Present training data to the map and let the cells on the map compete to win in some way; Euclidean distance is usually used. Stimulate the winner and some friends in the neighborhood. Do this many times. The result is a two-dimensional weight map.

[Figure: map topologies: a one-dimensional lattice (completely interconnected) and a two-dimensional lattice (connections omitted, only neighborhood relations shown in green)]

For an n-dimensional input space and m output neurons:

1. Choose a random weight vector wi for each neuron i, i = 1, ..., m.
2. Choose a random input x.
3. Determine the winner neuron k: ||wk - x|| = min_i ||wi - x|| (Euclidean distance).
4. Update the weight vectors of all neurons i in the neighborhood of neuron k: wi := wi + η(i, k)(x - wi), so that wi is shifted towards x. Here η(i, k) is the learning rate weighted by the neighborhood function centered on k.
5. If the convergence criterion is met, STOP. Otherwise, narrow the neighborhood function and the learning parameter and go to step 2.
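Putting the steps together, here is a compact training-loop sketch in Python/NumPy. It assumes a Gaussian neighborhood and exponentially decaying learning rate and neighborhood width; all names and parameter values are illustrative choices, not prescribed by the slides.

```python
import numpy as np

def train_som(data, rows=10, cols=10, n_iters=1000,
              eta0=0.5, sigma0=3.0, tau=500.0, seed=0):
    """data: array of shape (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    n_features = data.shape[1]
    # Step 1: random weight vector w_i for each of the m = rows*cols neurons
    W = rng.random((rows * cols, n_features))
    # Fixed 2-D grid coordinates of the neurons on the map
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)

    for t in range(n_iters):
        # Step 2: choose a random input x
        x = data[rng.integers(len(data))]
        # Step 3: winner neuron k minimizes ||w_k - x||
        k = np.argmin(np.linalg.norm(W - x, axis=1))
        # Narrow the learning rate and neighborhood width over time (step 5)
        eta = eta0 * np.exp(-t / tau)
        sigma = sigma0 * np.exp(-t / tau)
        # Step 4: neighborhood function h(i, k) computed on the map grid
        d2 = np.sum((grid - grid[k]) ** 2, axis=1)
        h = np.exp(-d2 / (2.0 * sigma ** 2))
        # Shift every w_i towards x, scaled by eta * h(i, k)
        W += eta * h[:, None] * (x - W)
    return W, grid
```

Here a fixed iteration count stands in for the convergence criterion; a real implementation would stop once the weight updates fall below some threshold.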

There are two ways to interpret a SOM:

In the training phase, the weights of the whole neighbourhood are moved in the same direction, so similar items tend to excite adjacent neurons. The SOM therefore forms a semantic map where similar samples are mapped close together and dissimilar ones far apart.

More neurons point to regions with a high concentration of training samples, and fewer to regions where samples are scarce.
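To see the semantic-map effect, each sample can be projected onto the grid coordinate of its winning neuron; similar samples should land at nearby map positions. A short usage sketch, assuming the train_som function from the previous example:

```python
import numpy as np

# Toy high-dimensional data; replace with real samples.
data = np.random.default_rng(1).random((200, 5))
W, grid = train_som(data)

# Map every sample to the (row, col) position of its best-matching neuron.
bmu_coords = np.array([grid[np.argmin(np.linalg.norm(W - x, axis=1))]
                       for x in data])
# Similar samples now cluster at nearby positions on the 2-D map.
```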
