6 CS1AS16 Neural Computing
A traditional computer has a single big, fast processor that can perform millions of operations per second.
The human brain instead has a huge number of small, slow processors – neurons. What makes the brain powerful is
that it has billions of neurons that are connected together and can learn.
Neural computing tries to mimic this model of the brain.
“Neural computing is the study of networks of adaptable nodes which, through a process of learning
from task examples, store experiential knowledge and make it available for use.”
As neural networks are based on the functioning of a human brain, it is useful to know the basics of
how a brain works:
o A neuron receives input from other neurons via dendrites
o If it's sufficiently stimulated, it generates an impulse
o The impulse travels via the axon to other neurons
o Junctions between the neurons are called synapses
The structure of a neuron:
An artificial neural network (ANN) is a model of what goes on in the brain, but a simplified one:
o A model of a neuron – a simplified version of a real one
o These neurons can be connected in many different ways
There can be many types of ANN.
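The simplified neuron model can be sketched as a weighted sum followed by a threshold; the weights and threshold values below are illustrative, not from the notes:

```python
# A minimal sketch of an artificial neuron: each input is multiplied
# by a weight, the products are summed, and the neuron "fires"
# (outputs 1) if the sum reaches the threshold.

def neuron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

print(neuron([1, 0, 1], [0.5, 0.5, 0.5], 1.0))  # weighted sum = 1.0 -> fires: 1
```

Connecting many of these units in different patterns gives the different types of ANN.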
What can they do?
o Classification – for a given input say it belongs to class A or B
o Association – map the input to an output
o Prediction – calculate output for a given input
o Control – generate a control signal
The first model of a neuron was the McCulloch-Pitts neuron. Many types of neural networks evolved
from it.
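A McCulloch-Pitts neuron uses binary inputs and a hard threshold: each excitatory input counts as 1, any active inhibitory input vetoes firing, and the neuron fires when the excitatory count reaches the threshold. The logic-gate examples are standard illustrations, not from the notes:

```python
# A sketch of a McCulloch-Pitts neuron.

def mcp(excitatory, inhibitory, threshold):
    if any(inhibitory):          # an active inhibitory input blocks firing
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# Logic gates built from single MCP neurons:
AND = lambda a, b: mcp([a, b], [], 2)
OR  = lambda a, b: mcp([a, b], [], 1)
NOT = lambda a:    mcp([1], [a], 1)

print(AND(1, 1), OR(0, 1), NOT(1))  # 1 1 0
```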
Perceptron
Minsky and Papert showed that a single layer is inadequate for some problems (e.g. it cannot compute XOR) – so we can have a multiple-layer network:
Feedforward network – data goes forwards until it comes out – like in the examples above.
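The benefit of multiple layers can be shown on XOR, the classic function a single-layer perceptron cannot compute. This sketch uses hand-set weights (chosen for illustration): the hidden units compute OR and AND, and the output fires when OR is on but AND is off:

```python
# A two-layer feedforward network computing XOR with fixed weights.

def step(x):
    return 1 if x >= 0 else 0

def xor_net(a, b):
    h1 = step(a + b - 0.5)       # hidden unit 1: a OR b
    h2 = step(a + b - 1.5)       # hidden unit 2: a AND b
    return step(h1 - h2 - 0.5)   # output: h1 AND NOT h2

for a in (0, 1):
    for b in (0, 1):
        print(a, b, '->', xor_net(a, b))
```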
A network must be set up properly in order to do something useful – meaning the weights on the
connections must be set.
The network can learn:
o Supervised learning – learning from training data: you provide inputs and the
network calculates outputs, but the expected outputs are also known, so the
network can adjust the weights based on the error:
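The error-driven weight adjustment can be sketched with the classic perceptron learning rule: each weight changes by learning rate × error × input. The AND-gate training set, learning rate and epoch count here are illustrative choices:

```python
# Supervised learning sketch: train a single perceptron on the AND gate.

def train(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # weights
    b = 0.0          # bias
    for _ in range(epochs):
        for inputs, target in examples:
            out = 1 if sum(x * wi for x, wi in zip(inputs, w)) + b >= 0 else 0
            error = target - out                       # expected - actual
            w = [wi + lr * error * x for wi, x in zip(w, inputs)]
            b += lr * error
    return w, b

and_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(and_data)
for inputs, target in and_data:
    out = 1 if sum(x * wi for x, wi in zip(inputs, w)) + b >= 0 else 0
    print(inputs, '->', out)
```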
o Unsupervised learning – the data is provided to all neurons and they work out by
themselves how to operate – groups of them start recognizing similar values (self-
organizing nets). The neurons are often arranged in a rectangular grid and neurons in
one area respond to certain inputs.
Example: network trained to recognize colors:
Here, similar colors are close to each other.
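The colour example can be sketched as a minimal self-organizing net: each neuron on a small 1-D line (a grid in the full version) holds an RGB value, the neuron closest to an input colour "wins", and the winner and its neighbours move towards that colour, so similar colours end up close together. The grid size, learning rate and training colours are illustrative choices:

```python
import random

random.seed(0)

def dist2(a, b):
    """Squared distance between two colour vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

# 10 neurons in a line, each holding a random RGB value.
neurons = [[random.random() for _ in range(3)] for _ in range(10)]
colours = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # red, green, blue

for _ in range(200):
    c = random.choice(colours)
    # The neuron closest to the input wins...
    winner = min(range(len(neurons)), key=lambda i: dist2(neurons[i], c))
    # ...and it and its direct neighbours move towards the input.
    for i, n in enumerate(neurons):
        if abs(i - winner) <= 1:
            neurons[i] = [w + 0.2 * (x - w) for w, x in zip(n, c)]

# After training, neighbouring neurons hold similar colours.
```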
Recurrent neural networks (RNN) – networks with feedback. They're more complicated than
feedforward networks.