ISC Unit II Topic-5: Radial Basis Function Networks
Contents
• Introduction
• Architecture
• Designing
• Learning strategies
• MLP vs RBFN
Introduction
• A completely different approach from the MLP: the design of the neural network is viewed as a curve-fitting (approximation) problem in a high-dimensional space.
• Radial Basis Function (RBF) networks are a particular type of artificial neural network used for function approximation problems. RBF networks differ from other neural networks in their three-layer architecture, universal approximation capability, and faster learning speed.
• An RBF network is a special class of feed-forward neural network consisting of exactly three layers: an input layer, a hidden layer, and an output layer. This is fundamentally different from most neural network architectures, which are composed of many layers and obtain nonlinearity by repeatedly applying non-linear activation functions.
• The input layer receives the input data and passes it to the hidden layer, where the computation occurs. The hidden layer of an RBF network is its most powerful component and differs markedly from that of most neural networks. The output layer is used for prediction tasks such as classification or regression.
Introduction
[Figure: In MLP]
[Figure: In RBFN]
Working
• The fundamental idea of Radial Basis Functions is that an item's predicted target value is likely to be about the same as that of other items with close values of the predictor variables.
• An RBF network places one or more RBF neurons in the space described by the predictor variables. The space has as many dimensions as there are predictor variables.
• We calculate the Euclidean distance from the evaluated point to the center of each neuron.
• A radial basis function (also known as a kernel function) is applied to this distance to calculate each neuron's weight (influence): weight = RBF(distance).
• The name comes from the radial distance, which is the argument to the function. The greater the distance of a neuron from the point being evaluated, the less influence (weight) it has.
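A minimal sketch of this weight = RBF(distance) idea in Python, assuming a Gaussian basis function with spread sigma (the function and variable names are illustrative, not from the slides):

```python
import numpy as np

def rbf_weight(x, center, sigma=1.0):
    """Influence of one RBF neuron on point x: weight = RBF(distance)."""
    distance = np.linalg.norm(x - center)          # Euclidean distance to the neuron's center
    return np.exp(-distance**2 / (2 * sigma**2))   # Gaussian RBF: decays as distance grows

# The farther x is from the center, the smaller the weight.
print(rbf_weight(np.array([0.0, 0.0]), np.array([1.0, 1.0])))  # closer  -> larger weight
print(rbf_weight(np.array([0.0, 0.0]), np.array([3.0, 3.0])))  # farther -> smaller weight
```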
Introduction
$F(x) = \sum_{i} w_i \, h_i(x)$, where $h_i(x)$ is the response of hidden unit $i$ and $w_i$ its output weight.
Architecture
[Figure: network diagram — inputs $x_1, \dots, x_n$ feed hidden units $h_1, \dots, h_m$; the hidden outputs are weighted by $W_1, \dots, W_m$ and summed to produce $f(x)$. Input layer → hidden layer → output layer.]
Architecture
Three layers
• Input layer
– The input layer has one neuron for every predictor variable. The input neurons pass the values on to each neuron in the hidden layer. For a categorical variable, N−1 neurons are used, where N denotes the number of categories. The range of values is standardized by subtracting the median and dividing by the interquartile range.
• Hidden layer
– The hidden units provide a set of basis functions.
– The hidden layer contains a variable number of neurons (the ideal number is determined by the training process). Each neuron comprises a radial basis function centered on a point whose number of dimensions equals the number of predictor variables. The radius (spread) of the RBF may vary for each dimension.
• Output layer
– The value obtained from each hidden neuron is multiplied by a weight associated with that neuron and passed to the summation, where the weighted values are added up and the sum is presented as the network's output. Classification problems have one output per target category, the value being the probability that the evaluated case belongs to that category.
Architecture
$f(x) = \sum_{j=1}^{m} w_j \, h_j(x)$
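A minimal forward-pass sketch of this three-layer architecture in Python, assuming Gaussian basis functions (all names are illustrative):

```python
import numpy as np

def rbf_forward(x, centers, sigmas, weights):
    """f(x) = sum_j w_j * h_j(x) with Gaussian hidden units."""
    # Hidden layer: one Gaussian RBF per center, applied to the
    # Euclidean distance between x and that center.
    distances = np.linalg.norm(centers - x, axis=1)
    h = np.exp(-distances**2 / (2 * sigmas**2))
    # Output layer: weighted sum of the hidden responses.
    return weights @ h

# Tiny example: 2 predictor variables, 3 hidden units, 1 output.
centers = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
sigmas = np.array([1.0, 1.0, 1.0])
weights = np.array([0.5, -0.2, 0.8])
print(rbf_forward(np.array([1.0, 0.5]), centers, sigmas, weights))
```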
Designing
• Requires
– Selection of the radial basis function width parameter
– Selection of the number of radial basis neurons (a common width heuristic is sketched below)
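One widely cited heuristic for the width parameter, when the centers are fixed in advance, sets a single global spread from the spacing of the centers: $\sigma = d_{\max} / \sqrt{2M}$, with $d_{\max}$ the maximum distance between any two centers and $M$ their number. A minimal sketch (names illustrative):

```python
import numpy as np

def global_spread(centers):
    """Heuristic global spread for Gaussian RBFs with fixed centers:
    sigma = d_max / sqrt(2 * M)."""
    M = len(centers)
    d_max = max(np.linalg.norm(a - b) for a in centers for b in centers)
    return d_max / np.sqrt(2 * M)

centers = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
print(global_spread(centers))
```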
Learning strategies
• Two levels of learning
– Center and spread learning (or determination)
– Output-layer weight learning
• Keep the number of parameters as small as possible
– Curse of dimensionality
Learning strategies
Self-organized selection of centers (1)
• Hybrid learning
– self-organized learning to estimate the centers of the RBFs in the hidden layer
– supervised learning to estimate the linear weights of the output layer
• Self-organized learning of centers by means of clustering.
• Supervised learning of output weights by the LMS algorithm (sketched below).
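A minimal sketch of the supervised half of this hybrid scheme: LMS (delta-rule) updates for the linear output weights, assuming Gaussian hidden units with a shared spread and centers already fixed by clustering (all names illustrative):

```python
import numpy as np

def hidden_responses(X, centers, sigma):
    """Gaussian hidden-layer outputs for every training sample; shape (N, M)."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-d**2 / (2 * sigma**2))

def lms_train(X, y, centers, sigma, lr=0.05, epochs=100):
    """LMS training of the output weights, centers held fixed."""
    H = hidden_responses(X, centers, sigma)
    w = np.zeros(len(centers))
    for _ in range(epochs):
        for h_j, y_j in zip(H, y):
            e = y_j - w @ h_j        # error for one sample
            w += lr * e * h_j        # LMS weight update
    return w
```

Because the hidden layer is fixed, this is a linear-in-the-parameters problem, which is why the simple LMS rule suffices here.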
Learning strategies
Self-organized selection of centers (2)
• k-means clustering (sketched after this list)
1. Initialization: choose random initial values for the centers.
2. Sampling: draw a sample vector x from the input space.
3. Similarity matching: find the center closest to x.
4. Updating: move that winning center a small step toward x.
5. Continuation: repeat steps 2–4 until the centers stop changing noticeably.
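A minimal sketch of these five steps as sequential k-means (a fixed step count stands in for a convergence test; names illustrative; X is assumed to be a float array):

```python
import numpy as np

def kmeans_centers(X, M, lr=0.1, steps=1000, seed=0):
    """Sequential k-means following the five steps above."""
    rng = np.random.default_rng(seed)
    # 1. Initialization: pick M distinct training points as initial centers.
    centers = X[rng.choice(len(X), size=M, replace=False)].copy()
    for _ in range(steps):
        # 2. Sampling: draw one input vector at random.
        x = X[rng.integers(len(X))]
        # 3. Similarity matching: index of the closest center.
        k = np.argmin(np.linalg.norm(centers - x, axis=1))
        # 4. Updating: move the winning center toward the sample.
        centers[k] += lr * (x - centers[k])
        # 5. Continuation: loop (here for a fixed number of steps).
    return centers
```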
Learning strategies
Learning formulas (supervised selection of centers)
• Linear weights (output layer):
$$\frac{\partial E(n)}{\partial w_i(n)} = \sum_{j=1}^{N} e_j(n)\, G\!\left(\lVert \mathbf{x}_j - \mathbf{t}_i(n)\rVert_{C_i}\right)$$
$$w_i(n+1) = w_i(n) - \eta_1 \frac{\partial E(n)}{\partial w_i(n)}, \qquad i = 1, 2, \dots, M$$
• Positions of centers (hidden layer):
$$\frac{\partial E(n)}{\partial \mathbf{t}_i(n)} = 2\, w_i(n) \sum_{j=1}^{N} e_j(n)\, G'\!\left(\lVert \mathbf{x}_j - \mathbf{t}_i(n)\rVert_{C_i}\right) \boldsymbol{\Sigma}_i^{-1} \left[\mathbf{x}_j - \mathbf{t}_i(n)\right]$$
$$\mathbf{t}_i(n+1) = \mathbf{t}_i(n) - \eta_2 \frac{\partial E(n)}{\partial \mathbf{t}_i(n)}, \qquad i = 1, 2, \dots, M$$
• Spreads of centers (hidden layer):
$$\frac{\partial E(n)}{\partial \boldsymbol{\Sigma}_i^{-1}(n)} = -\, w_i(n) \sum_{j=1}^{N} e_j(n)\, G'\!\left(\lVert \mathbf{x}_j - \mathbf{t}_i(n)\rVert_{C_i}\right) \mathbf{Q}_{ji}(n), \qquad \mathbf{Q}_{ji}(n) = \left[\mathbf{x}_j - \mathbf{t}_i(n)\right]\left[\mathbf{x}_j - \mathbf{t}_i(n)\right]^{T}$$
$$\boldsymbol{\Sigma}_i^{-1}(n+1) = \boldsymbol{\Sigma}_i^{-1}(n) - \eta_3 \frac{\partial E(n)}{\partial \boldsymbol{\Sigma}_i^{-1}(n)}$$
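A minimal sketch of the first two update rules for Gaussian units with one scalar spread per neuron, so the $\boldsymbol{\Sigma}_i^{-1}$ update is omitted for brevity (names illustrative; error convention $e_j = d_j - f(\mathbf{x}_j)$, cost $E = \tfrac{1}{2}\sum_j e_j^2$):

```python
import numpy as np

def gaussian(X, t, sigma):
    """h_i(x_j) for all samples j and centers i; shape (N, M)."""
    d2 = np.sum((X[:, None, :] - t[None, :, :])**2, axis=2)
    return np.exp(-d2 / (2 * sigma**2))

def supervised_step(X, d, w, t, sigma, eta1=0.01, eta2=0.01):
    """One gradient step on output weights and center positions."""
    H = gaussian(X, t, sigma)              # (N, M) hidden responses
    e = d - H @ w                          # errors e_j = d_j - f(x_j)
    grad_w = -H.T @ e                      # dE/dw_i = -sum_j e_j h_i(x_j)
    # dE/dt_i = -w_i * sum_j e_j h_i(x_j) (x_j - t_i) / sigma_i^2
    diff = X[:, None, :] - t[None, :, :]   # (N, M, dim)
    grad_t = -(w * (e[:, None] * H))[..., None] * diff / sigma[None, :, None]**2
    grad_t = grad_t.sum(axis=0)
    return w - eta1 * grad_w, t - eta2 * grad_t
```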
MLP vs RBFN
MLP                                RBFN
Global hyperplane                  Local receptive field
EBP (error back-propagation)       LMS
Local minima                       Serious local minima
Smaller number of hidden neurons   Larger number of hidden neurons
Shorter computation time           Longer computation time
Longer learning time               Shorter learning time