Neuron Model and Network Architectures

- An artificial neuron model computes the weighted sum of its inputs, adds a bias, and passes the result through an activation function. This determines whether the neuron "fires" (becomes active).
- Neural networks consist of interconnected artificial neurons arranged in layers. Single- and multi-layer feedforward networks are commonly used architectures; recurrent networks also have feedback connections.
- Neural networks are applied to problems such as pattern recognition, function approximation, and associative memory by training the connection weights with a learning algorithm. Popular current models include deep learning architectures and multilayer perceptrons.

Uploaded by

Mercy
Copyright
© All Rights Reserved

Neuron Model and Network Architectures
Artificial Neuron Model

[Figure: an artificial neuron (neuron i). Inputs x1 … xm arrive through synaptic weights wi1 … wim; an extra input x0 = +1 is weighted by the bias bi. A summing junction Σ forms the net input ni, which the activation function f maps to the output ai.]

ai = f(ni) = f( Σ_{j=1}^{n} wij xj + bi )
An artificial neuron:
- computes the weighted sum of its inputs (called its net input)
- adds its bias
- passes this value through an activation function
We say that the neuron "fires" (i.e. becomes active) if its output is above zero.
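These three steps can be sketched directly (a minimal illustration; the names `neuron_output` and `hardlim` and all input/weight values are my own, not from the slides):

```python
def neuron_output(inputs, weights, bias, f):
    """Weighted sum of the inputs, plus the bias, passed through activation f."""
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    return f(net)

# Hard-limiting activation: the neuron "fires" (outputs 1)
# when its net input is non-negative.
def hardlim(n):
    return 1 if n >= 0 else 0

# Net input = 0.4*1.0 + (-0.2)*0.5 + (-0.1) = 0.2 >= 0, so the neuron fires.
a = neuron_output([1.0, 0.5], [0.4, -0.2], bias=-0.1, f=hardlim)  # a == 1
```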
Bias
• Bias can be incorporated as another weight clamped to
a fixed input of +1.0.
• This extra free variable (bias) makes the neuron more
powerful.
ai = f(ni) = f( Σ_{j=0}^{n} wij xj ) = f(wi · x),  where wi0 = bi and x0 = +1
Activation functions
Also called the squashing function as it limits the
amplitude of the output of the neuron.

Many types of activation functions are used:

– linear: a = f(n) = n

– threshold (hard-limiting): a = 1 if n >= 0, a = 0 if n < 0
Activation functions

– sigmoid: a = 1 / (1 + e^(−n))
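The three activation functions above can be written directly (a minimal sketch; the function names are my own):

```python
import math

def linear(n):
    return n                 # identity: output is the net input itself

def hardlim(n):
    return 1 if n >= 0 else 0  # threshold / hard-limiting

def sigmoid(n):
    # Squashes any net input into the interval (0, 1); sigmoid(0) = 0.5.
    return 1.0 / (1.0 + math.exp(-n))
```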
Artificial Neural Networks
• A neural network is a massively parallel, distributed
processor made up of simple processing units (artificial
neurons).
• It resembles the brain in two respects:
– Knowledge is acquired by the network from its environment
through a learning process
– Synaptic connection strengths among neurons are used to
store the acquired knowledge.
Different Network Topologies
• Single layer feed-forward networks
– Input layer projecting into the output layer

Input Output
layer layer
Different Network Topologies
• Recurrent networks
– A network with feedback, where some of its inputs are
connected to some of its outputs (discrete time).

Input Output
layer layer
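A discrete-time feedback step can be sketched for a single recurrent unit (a toy illustration; the names `w_in`/`w_back` and all values are assumptions, not from the slides):

```python
import math

def sigmoid(n):
    return 1.0 / (1.0 + math.exp(-n))

def recurrent_step(x_t, h_prev, w_in, w_back, b):
    """One discrete time step: the previous output is fed back as an input."""
    net = w_in * x_t + w_back * h_prev + b
    return sigmoid(net)

# Unroll over a short input sequence, feeding each output back.
h = 0.0
for x_t in [1.0, 0.0, 1.0]:
    h = recurrent_step(x_t, h, w_in=0.8, w_back=0.5, b=-0.2)
```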
Different Network Topologies
• Multi-layer feed-forward networks
– One or more hidden layers.
– Each layer receives input only from previous layers; typically, connections run only from one layer to the next.

[Figure: a 2-layer (1-hidden-layer) fully connected network: input layer → hidden layer → output layer]
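A forward pass through such a 2-layer (1-hidden-layer) fully connected network can be sketched as follows (a toy example; all weights, biases, and layer sizes are arbitrary illustrations):

```python
import math

def sigmoid(n):
    return 1.0 / (1.0 + math.exp(-n))

def layer(x, weights, biases, f):
    """One fully connected layer: each row of `weights` is one neuron."""
    return [f(sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(weights, biases)]

def forward(x, W1, b1, W2, b2):
    hidden = layer(x, W1, b1, sigmoid)    # hidden layer
    return layer(hidden, W2, b2, sigmoid)  # output layer

x  = [0.5, -1.0]                              # 2 input features
W1 = [[0.1, 0.4], [-0.3, 0.2], [0.6, -0.1]]   # 3 hidden neurons
b1 = [0.0, 0.1, -0.2]
W2 = [[0.3, -0.5, 0.2]]                       # 1 output neuron
b2 = [0.05]
y  = forward(x, W1, b1, W2, b2)
```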
Applications of ANNs
• ANNs have been widely used in various domains for:
– Pattern recognition
– Function approximation
– Associative memory
– ...
Artificial Neural Networks
• Early ANN Models:
– Perceptron, ADALINE, Hopfield Network
• Current Models:
– Deep Learning Architectures
– Multilayer feedforward networks (Multilayer perceptrons)
– Radial Basis Function networks
– Self-Organizing Networks
– ...
How to Decide on a Network Topology?
– # of input nodes?
• Number of features
– # of output nodes?
• Suitable to encode the output representation
– Transfer function?
• Suitable to the problem
– # of hidden nodes?
• Not exactly known
Multilayer Perceptron
• Each layer may have a different number of nodes and different activation functions
• But commonly:
– Same activation function within one layer
• sigmoid/tanh activation is used in the hidden units, and
• sigmoid/tanh or linear activation is used in the output units, depending on the problem (classification: sigmoid/tanh; function approximation: linear)
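This convention can be expressed as a small helper (a sketch only; `output_activation` is my own name, and it encodes just the rule of thumb stated above):

```python
import math

def sigmoid(n):
    return 1.0 / (1.0 + math.exp(-n))

def output_activation(task):
    """Common choice of output-unit activation: sigmoid for
    classification, linear (identity) for function approximation."""
    if task == "classification":
        return sigmoid
    return lambda n: n  # linear output for regression / approximation

f = output_activation("classification")   # squashes outputs into (0, 1)
g = output_activation("approximation")    # passes the net input through
```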
Thank you
Any Questions?
