Artificial Neural Network (ANN)
ISSN: 2277-3754
ISO 9001:2008 Certified
International Journal of Engineering and Innovative Technology (IJEIT)
Volume 2, Issue 1, July 2012
number of neuron units (Kurban and Yildirim, 2003; Yildirim and Uzmay, 2003). [2] [1]

III. BIOLOGICAL INSPIRATION
The human brain is made up of a network of neurons that are coupled with receptors and effectors. Receptors are called "dendrites" and effectors are called "axons". [3] Fig. 1 shows that the dendrites collect signals from many other neurons in a limited area; a cell body, or soma, integrates the collected signals and generates a response signal; and a branching axon distributes the response through contacts with the dendrite trees of many other neurons. [4]

… from the combination of many units in an appropriate way. The ANN does not really solve the problem in a strictly mathematical sense, but it demonstrates information-processing characteristics that give an approximate solution to a given problem. ANNs have been widely used in complex nonlinear function mapping, image processing, pattern recognition and classification, and so on. Feed-forward networks are a common type of neural network. A feed-forward network comprises an input layer, where the inputs of the problem are received; hidden layers, where the relationship between the inputs and outputs is determined and represented by synaptic weights; and an output layer, which emits the outputs of the problem. The feed-forward neural network is modeled with three basic elements: (a) a set of synapses characterized by synaptic weights; (b) an adder, or linear combiner, for summing the input signals; and (c) an activation function for limiting the amplitude of a neuron's output to some finite value. The input of the activation function can be increased by using a bias term. Here, we have made use of a particular ANN architecture known as the multi-layer feed-forward neural network, or Multi-Layer Perceptron (MLP) [5].
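The three basic elements above (synaptic weights, an adder, and an amplitude-limiting activation function with a bias term) can be sketched in a few lines of Python. This is an illustrative sketch, not code from this work; the logistic sigmoid and the numeric values are chosen here only as an example of an activation that limits the output to a finite range.

```python
import math

def neuron_output(inputs, weights, bias):
    """One artificial neuron: weighted inputs, an adder, and an activation."""
    # (a) synapses: each input is scaled by its synaptic weight
    # (b) adder: the weighted inputs are summed; the bias shifts the sum
    v = sum(w * x for w, x in zip(weights, inputs)) + bias
    # (c) activation: the logistic sigmoid limits the output to (0, 1)
    return 1.0 / (1.0 + math.exp(-v))

# Example inputs, weights, and bias (invented for illustration)
y = neuron_output([0.5, -1.0, 2.0], [0.4, 0.3, 0.1], bias=0.2)
print(round(y, 3))  # 0.574
```

A negative weight in the list plays the role of an inhibitory connection, a positive weight an excitatory one.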
Table 1. Terminology of Neural Networks

  Biological Terminology      ANN Terminology
  Neuron                      Node/Unit/Cell/Neurode
  Synapse                     Connection/Edge/Link
  Synaptic efficiency         Connection strength/Weight
  Firing frequency            Node output

A. Mathematical Model
When creating a functional model of the biological neuron, there are three basic components of importance. First, the synapses of the neuron are modeled as weights. The strength of the connection between an input and a neuron is denoted by the value of the weight: negative weight values reflect inhibitory connections, while positive values designate excitatory connections [Haykin]. The next two components model the actual activity within the neuron cell. An adder sums up all the inputs modified by their respective weights; this activity is referred to as linear combination. Finally, an activation function controls the amplitude of the output of the neuron. An acceptable range of output is usually between 0 and 1, or -1 and 1. Mathematically, this process is described in the figure.

Fig. 3 Mathematical Model

From this model, the internal activity of the neuron can be shown to be

    v_k = \sum_{j=1}^{p} w_{kj} x_j    (1)

The output of the neuron, y_k, would therefore be the outcome of some activation function applied to the value of v_k. [7]

B. Feed Forward Networks
This is a subclass of acyclic networks in which a connection is allowed from a node in layer i only to nodes in layer i+1, as shown in Fig. 4. These networks are succinctly described by a sequence of numbers indicating the number of nodes in each layer. For instance, the network shown in Fig. 4 is a 3-2-3-2 feed-forward network: it contains three nodes in the input layer (layer 0), two nodes in the first hidden layer (layer 1), three nodes in the second hidden layer (layer 2), and two nodes in the output layer (layer 3). These networks, generally with no more than four such layers, are among the most common neural nets in use, so much so that some users take the phrase "neural networks" to mean only feed-forward networks. Conceptually, nodes in successively higher layers abstract successively higher-level features from preceding layers. In the literature on neural networks, the term "feed forward" has sometimes been used to refer to layered or acyclic networks. [8]

Fig. 4 Feed Forward Networks

C. Neural Learning
It is reasonable to conjecture that neurons in an animal's brain are "hard wired." It is equally obvious that animals, especially the higher-order animals, learn as they grow. How does this learning occur? What are possible mathematical models of learning? In this section, we summarize some of the basic theories of biological learning and their adaptations for artificial neural networks. In artificial neural networks, learning refers to the method of modifying the weights of connections between the nodes of a specified network. Learning is the process by which the random-valued parameters (weights and biases) of a neural network are adapted through a continuous process of stimulation by the environment in which the network is embedded. The learning rate is defined as the rate at which the network adapts. The type of learning is determined by the manner in which the parameter change takes place. Learning may be categorized as supervised learning, unsupervised learning, and reinforced learning. In supervised learning, a teacher is available to indicate whether a system is performing correctly, to indicate a desired response, to validate the acceptability of a system's responses, or to indicate the amount of error in system performance. This is in contrast with unsupervised learning, where no teacher is available and learning must rely on guidance obtained heuristically by the system examining different sample data or the environment. Learning is similar to training, i.e., one has to learn something, which is analogous to one having to be trained. A neural network has to be configured such that the application of a set of inputs produces (either directly or via a relaxation process) the desired set of outputs. Various methods to set the strengths of the connections exist. One way is to set the weights explicitly, using a priori
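A forward pass through a layered feed-forward network such as the 3-2-3-2 example can be sketched as follows. This is an illustrative Python sketch, not code from this work (which uses MATLAB's toolbox); the random weights, the input values, and the use of the logistic sigmoid at every node are all assumptions made for the example.

```python
import math
import random

def sigmoid(v):
    # Logistic activation: limits each node's output to (0, 1)
    return 1.0 / (1.0 + math.exp(-v))

def forward(x, layers):
    """Propagate an input vector through the network, layer by layer.
    `layers` holds one (weights, biases) pair per non-input layer;
    weights[j] are the incoming weights of node j in that layer."""
    for weights, biases in layers:
        x = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
             for ws, b in zip(weights, biases)]
    return x

def make_net(sizes, rng):
    # Random weights for a net given node counts per layer, e.g. [3, 2, 3, 2]
    return [([[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
             [rng.uniform(-1, 1) for _ in range(n_out)])
            for n_in, n_out in zip(sizes, sizes[1:])]

rng = random.Random(0)
net = make_net([3, 2, 3, 2], rng)   # a 3-2-3-2 feed-forward network
out = forward([0.1, 0.9, 0.4], net)
print(len(out))  # 2 -- one value per output-layer node
```

Connections run only from layer i to layer i+1, so one pass over `layers` is enough to compute the output.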
knowledge. Another way is to 'train' the neural network by feeding it teaching patterns and letting it change its weights according to some learning rule. We can categorize the learning situations into two distinct sorts. These are:

1. Supervised Learning
Supervised learning, or associative learning, in which the network is trained by providing it with input and matching output patterns. These input-output pairs can be provided by an external teacher, or by the system which contains the neural network (self-supervised). Example: An archaeologist discovers a human skeleton and has to determine whether it belonged to a man or a woman. In doing this, the archaeologist is guided by many past examples of male and female skeletons. Examination of these past examples (called the training set) allows the archaeologist to learn about the distinctions between male and female skeletons. This learning process is an example of supervised learning, and the result of the learning process can be applied to determine whether the newly discovered skeleton belongs to a man or a woman.

Fig. 5 Supervised Learning

2. Unsupervised Learning
Unsupervised learning, or self-organization, in which an (output) unit is trained to respond to clusters of patterns within the input. In this paradigm the system is supposed to discover statistically salient features of the input population. Unlike the supervised learning paradigm, there is no a priori set of categories into which the patterns are to be classified; rather, the system must develop its own representation of the input stimuli. Example: In a different situation, the archaeologist has to determine whether a set of skeleton fragments belong to the same dinosaur species or need to be differentiated into different species. For this task, no previous data may be available to clearly identify the species for each skeleton fragment. The archaeologist has to determine whether the skeletons (that can be reconstructed from the fragments) are sufficiently similar to belong to the same species, or if the differences between these skeletons are large enough to warrant grouping them into different species. This is an unsupervised learning process, which involves estimating the magnitudes of differences between the skeletons. One archaeologist may believe the skeletons belong to different species, while another may disagree, and there is no absolute criterion to determine who is correct.

3. Reinforced Learning
Reinforcement learning may be considered an intermediate form of the above two types of learning. Here the learning machine performs some action on the environment and gets a feedback response from the environment. The learning system grades its action as good (rewarding) or bad (punishable) based on the environmental response and accordingly adjusts its parameters. Generally, parameter adjustment is continued until an equilibrium state occurs, following which there will be no more changes in its parameters. Self-organizing neural learning may be categorized under this type of learning. [7]

D. Back Propagation Network
The back propagation algorithm (Rumelhart and McClelland, 1986) is used in layered feed-forward ANNs. This means that the artificial neurons are organized in layers and send their signals "forward", and then the errors are propagated backwards. The network receives inputs through neurons in the input layer, and the output of the network is given by the neurons in an output layer. There may be one or more intermediate hidden layers. The back propagation algorithm uses supervised learning, which means that we provide the algorithm with examples of the inputs and outputs we want the network to compute, and then the error (the difference between actual and expected results) is calculated. The idea of the back propagation algorithm is to reduce this error until the ANN learns the training data. The training begins with random weights, and the goal is to adjust them so that the error will be minimal. [9] The back propagation network has gained importance due to the shortcomings of other available networks. The network is a multi-layer network (multi-layer perceptron) that contains at least one hidden layer in addition to the input and output layers. The number of hidden layers and the number of neurons in each hidden layer are to be fixed based on the application, the complexity of the problem, and the number of inputs and outputs. Use of the non-linear log-sigmoid transfer function enables the network to simulate non-linearity in practical systems. Due to these numerous advantages, the back propagation network is chosen for the present work. [3] Implementation of the back propagation model consists of two phases. The first phase is known as training, while the second phase is called testing. Training in back propagation is based on the gradient descent rule, which tends to adjust the weights and reduce system error in the network. The input layer has neurons equal in number to that of the inputs. Similarly, output layer neurons are the same in number as the number of outputs. The number of hidden layer neurons is decided by a trial-and-error method using the experimental data. [10]

E. ANN Development & Implementation
In this work, both ANN implementation and training are developed using the neural network toolbox of MATLAB. Different ANNs are built rather than using one large ANN including all the output variables. This strategy allowed for better adjustment of the ANN for each specific problem, including the optimization of the architecture for each output.
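As a small illustration of the training procedure described above (forward pass, error calculation, and backward propagation of the error with a gradient-descent weight update), the following sketch trains a tiny 2-2-1 network. The network size, learning rate, random seed, and the logical-OR training data are all invented for the example; the actual work uses MATLAB's toolbox rather than hand-written code.

```python
import math
import random

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

# A 2-2-1 network; the last entry of each weight row is the bias.
rng = random.Random(1)
hidden = [[rng.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(2)]
output = [rng.uniform(-0.5, 0.5) for _ in range(3)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in hidden]
    y = sigmoid(output[0] * h[0] + output[1] * h[1] + output[2])
    return h, y

def train(samples, epochs=2000, lr=0.5):
    for _ in range(epochs):
        for x, t in samples:
            h, y = forward(x)                # forward pass
            d_out = (t - y) * y * (1 - y)    # output-layer error term
            for j in range(2):               # propagate the error backwards
                d_hid = d_out * output[j] * h[j] * (1 - h[j])
                hidden[j][0] += lr * d_hid * x[0]
                hidden[j][1] += lr * d_hid * x[1]
                hidden[j][2] += lr * d_hid
            for j in range(2):               # gradient-descent update
                output[j] += lr * d_out * h[j]
            output[2] += lr * d_out

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # logical OR
err_before = sum((t - forward(x)[1]) ** 2 for x, t in data)
train(data)
err_after = sum((t - forward(x)[1]) ** 2 for x, t in data)
print(err_before > err_after)  # True: training reduced the error
```

Training starts from random weights and repeatedly adjusts them against the labeled examples until the error is small, mirroring the training/testing split described above.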
F. ANN Training & Prediction
One of the most relevant aspects of a neural network is its ability to generalize, that is, to predict cases that are not included in the training set. One of the problems that occurs during neural network training is called overfitting: the error on the training set is driven to a very small value, but when new data is presented to the network, the error is large. The network has memorized the training examples, but it has not learned to generalize to new situations. One method for improving network generalization is to use a network that is just large enough to provide an adequate fit; the larger the network, the more complex the functions the network can create. There are two other methods for improving generalization that are implemented in the MATLAB Neural Network Toolbox software: regularization and early stopping. The typical performance function used for training feed-forward neural networks is the mean sum of squares of the network errors,

    mse = (1/N) \sum_{i=1}^{N} (e_i)^2 = (1/N) \sum_{i=1}^{N} (t_i - a_i)^2    (2)

where t_i is the target output and a_i is the actual network output.

… quality back propagation network. The network is capable of predicting the parameters of the experimental system. The network has a parallel structure and fast learning capacity. The collected experimental data, such as speed, load, and values of pressure distribution, are also employed as training and testing data for an artificial neural network. The neural network is a feed-forward three-layered network. A quick-propagation algorithm is used to update the weights of the network during training. The ANN has a superior ability to follow the desired results of the system and is employed to analyze such system parameters in practical applications.

VII. ACKNOWLEDGEMENT
The authors would like to acknowledge and thank Dr. Y.R. Kharde, Principal, Shree Saibaba Institute of Engineering Research & Allied Sciences, Rahata; Prof. S.B. Belkar, Asso. Prof., P.R.E.C. Loni; and Prof. R.R. Navthar, Asstt. Prof., P.D.V.V.P. COE Ahmednagar, for their immense help in this work.
& Development (2011) (Third ICCTD 2011), pp. 717-722.
[11] R.R. Navthar & Dr. N.V. Halegowda, "Pressure Distribution Analysis of Hydrodynamic Journal Bearings using Artificial Neural Network", ASME Digital Library, e-books, International Conference on Computer & Automation Engineering (Fourth ICCAE 2012), pp. 153-161.
AUTHOR BIOGRAPHY
Prof. A.D. Dongare
M.E. (Design Engg), Ph.D.(App)
P.R.E.C. Loni Rahata Ahmednagar.
Area of Research: Design and Tribology.
Professional Membership: IE (I).
Prof. R.R. Kharde
M.E. (Tribology), Head, Dept. of
Mechanical Engineering.
P.R.E.C. Loni Rahata Ahmednagar.
Area of Research: Design and Tribology.
Professional Membership: IE (I).
A.D. Kachare