
Term paper on

“ARTIFICIAL NEURAL NETWORKS”

In partial fulfillment of the degree of

Bachelor of Technology

In

Computer Science & Engineering

Submitted By:

SNEHASIS MAJUMDAR

Enroll no: A20405216001

Amity School of Engineering & Technology


AMITY UNIVERSITY RAJASTHAN
ACKNOWLEDGEMENT

First of all, I am grateful to the Almighty for granting me the health, vigour, energy,
ability to contemplate, and motivation to make this project work a success. I am deeply
indebted to my project guide Dr. Deepak Panwar, Department of Computer Science
Engineering, Amity School of Engineering & Technology, Amity University Rajasthan,
whose experience, stimulating suggestions, and encouragement helped me throughout
the research and the preparation of this project.

I take this opportunity to express my deepest gratitude to Ms. Pooja Parnami,
Coordinator CSE, Amity School of Engineering & Technology, Amity University Rajasthan.
I would like to thank Dr. Deepak Panwar, Department of CSE, for his valuable cooperation
and suggestions in this work.

I owe my deepest gratitude to Dr. DD Shukla, Director, Amity School of Engineering
& Technology, Amity University Rajasthan, for their motivation, encouragement, and support.

Finally, I owe my deepest gratitude to my family for their understanding, encouragement,
and love throughout this work, without which the successful completion of this project
would not have been possible.

DECLARATION

I do hereby declare that the project entitled “ARTIFICIAL NEURAL NETWORKS”
is an authentic work developed under the guidance of Dr. Deepak Panwar and submitted for
evaluation as project work for the degree of Bachelor of Technology (Computer Science &
Engineering) at Amity School of Engineering & Technology, Amity University Rajasthan.

I also declare that the contents incorporated in this report have not been
submitted in any form for the award of any degree or diploma at any other institution or
university.

_________________ _________________
Dr. Deepak Panwar Snehasis Majumdar

ASET, AUR A20405216001

ABSTRACT

Artificial neural networks, commonly referred to as neural networks, are information- or
signal-processing mathematical models based on the biological neuron.

A neural network is a complex structure consisting of a group of interconnected neurons,
which offers a very exciting alternative for complex problem solving and other
applications that can play an important role in today’s computer science field. Researchers
from different disciplines are therefore designing artificial neural networks to solve the
problems of pattern recognition, prediction, optimization, associative memory, and control.

In this paper we present a basic study of the artificial neural network, its
characteristics, and its applications.

CONTENTS
Acknowledgement
Declaration
Abstract

1. Introduction

2. Artificial Neural Networks


2.1 Background
2.2 Working

3. Development of ANN
3.1 McCulloch-Pitts Neuron
3.2 The perceptron
3.3 Neural network topology
3.4 Classical ANN models
3.5 Back Propagation

4. Biologically Plausible Connectionist System


4.1 Intraneuron signalling
4.2 Interneuron signalling
4.3 A biologically plausible ANN model proposal

5. ANN Models
5.1 Network Functions

6. Characteristics of Neural Network


6.1 The Network Structure
6.2 Ability of Parallel Processing
6.3 Distributed Memory
6.4 Fault Tolerance Ability
6.5 Collective Solution
6.6 Learning Ability

7. Advantages of Neural Network

8. Limitations of Neural Network

9. Applications of Neural Network

10. Conclusion

References

1. INTRODUCTION
The study of the brain has been an interesting area of research for a long time. With
advancements in the field of electronics and computer science, it was assumed that this
natural thinking process of the brain could be used to design artificial intelligence systems.

The first step toward artificial intelligence came in 1943, when Warren
McCulloch, a neurophysiologist, and Walter Pitts, a mathematician, wrote a paper on how
neurons work. Mathematical analysis has solved some of the mysteries posed by the new
models but has left many questions for future investigation.

Artificial Neural Networks (ANNs) are based on an abstract and simplified view of the
neuron. Artificial neurons are connected and arranged in layers to form large networks, where
learning and connections determine the network function. Connections can be formed
through learning and do not need to be 'programmed'.

Needless to say, the study of neurons, their interconnections, and their role as the
brain’s elementary building blocks is one of the most dynamic and important research fields
in the modern world of electronics and computer science.

Recent ANN models lack many physiological properties of the neuron, because they are more
oriented to computational performance than to biological credibility.

2. ARTIFICIAL NEURAL NETWORKS

In electronics engineering and related fields, artificial neural networks (ANNs) are
mathematical or computational models, inspired by the human central nervous system
(in particular the brain), that are capable of machine learning as well as pattern recognition.
The more complex the nervous system being modelled, the more complex the problems a
system designed along these lines can be expected to solve. Artificial neural networks are
generally presented as systems of highly interconnected "neurons" that can compute values
from inputs.

A neural network is a web of interconnected neurons, which can number in the
millions. These interconnected neurons carry out all the parallel processing in the body,
and the human or animal body is the best example of parallel processing.
Currently, artificial neural networks are clusterings of primitive artificial neurons. This
clustering occurs by creating layers, which are then connected to one another. How these
layers connect is the other part of the "art" of engineering networks to resolve the complex
problems of the real world. Neural networks, with their strong ability to derive meaning
from complicated or imprecise data, can be used to extract patterns and detect trends that are
too complex to be noticed by either humans or other computer techniques.

Figure 1: Simple Neural Network

2.1 Background

The examination of the central nervous system of the human brain was the inspiration for
neural networks. In an artificial neural network, simple artificial nodes, known as "neurons",
"processing elements", or "units", are connected together to form a network that mimics a
biological neural network.

There is no single formal definition of an artificial neural network. However, a class of
statistical, mathematical, or computational models may commonly be called "neural
networks" if they:

 consist of sets of adaptive weights, i.e. numerical parameters that are tuned by a
learning algorithm, and
 are capable of approximating non-linear functions of their inputs.

The adaptive weights are conceptually connection strengths between neurons, which are
adjusted during training and used during prediction.

Figure 2: Human Neuron

Neural networks are similar to biological neural networks in that functions are performed
collectively and in parallel by the units, rather than there being a clear delineation of subtasks
to which various units are assigned. The term "neural network" usually refers to models
employed in statistics, cognitive psychology, and artificial intelligence. Neural network
models that emulate the central nervous system are part of theoretical neuroscience and
computational neuroscience.

2.2 Working of Neural Networks

The working of neural networks revolves around the myriad ways these individual neurons
can be clustered together. This clustering occurs in the human mind in such a way that
information can be processed in a dynamic, interactive, and self-organizing way.

Biologically, neural networks are constructed in a three-dimensional world from microscopic
components. These neurons seem capable of nearly unrestricted interconnections. That is not
true of any proposed, or existing, man-made network. Integrated circuits, using
current technology, are two-dimensional devices with a limited number of layers for
interconnection.

This physical reality restrains the types, and scope, of artificial neural networks that can be
implemented in silicon. As noted above, current neural networks are simple clusterings of
primitive artificial neurons, formed by creating layers that are then connected to one
another; how these layers connect is part of the "art" of engineering networks to resolve
real-world problems.

3. DEVELOPMENT OF ANN

We discuss the development of ANN models, starting from the McCulloch-Pitts neuron and
ending with the back-propagation model.

In biological neurons, the type of bifurcation determines the most fundamental
computational properties of the neuron, such as the class of excitability, the existence or
nonexistence of an activation threshold, all-or-none action potentials (spikes), sub-threshold
oscillations, bi-stability of rest and spiking states, and whether the neuron is an integrator
or a resonator.

3.1 McCulloch-Pitts neuron

The McCulloch-Pitts neuron (1943) was the first mathematical model of the neuron. Its properties:

• Neuron activity is an "all-or-none" process;

• A certain fixed number of synapses must be excited within a latent addition period in
order to excite a neuron, independently of previous activity and of the neuron's position;

• Synaptic delay is the only significant delay in the nervous system;

• The activity of any inhibitory synapse prevents the neuron from firing;

• Network structure does not change with time.

The McCulloch-Pitts neuron represents a simplified mathematical model of the neuron,
where xi is the i-th binary input and wi is the synaptic (connection) weight associated with
the input xi. The computation occurs in the soma (cell body). For a neuron with p inputs:

a = Σ_{i=0}^{p} xi wi

with x0 = 1 and w0 = β = −θ, where β is the bias and θ is the activation threshold. See
figures 1 and 2. There are p binary inputs in the schema of figure 2: xi is the i-th input, and
wi is the connection (synaptic) weight associated with input i. The synaptic weights are real
numbers, because synapses can inhibit (negative signal) or excite (positive signal) and can
have different intensities. The weighted inputs (xi × wi) are summed in the cell body,
providing a signal a. After that, the signal a is passed through an activation function (f),
giving the neuron's output.

The activation function can be:

 a hard limiter,
 threshold logic, or
 a sigmoid, which is considered the most biologically plausible activation
function.
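To make the computation concrete, here is a minimal Python sketch of the McCulloch-Pitts
neuron described above, using the hard limiter as the activation function; the example
weights and threshold (an AND-like neuron) are hypothetical.

import numpy as np

def hard_limiter(a):
    # All-or-none output: fire (1) only when the summed signal is
    # non-negative, i.e. when the weighted inputs reach the threshold
    # folded into w0 = -theta.
    return 1 if a >= 0 else 0

def mcculloch_pitts(x, w, theta):
    # x: binary inputs x1..xp; w: synaptic weights w1..wp.
    # x0 = 1 with w0 = -theta implements the activation threshold.
    a = -theta + np.dot(x, w)      # a = sum_{i=0}^{p} xi * wi
    return hard_limiter(a)

# A 2-input neuron behaving like a logical AND (threshold 1.5):
print(mcculloch_pitts([1, 1], [1.0, 1.0], theta=1.5))   # -> 1
print(mcculloch_pitts([1, 0], [1.0, 1.0], theta=1.5))   # -> 0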

3.2 The perceptron

Rosenblatt’s perceptron takes a weighted sum of the neuron inputs and outputs 1 (a spike) if
this sum is greater than the activation threshold. It is a linear discriminator: given two points,
a straight line is able to discriminate between them. For some configurations of m points, a
straight line is able to separate them into two classes.

Figure 3: Set of linearly separable points. Figure 4: Set of non-linearly separable points.

The limitations of the perceptron are that it is a one-layer feed-forward network (non-
recurrent); it is only capable of learning solutions to linearly separable problems; and its
learning algorithm (the delta rule) does not work with networks of more than one layer.
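The following sketch, a minimal illustration rather than Rosenblatt's original procedure,
trains a perceptron with the delta rule on a hypothetical linearly separable data set (logical
OR); the learning rate and number of epochs are illustrative choices.

import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    w = np.zeros(X.shape[1])                 # synaptic weights
    b = 0.0                                  # bias (threshold term)
    for _ in range(epochs):
        for xi, target in zip(X, y):
            out = 1 if np.dot(xi, w) + b > 0 else 0   # spike above threshold
            err = target - out
            w += lr * err * xi               # delta rule: move weights
            b += lr * err                    # along the output error
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])                   # OR is linearly separable,
w, b = train_perceptron(X, y)                # so training converges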

3.3 Neural network topology

In the cerebral cortex, neurons are arranged in columns, and most synapses occur between
different columns (see the famous drawings by Ramón y Cajal). In the extremely simplified
mathematical model, neurons are arranged in layers (representing columns), and there is
communication between neurons in different layers.

Figure 5: Drawing of Neurons in cerebellum

There are A + 1 input units, B + 1 hidden units, and C output units. w1 and w2 are the
synaptic weight matrices between input and hidden layers and between hidden and output
layers, respectively. The “extra” neurons in input and hidden layers, labelled 1, represent the
presence of bias: the ability of the network to fire even in the absence of input signal.
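A minimal sketch of this layered topology follows, with small hypothetical values for A, B,
and C; the always-on unit appended to the input and hidden layers plays the role of the
"extra" bias neuron labelled 1.

import numpy as np

A, B, C = 4, 3, 2
rng = np.random.default_rng(0)
w1 = rng.normal(size=(B, A + 1))   # input -> hidden weights (bias column included)
w2 = rng.normal(size=(C, B + 1))   # hidden -> output weights (bias column included)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def forward(x):
    x = np.append(x, 1.0)          # bias unit in the input layer, always 1
    h = sigmoid(w1 @ x)            # hidden-layer activations
    h = np.append(h, 1.0)          # bias unit in the hidden layer, always 1
    return sigmoid(w2 @ h)         # output-layer activations

# Thanks to the bias units, the network can fire even with no input signal.
print(forward(np.zeros(A)))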

3.4 Classical ANN models

Classical artificial neural network models are based upon a simple description of the neuron,
taking into account the presence of presynaptic cells and their synaptic potentials, the
activation threshold, and the propagation of an action potential. As such, they represent
an impoverished explanation of human brain characteristics.

As advantages, we may say that ANNs are naturally parallel solutions, robust, and fault
tolerant; they allow the integration of information from different sources or kinds; they are
adaptive systems, that is, capable of learning; they show a certain degree of autonomy in
learning; and they display very fast recognition performance.

Figure 6: A 3-layer neural network

There are also many limitations of ANNs. Among them: it is still very hard to explain their
behaviour, because of their lack of transparency; their solutions do not scale well; they are
computationally expensive for big problems; and they are still very far from biological reality.

ANNs do not focus on real neuron details. The conductivity delays are neglected. The output
signal is either discrete (e.g., 0 or 1) or a real number (e.g., between 0 and 1). The network
input is calculated as the weighted sum of the input signals, and it is transformed into an
output signal via a simple function (e.g., a threshold function).

Andy Clark proposes three types of connectionism: (1) the first generation, consisting of the
perceptron and the cybernetics of the 1950s, which are simple neural structures of limited
application; (2) the second generation, which deals with complex dynamics using recurrent
networks in order to handle spatio-temporal events; and (3) the third generation, which takes
into account more complex dynamic and time properties. For the first time, these third-
generation systems use biologically inspired modular architectures and algorithms.

We may add a fourth type: a network which considers populations of neurons instead of
individual ones and the existence of chaotic oscillations, perceived by electroencephalogram
(EEG) analysis. The K-models are examples of this category.

Table 1: Differences between the von Neumann computer and biological neural systems

3.5 Back-propagation

Back-propagation (BP) is a supervised algorithm for multilayer networks. It applies the
generalized delta rule, requiring two passes of computation: (1) activation propagation (the
forward pass), and (2) error back-propagation (the backward pass). Back-propagation works
in the following way: it propagates the activation from the input to the hidden layer, and
from the hidden to the output layer; it calculates the error for the output units, and then
back-propagates the error to the hidden units and then to the input units.

BP has universal approximation power: given a continuous function, there is a two-layer
network (one hidden layer) that can be trained by back-propagation to approximate that
function as closely as desired. It is also the most widely used algorithm.

Although back-propagation is the best-known and most widely used connectionist training
algorithm, it is computationally expensive (slow), it does not satisfactorily solve large
problems, and sometimes the solution it finds is a local minimum: a locally minimal value
of the error function.

BP is based on error back-propagation: while the stimulus propagates forward, the error
(the difference between the actual and the desired outputs) propagates backward. In the
cerebral cortex, the stimulus generated when a neuron fires crosses the axon towards its end
in order to make a synapse onto another neuron's input. Suppose that BP occurred in the
brain; in this case, the error would have to propagate back from the dendrite of the
postsynaptic neuron to the axon and then to the dendrite of the presynaptic neuron. This
sounds unrealistic and improbable. Synaptic "weights" have to be modified in order to make
learning possible, but certainly not in the way BP does it. Weight changes must use only
local information in the synapse where they occur. That is why BP seems to be so
biologically implausible.
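As an illustration (not code from the paper), here is a minimal sketch of one BP step for a
two-layer network, using the sigmoid activation and a squared-error measure; bias units are
omitted, and the shapes and learning rate are hypothetical.

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def bp_step(x, y, w1, w2, lr=0.5):
    # (1) forward pass: propagate activation input -> hidden -> output
    h = sigmoid(w1 @ x)
    o = sigmoid(w2 @ h)
    # (2) backward pass: compute the output error and back-propagate it
    delta_o = (o - y) * o * (1 - o)            # error terms, output units
    delta_h = (w2.T @ delta_o) * h * (1 - h)   # error terms, hidden units
    # generalized delta rule: update both weight matrices
    w2 -= lr * np.outer(delta_o, h)
    w1 -= lr * np.outer(delta_h, x)
    return w1, w2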

4. BIOLOGICALLY PLAUSIBLE CONNECTIONIST SYSTEM
In the expectation phase, when input x, representing the first word of a sentence through
semantic microfeatures, is presented to input layer a, these stimuli propagate to the hidden
layer b (bottom-up propagation). There is also a propagation of the previous actual
output op, which is initially empty, from output layer g back to the hidden layer b (top-down
propagation). Then, a hidden expectation activation (he) is generated for each and every one
of the B hidden units, based on the inputs and the previous output stimuli op (the sum of the
bottom-up and top-down propagations, passed through the sigmoid logistic activation
function s). Then, these hidden signals propagate to the output layer g (step 4), and an actual
output o is obtained for each and every one of the C output units, through the propagation of
the hidden expectation activation to the output layer. Here w^h_ij are the connection
(synaptic) weights between input (i) and hidden (j) units, and w^o_jk are the connection
(synaptic) weights between hidden (j) and output (k) units.

Figure 7: The Expectation Phase Figure 8: The Outcome Phase

In the outcome phase, input x is presented to input layer a again, and propagates to
hidden layer b (bottom-up). After this, the expected output y (step 2) is presented to the
output layer and propagated back to the hidden layer b (top-down) (step 3), and a hidden
outcome activation (ho) is generated, based on the inputs and on the expected outputs. For
the other words, presented one at a time, the same procedure (expectation phase first, then
outcome phase) is repeated. Recall that the architecture is bidirectional, so it is possible for
stimuli to propagate either forward or backward.

In order to make learning possible, the synaptic weights are updated through the delta rule,
considering only the local information made available by the synapse. The learning rate h
used in the algorithm is considered an important variable during the experiments.
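A minimal sketch of a local update of this kind follows; it assumes that the expectation (he)
and outcome (ho) activations described above are available for each postsynaptic unit, so
that each weight changes using only quantities local to its synapse (the function name and
array shapes are hypothetical).

import numpy as np

def local_delta_update(w, pre, post_expectation, post_outcome, h=0.1):
    # w[j, i] is the weight of the synapse from presynaptic unit i to
    # postsynaptic unit j. The change uses only information local to
    # that synapse: the presynaptic activity and the difference between
    # the outcome and expectation activations of the postsynaptic unit
    # (a delta rule, with learning rate h as in the text).
    return w + h * np.outer(post_outcome - post_expectation, pre)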

4.1 Intraneuron signalling

The Spanish Nobel laureate neuroscientist Santiago Ramón y Cajal established, at the end of
the nineteenth century, two principles that revolutionized neuroscience: the principle of
connectional specificity, which states that "nerve cells do not communicate indiscriminately
with one another or form random networks," and the principle of dynamic polarization,
which says that "electric signals inside a nervous cell flow only in one direction: from neuron
reception (often the dendrites and cell body) to the axon trigger zone." Intraneuron signalling
is based on the principle of dynamic polarization. The signalling inside the neuron is
performed by four basic elements: receptive, trigger, signalling, and secretory. The receptive
element is responsible for input signals, and it is related to the dendritic region. The trigger
element is responsible for the neuron's activation threshold, and is related to the soma. The
signalling element is responsible for conducting and maintaining the signal, and is related to
the axon. And the secretory element is responsible for releasing the signal to another neuron,
so it is related to the presynaptic terminals of the biological neuron.

4.2 Interneuron signalling

Electrical and chemical synapses have completely different morphologies. At electrical
synapses, transmission occurs through gap-junction channels (special ion channels) located
in the pre- and postsynaptic cell membranes, and there is a cytoplasmic connection between
the cells. Part of the electric current injected into the presynaptic cell escapes through resting
channels, and the remaining current is driven into the postsynaptic cell through the gap-
junction channels. At chemical synapses, there is a synaptic cleft, a small cellular separation
between the cells. There are vesicles containing neurotransmitter molecules in the
presynaptic terminal, and when an action potential reaches these synaptic vesicles,
neurotransmitters are released into the synaptic cleft.

4.3. A biologically plausible ANN model proposal

We present here a proposal for a biologically plausible model based at the microscopic level.
This model is intended to present a mechanism for generating a biologically plausible ANN
model and for redesigning the classical framework to encompass the traditional features,
together with labels that model the binding affinities between transmitters and receptors. The
model departs from a classical connectionist model and is defined by a restricted data set,
which explains the ANN's behaviour. It also introduces the T, R, and C variables to account
for the binding affinities between neurons (unlike other models).

The following feature set defines the neurons:

N = {(w), θ, g, T, R, C}

where:

• w represents the connection weights,
• θ is the neuron activation threshold,
• g stands for the activation function,
• T symbolizes the transmitter,
• R the receptor, and
• C the controller.

θ, g, T, R, and C encode the genetic information, while T, R, and C are the labels absent in
other models. This proposal follows Ramón y Cajal's principle of connectional specificity,
which here states that each neuron is connected to another neuron not only in relation to {w},
θ, and g, but also in relation to T, R, and C: neuron i is only connected to neuron j if there is
binding affinity between the T of i and the R of j. Binding affinity means compatible types, a
sufficient amount of substrate, and compatible genes. The combination of T and R results in
C, and C can act on other neurons' connections.
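A minimal sketch of this feature set and the connection rule follows; the field types and the
exact compatibility test are assumptions, since the text specifies only that binding affinity
requires compatible types, enough substrate, and compatible genes.

from dataclasses import dataclass, field

@dataclass
class Neuron:
    w: list                  # (w): connection weights
    theta: float             # θ: activation threshold
    g: object                # g: activation function
    T: str                   # transmitter type
    R: str                   # receptor type
    C: str                   # controller, resulting from T and R
    genes: set = field(default_factory=set)
    substrate: int = 0       # remaining amount of substrate

def can_connect(pre: Neuron, post: Neuron) -> bool:
    # Connectional specificity: neuron i connects to neuron j only if
    # the transmitter of i has binding affinity with the receptor of j:
    # compatible types, enough substrate, and compatible genes.
    return (pre.T == post.R
            and pre.substrate > 0
            and bool(pre.genes & post.genes))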

The ordinary biological neuron presents many dendrites, usually branched, which receive
information from other neurons, and an axon, which transmits the processed information,
usually by propagation of an action potential. The axon is divided into several branches and
makes synapses onto the dendrites and cell bodies of other neurons. Chemical synapses are
predominant in the cerebral cortex, and the release of transmitter substance occurs in active
zones inside the presynaptic terminals. Certain chemical synapses lack active zones,
resulting in slower and more diffuse synaptic actions between cells. The combination of a
neurotransmitter and a receptor makes the postsynaptic cell release a protein.

4.3.1. The labels and their dynamic behaviour

In order to build the model, it is necessary to set the parameters for the connectionist
architecture. For the network genesis, the parameters are:

• Number of layers;
• Number of neurons in each layer;
• Initial amount of substrate (transmitters and receptors) in each layer; and
• Genetics of each layer:
  • type of transmitter and its degree of affinity,
  • type of receptor and its degree of affinity, and
  • genes (name and gene expression).

For the evaluation of controllers and how they act, the parameters specify what controllers
can modify:

• the degree of affinity of receptors;
• the initial substrate storage; and
• the gene expression value (mutation).

The specifications stated above lead to an ANN with some distinctive characteristics: (1)
each neuron has a genetic code, which is a set of genes plus a gene expression controller; (2)
the controller can cause mutation, because it can regulate gene expression; and (3) the
substrate (the amount of transmitter and receptor) is defined per layer, but it is limited, so
some postsynaptic neurons are not activated; this way, the network favours clustering.

Also, the increase in substrate is related to the gene specified in the controller, because the
synthesis of a new transmitter occurs in the presynaptic terminal (the origin gene). The
modification of the genetic code, that is, mutation, as well as the modification of the degree
of affinity of receptors, is however related to the target gene. The reason is that the
modulation function of the controller is better explained at some distance from the emission
of the neurotransmitter, therefore at the target.

4.3.2. A network simulation

In Table 2, a data set for a five-layer network simulation is presented. For the sake of
simplicity, all degrees of affinity are set at 1 (the degree of affinity is represented by a real
number in the range [0..1]; the greater the degree of affinity, the stronger the synaptic
connection).

Table 2: Network simulation data from which the 5-layer neuron network is formed

In Figure 9, one can notice that every unit in layer 1 (the input layer) is linked to the first nine
units in layer 2 (the first hidden layer). The reason why not every unit in layer 2 is connected
to layer 1, although the receptor of layer 2 has the same type as the transmitter of layer 1, is
that the amount of substrate in layer 1 is eight units. This means that, in principle, each
layer-1 unit is able to connect to at most eight units. But controller 1, from layer 1 to layer 2,
incremented the amount of substrate of the origin layer (layer 1) by 1. The result is that each
layer-1 unit can link to nine units in layer 2. Observe that from layer 2 to layer 3 (the second
hidden layer) only four layer-2 units are connected to layer 3, also because of the amount of
substrate of layer 3, which is 4.

As a result of the compatibility of the layer-2 transmitter and the layer-5 receptor, and the
existence of remaining unused substrate in layer 2, one could expect the first two units in
layer 2 to connect to the only unit in layer 5 (the output unit).

Figure 9: The 5-layer neuron network formed from the data in Table 2

However, this does not occur, because their genes are not compatible. Although gene
compatibility exists, in principle, between layers 1 and 4, their units do not connect to each
other, because there is no remaining substrate in layer 1 and because controller 1 between
layers 1 and 4 modified the gene expression of layer 4, making them incompatible. The
remaining controller has the effect of modifying the degrees of affinity of the receptors in
layer 3 (the target). Consequently, the connections between layers 2 and 3 became weakened
(represented by dotted lines). Notice that, in order to allow connections, in addition to the
existence of a sufficient amount of substrate, the genes and the types of transmitters and
receptors of each layer must be compatible.

5. ANN MODELS
Neural network models in artificial intelligence are essentially simple mathematical models
defining a function f : X → Y, or a distribution over X, or over both X and Y; sometimes,
however, models are also intimately associated with a particular learning algorithm or
learning rule.

A common use of the phrase "ANN model" really means the definition of a class of such
functions (where members of the class are obtained by varying parameters, connection
weights, or specifics of the architecture such as the number of neurons or their connectivity).

5.1 Network function

The word "network" in the term "artificial neural network" refers to the interconnections
between the neurons in the different layers of each system. An example system has three
layers. The first layer has input neurons, which send data via synapses to the second layer of
neurons, and then via more synapses to the third layer of output neurons. More complex
systems have more layers of neurons, some having increased numbers of input and output
neurons. The synapses store parameters called "weights" that manipulate the data in the
calculations. An ANN is typically defined by three types of parameters, illustrated in the
sketch after this list:

a) The interconnection pattern between the different layers of neurons.

b) The learning process for updating the weights of the interconnections.

c) The activation function that converts a neuron's weighted input to its output
activation.
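The sketch below illustrates these three kinds of parameters, assuming a small, fully
connected three-layer architecture; the class name, layer sizes, and learning rate are
hypothetical, and the computation of the gradients themselves is left out.

import numpy as np

class ThreeLayerANN:
    def __init__(self, sizes=(3, 4, 2), activation=np.tanh):
        # (a) interconnection pattern: fully connected, layer to layer
        rng = np.random.default_rng(42)
        self.w = [rng.normal(size=(m, n)) for n, m in zip(sizes, sizes[1:])]
        # (c) activation function converting weighted input to output
        self.g = activation

    def predict(self, x):
        # data flows input -> hidden -> output; the stored "weights"
        # manipulate the data in the calculations
        for w in self.w:
            x = self.g(w @ x)
        return x

    def update_weights(self, grads, lr=0.01):
        # (b) learning process: adjust each weight matrix, here by a
        # plain gradient step (gradient computation is out of scope)
        self.w = [w - lr * g for w, g in zip(self.w, grads)]

net = ThreeLayerANN()
print(net.predict(np.ones(3)))   # two values from the output layer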

Figure 10: Nonlinear model of Neuron

Figure 11: Multilayer Artificial Network

6. CHARACTERISTICS OF NEURAL NETWORK
Conventional computers are good at calculations: they take inputs, process them, and give
results according to particular algorithms programmed into their software. An ANN, by
contrast, uses its own rules: the more decisions it makes, the better its decisions may become.

The characteristics discussed here are those that should be present in intelligent systems
such as robots and other artificial intelligence applications.

There are six basic and important characteristics of an artificial neural network, shown in
the following diagram:

[Diagram: the six characteristics arranged around the label "Characteristics of Artificial
Neural Network": network structure, parallel processing, collective solution, learning
ability, distributed memory, and fault tolerance.]

Figure 12: Characteristics of ANN

6.1 The Network Structure

The network structure of an ANN should be simple and easy. There are basically two types
of structure: recurrent and non-recurrent.

The recurrent structure is also known as an auto-associative or feedback network, and the
non-recurrent structure is also known as an associative or feed-forward network.

In a feed-forward network, the signal travels in one direction only, but in a feedback
network the signal can travel in both directions, because loops are introduced into the
network, as shown in the figures below:

Figure 13: Feed Forward Network

Figure 14: Feedback Network
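A minimal sketch of the two structures follows, with hypothetical sizes and weights: in the
feed-forward pass the signal moves one way only, while the feedback step loops the previous
output back into the hidden layer.

import numpy as np

rng = np.random.default_rng(1)
W_in  = rng.normal(size=(3, 2))    # input -> hidden
W_out = rng.normal(size=(1, 3))    # hidden -> output
W_fb  = rng.normal(size=(3, 1))    # output -> hidden (the feedback loop)

def feed_forward(x):
    # signal travels in one direction only: input -> hidden -> output
    return np.tanh(W_out @ np.tanh(W_in @ x))

def feedback_step(x, prev_out):
    # the previous output re-enters the hidden layer through a loop
    h = np.tanh(W_in @ x + W_fb @ prev_out)
    return np.tanh(W_out @ h)

out = np.zeros(1)
for x in [np.array([0.5, -0.2]), np.array([0.1, 0.9])]:
    out = feedback_step(x, out)    # state carried across time steps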

6.2 Ability of Parallel Processing

The ANN brings the concept of parallel processing to the computer field. The parallel
processing done by the human body in its neurons is very complex, but by applying basic
and simple parallel processing techniques, such as matrices and matrix calculations, we can
implement it in an ANN.

6.3 Distributed Memory

An ANN is a very vast system, so a single-unit or centralized memory cannot fulfil the needs
of the ANN system. We therefore store information in the weight matrices, which form a
long-term memory, because information is stored as patterns throughout the network
structure.

6.4 Fault Tolerance Ability

An ANN is a very complex system, so it is necessary that it be fault tolerant. If any one part
fails, it does not affect the system very much; only if all parts fail at the same time does the
system fail completely.

6.5 Collective Solution

An ANN is an interconnected system whose output is the collective output for various
inputs: the result is the combination of all the outputs that emerge after processing the
various inputs. A partial answer is worthless to any user of an ANN system.

6.6 Learning Ability

In an ANN, most of the learning rules are used to develop models of processes, while
adapting the network to the changing environment and discovering useful knowledge. The
learning methods are supervised, unsupervised, and reinforcement learning.

7. ADVANTAGES OF NEURAL NETWORKS


Neural networks have many advantages; here we discuss some of the most important ones:

 Adaptive learning: A neural network has the ability to learn how to do things.

 Self-Organisation: A neural network or ANN can create its own representation of the
information it receives during learning.

 Real Time Operation: In a neural network or ANN, computations can be carried out in
parallel.

 Pattern recognition is a powerful technique for data security. Neural networks
learn to recognize the patterns that exist in a data set.

 The system is developed through learning rather than programming. Neural networks
teach themselves the patterns in the data, freeing the analyst for more interesting work.

 Neural networks are flexible in a changing environment. Although they may take some
time to learn from a sudden drastic change, they are excellent at adapting to constantly
changing information.

 Neural networks can build informative models where conventional approaches fail.
Because they can handle very complex interactions, they can easily model data that is too
difficult to model with traditional approaches such as inferential statistics or programming
logic.

 The performance of neural networks is very good, and better than conventional
approaches on most problems. Neural networks can build models that capture the complex
structure of the data in significantly less time.

8. LIMITATIONS OF NEURAL NETWORK


Everything in this world has merits and demerits, and the neural network system is no
exception. The limitations of ANNs are:

 ANNs or neural networks are not daily-life problem solvers.

 There is no structured methodology available.

 There is no single standardized paradigm for neural network development.

 The output quality of an ANN can be unpredictable.

 Many ANN systems do not describe how they solve problems.

 The nature of an ANN is like a black box.

9. APPLICATION OF NEURAL NETWORKS
The real-time applications of artificial neural networks include:

 Functional approximation, including time series prediction and modelling.

 Call control: answering an incoming call (speaker on) with a swipe of the hand while
driving.

 Classification, including pattern and sequence recognition, pattern detection, and
sequential decision making.

 Skip tracks or control volume on your media player using simple hand motions.

 Data processing, including filtering, clustering, blind signal separation, and
compression.

 Scrolling web pages, or through an eBook, with simple left and right hand gestures;
this is ideal when touching the device is a barrier, for example when hands are wet,
gloved, or dirty.

 Application areas of ANNs include system identification and control (vehicle control,
process control), game playing and decision making (chess, racing), pattern recognition
(radar systems, face identification, object recognition, etc.), sequence recognition
(gesture, speech, handwritten text recognition), medical diagnosis, financial applications,
and data mining (or knowledge discovery in databases, "KDD").

 Another interesting use arises when using the Smartphone as a media hub: a user can dock
the device to the TV and watch content from the device while controlling the content
in a touch-free manner from afar.

 If one's hands are wet or dirty, or a person hates smudges, touch-free controls are a
benefit.

10. CONCLUSION
In this paper we discussed the artificial neural network: the working of neural networks, the
characteristics of ANNs, and their advantages, limitations, and applications. There are
various advantages of ANNs over conventional approaches. Depending on the nature of the
application and the strength of the internal data patterns, one can generally expect a network
to train quite well. This applies to problems where the relationships may be quite dynamic or
non-linear. By studying artificial neural networks, we conclude that as technology advances,
the need for artificial intelligence also increases, largely because of parallel processing:
with parallel processing we can perform more than one task at a time. Parallel processing is
needed at the present time because it saves time and money in tasks related to electronics,
computers, and robotics. As future work, more algorithms and programs must be developed
to remove the limitations of the artificial neural network and make it more and more useful
for various kinds of applications. If the artificial neural network concept is combined with
computational automata, FPGAs, and fuzzy logic, some of the limitations of neural network
technology will certainly be solved.

REFERENCES

 Haykin, S., "Neural Networks: A Comprehensive Foundation", 2nd edition, Pearson
Education, 1999.

 Sonali B. Maind et al., "Research Paper on Basic of Artificial Neural Network",
International Journal on Recent and Innovation Trends in Computing and
Communication, Vol. 2, Issue 1, January 2014.

 Vidushi et al., International Journal of Advanced Research in Computer Science and
Software Engineering, 2 (10), October 2012, pp. 278-284.

 About feedback networks, from the website http://www.idsia.ch/~juergen/rnn.html

 Sucharita Gopal, "Artificial Neural Networks for Spatial Data Analysis", Boston, 1988.

 Eldon Y. Li, "Artificial Neural Networks and their Business Applications", Taiwan, 1994.

 FLEXChip Signal Processor (MC68175/D), Motorola, 1996.

 Christos Stergiou and Dimitrios Siganos, "Neural Networks".

 About neural networks, from the website http://en.wikipedia.org/wiki/Neural_network

 Girish Kumar Jha, "Artificial Neural Network and its Applications", IARI, New Delhi.

 Image of a neuron, from the website http://transductions.net/2010/02/04/313/neurons/

 Ugur Halici, "Artificial Neural Networks", Chapter 1, Ankara.

 Ajith Abraham, "Artificial Neural Networks", Stillwater, OK, USA, 2005.

 Lippmann, R.P., "An Introduction to Computing with Neural Nets", IEEE ASSP
Magazine, April 1987, pp. 4-22.

 Carlos Gershenson, "Artificial Neural Networks for Beginners", United Kingdom.
