Int. Journal of Engineering Research and Applications (IJERA), ISSN: 2248-9622, Vol. 5, Issue 1 (Part 5), January 2015, pp. 115-119, www.ijera.com
Review: “Implementation of Feedforward and Feedback Neural
Network for Signal Processing Using Analog VLSI Technology”
Miss. Rachana R. Patil*, Mr. Dinkar L. Bhombe**
*(PG Student, Department of Electronics & Telecommunication, SSGMCE Shegaon, Sant Gadge Baba
Amravati University, Amravati, India)
** (Department of Electronics & Telecommunication, SSGMCE Shegaon, Sant Gadge Baba Amravati
University, Amravati, India)
ABSTRACT
The main focus of this project is the implementation of a Neural Network Architecture (NNA) with on-chip learning in Analog VLSI Technology for signal processing applications. In the proposed work, analog components such as the Gilbert Cell Multiplier (GCM) and the Neuron Activation Function (NAF) circuit are used to implement the artificial NNA. The analog components comprise a multiplier, an adder and a tan-sigmoid function circuit built from MOS transistors. The neural architecture is trained using the Back Propagation (BP) algorithm in the analog domain with new techniques of weight storage. Layout design and verification are carried out using the VLSI backend tool Microwind 3.1, and the layout is designed in 32 nm CMOS technology.
Keywords – Analog VLSI Technology, Back Propagation Algorithm, Gilbert Cell Multiplier, Neural Network
Architecture, Neuron Activation Function
I. Introduction
Artificial neural networks are used to derive meaning from complex and imprecise data, and they are also used in signal processing applications. The main focus here is therefore the implementation of feedforward and feedback neural networks using an advanced 32 nm CMOS technology and the VLSI backend tool Microwind 3.1. This should be more useful than previous technologies, since the execution time, area and power requirement of the circuit are expected to be reduced and the efficiency of the system to increase with the use of 32 nm CMOS technology.
1.1 Neural Network
A neuron is a simple processing unit that has an associated weight for each input, used to strengthen that input, and produces an output. The neuron adds together all of its weighted inputs and calculates an output to be passed on. The neural architecture considered here is a feedforward network trained using the back propagation algorithm. The designed neuron is suitable for both analog and digital applications. The proposed neural architecture is capable of performing operations such as sine-wave learning, amplification and frequency multiplication, and can also be used for other analog signal processing tasks.
The neuron of Fig. 1 can be expressed mathematically as
a = f(P1W1 + P2W2 + P3W3 + Bias)
Fig.1 Neural Network
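As a quick illustration only (not part of the original design), the single-neuron expression above can be sketched in Python, assuming a tanh activation since the NAF used later is a tan-sigmoid; the input and weight values are arbitrary:

import numpy as np

def neuron(p, w, bias):
    # Weighted sum of the inputs followed by a tan-sigmoid activation.
    return np.tanh(np.dot(p, w) + bias)

# Three inputs, matching a = f(P1W1 + P2W2 + P3W3 + Bias).
a = neuron(p=[0.5, -0.2, 0.8], w=[0.1, 0.4, -0.3], bias=0.05)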
When single-layer neurons are connected with each other they form a multilayer network, as shown in Fig. 2. Weights w11 to w16 connect the inputs v1 and v2 to the neurons in the hidden layer, and weights w21 to w23 transfer the outputs of the hidden layer to the output layer. The final output is a21.
The inputs v1 and v2 shown in Fig. 2 are multiplied by the weight matrix, the resulting products are summed, and the sum is passed through an NAF. The output of the activation function is then passed to the next layer for further processing. The blocks to be used are therefore a multiplier block, adders, and an NAF block with its derivative.
Fig.2 Layered structure of Neural Network
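A behavioral sketch of the 2:3:1 network of Fig. 2 is given below; it is a software illustration only, with randomly chosen weights standing in for w11-w16 and w21-w23, not the analog design itself:

import numpy as np

rng = np.random.default_rng(0)
v = np.array([0.3, -0.6])               # inputs v1, v2
W1 = rng.normal(0.0, 0.5, size=(2, 3))  # weights w11..w16: input layer to hidden layer
W2 = rng.normal(0.0, 0.5, size=(3, 1))  # weights w21..w23: hidden layer to output layer

h = np.tanh(v @ W1)                     # hidden-layer outputs
a21 = np.tanh(h @ W2)                   # final output a21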
II. Design Architecture of Basic
Components
2.1 Multiplier Block and Adder Block
The Gilbert cell is used for both the multipliers and the adders; it is the only component here that acts as both. The output of the Gilbert cell is a current, so the node connecting the respective outputs of the cells itself acts as an adder. The schematic and layout are drawn with the VLSI backend tool Microwind 3.1, the layout is drawn in 32 nm CMOS technology, and the simulation results are also obtained with the same tool.
Fig.3 Schematic of Gilbert Cell
Fig.4 Layout of Gilbert Cell
Fig.5 Simulation Results Of Gilbert Cell
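For intuition only, a common textbook idealization of the Gilbert cell multiplier (strictly derived for bipolar pairs, but qualitatively similar for the MOS version designed here) models the differential output current as a product of tanh-compressed input voltages; because the outputs are currents, tying several outputs to one node performs the addition:

import numpy as np

VT = 0.026  # thermal voltage in volts (approximate, room temperature)

def gilbert_cell_current(vx, vy, i_tail=10e-6):
    # Idealized differential output current of a Gilbert cell multiplier.
    # For small inputs tanh(x) ~ x, so the output approximates i_tail*vx*vy/(4*VT**2).
    return i_tail * np.tanh(vx / (2 * VT)) * np.tanh(vy / (2 * VT))

# Current-mode addition: outputs wired to the same node simply sum.
i_sum = gilbert_cell_current(5e-3, 8e-3) + gilbert_cell_current(-3e-3, 4e-3)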
2.2 Neuron Activation Function (NAF)
The designed activation function is the tan-sigmoid. The proposed design is a differential amplifier modified for differential output; the same circuit is capable of producing both the output of the activation function and the derivative of the activation function.
Two designs are considered for the NAF:
1. Differential amplifier as NAF.
2. Modified differential amplifier as NAF with differentiation output.
The schematic and layout are drawn with the VLSI backend tool Microwind 3.1, the layout is drawn in 32 nm CMOS technology, and the simulation results are also obtained with the same tool.
Fig.6 Schematic of NAF
Fig.7 Layout of NAF
Fig. 8 Simulation Results of NAF
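As a purely mathematical reference, the ideal curves that the NAF circuit approximates (the exact circuit behavior is given by the simulation of Fig. 8) are the tan-sigmoid and its derivative:

import numpy as np

def naf(x):
    # Tan-sigmoid activation: the ideal curve of the NAF.
    return np.tanh(x)

def naf_derivative(x):
    # Derivative of tanh, 1 - tanh(x)**2: the second output of the modified differential amplifier.
    return 1.0 - np.tanh(x) ** 2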
III. Implementation of Neural
Architecture using Analog Blocks
Fig.9 Realisation of Neural Architecture using
Analog Blocks
Fig. 9 shows exactly how the neural architecture of Fig. 2 is implemented using analog components. The input layer provides the inputs to the 2:3:1 network. The hidden layer is connected to the input layer through the first-layer weights, named w1i, and the output layer is connected to the hidden layer through the weights w2j. The signal op is the output of the 2:3:1 network.
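To show how the analog blocks compose, the sketch below builds the 2:3:1 network out of a multiplier block, adders and an NAF block; the gcm and naf functions are behavioral stand-ins with illustrative weight values, not the circuits of Figs. 3-8:

import numpy as np

def gcm(a, b):
    # Multiplier block (behavioral stand-in for the Gilbert cell multiplier).
    return a * b

def naf(x):
    # Neuron activation function block (tan-sigmoid).
    return np.tanh(x)

v = [0.3, -0.6]                  # inputs v1, v2
w1 = [[0.4, -0.1, 0.2],          # first-layer weights w1i (illustrative values)
      [0.3, 0.5, -0.4]]
w2 = [0.2, -0.3, 0.1]            # second-layer weights w2j

# The adders are simply the summing nodes joining the multiplier output currents.
hidden = [naf(sum(gcm(v[i], w1[i][j]) for i in range(2))) for j in range(3)]
op = naf(sum(gcm(hidden[j], w2[j]) for j in range(3)))   # output of the 2:3:1 network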
IV. Expected Layout of Feedforward
Neural Network
Fig.10 Expected Layout of Feedforward Neural
Network
V. Expected Layout of Feedback Neural
Network
Fig.11 Expected Layout of Feedback Neural
Network
The layouts in Fig. 10 and Fig. 11 are taken from reference paper [2], where they were drawn using the Tanner EDA 14.1 tool. In this project the VLSI backend tool Microwind 3.1 and 32 nm CMOS technology will be used to draw the layouts of both the feedforward and the feedback neural networks.
VI. Expected Combined Architecture
Fig.12 Expected combined architecture
VII. Expected Combined Layout
Fig.13 Expected combined layout
The layout in Fig. 13 is likewise taken from reference paper [2], where it was drawn using the Tanner EDA 14.1 tool. In this project the VLSI backend tool Microwind 3.1 with 32 nm CMOS technology will be used to draw the combined layout of the feedforward and feedback neural networks.
VIII. Back Propagation Algorithm
Backpropagation is the most common method of training an artificial neural network so as to minimize an objective function. It is a generalization of the delta rule and is mainly used for feedforward networks. Backpropagation is best understood by dividing it into two phases: propagation and weight update.
8.1 Propagation
a) Forward propagation of a training pattern's input through the neural network in order to generate the propagation's output activations.
b) Backward propagation of the propagation's output activations through the neural network, using the training pattern's target, in order to generate the deltas of all output and hidden neurons.
8.2 Weight Update
a) For each weight synapse, multiply its output delta by its input activation to obtain the gradient of the weight.
b) Bring the weight in the direction opposite to the gradient by subtracting a fraction of the gradient from the weight.
This fraction, called the learning rate, influences the speed and the quality of learning. The sign of the gradient of a weight indicates where the error is increasing, which is why the weight must be updated in the opposite direction. These two phases are repeated until the performance of the network is satisfactory.
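A minimal sketch of this update rule for a single synapse, with purely illustrative values (eta is the learning rate):

eta = 0.1                # learning rate (illustrative value)
weight = 0.4             # an arbitrary synaptic weight
output_delta = 0.05      # delta back-propagated to this synapse's output
input_activation = 0.9   # activation feeding this synapse

gradient = output_delta * input_activation
weight = weight - eta * gradient   # step against the gradient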
Fig.14 Notation for three-layered network
There are (n + 1) × k weights between the input sites and the hidden units, and (k + 1) × m between the hidden and the output units. Let W̄1 denote the (n + 1) × k matrix with component w_ij^(1) at the i-th row and the j-th column, and similarly let W̄2 denote the (k + 1) × m matrix with components w_ij^(2). The overlined notation emphasizes that the last row of both matrices corresponds to the biases of the computing units; the matrix of weights without this last row will be needed in the back propagation step. The n-dimensional input vector o = (o1, . . . , on) is extended, transforming it to ô = (o1, . . . , on, 1). The excitation net_j of the j-th hidden unit is then given by
net_j = Σ_{i=1..n+1} w_ij^(1) ô_i.
The activation function is a sigmoid, so the output o_j^(1) of this unit is
o_j^(1) = s(net_j) = 1 / (1 + e^(−net_j)).
After choosing the weights of the network randomly, the back propagation algorithm is used to compute the necessary corrections. The algorithm can be decomposed into the following four steps:
(i) Feed-forward computation
(ii) Back propagation to the output layer
(iii) Back propagation to the hidden layer
(iv) Weight updates
The algorithm is stopped when the value of the
error function has become sufficiently small.
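To make these four steps concrete, the following NumPy sketch trains the 2:3:1 network of Fig. 2 on a small sine-wave learning task. It is a software illustration of the algorithm only, with arbitrary data and hyperparameters, and does not model the analog weight-storage scheme proposed in the paper:

import numpy as np

rng = np.random.default_rng(0)

# 2:3:1 network with tanh activations; the last row of each matrix holds the bias (as in Fig. 14).
W1 = rng.normal(0.0, 0.5, size=(3, 3))   # (n + 1) x k = 3 x 3
W2 = rng.normal(0.0, 0.5, size=(4, 1))   # (k + 1) x m = 4 x 1
eta = 0.05                               # learning rate

# Toy sine-wave learning task (illustrative data only).
X = rng.uniform(-1.0, 1.0, size=(200, 2))
T = np.sin(X[:, :1] + X[:, 1:])

for epoch in range(2000):
    # (i) Feed-forward computation (inputs extended with a constant 1 for the bias row).
    o_hat = np.hstack([X, np.ones((X.shape[0], 1))])
    o1 = np.tanh(o_hat @ W1)
    o1_hat = np.hstack([o1, np.ones((o1.shape[0], 1))])
    o2 = np.tanh(o1_hat @ W2)

    # (ii) Back propagation to the output layer.
    delta2 = (o2 - T) * (1.0 - o2 ** 2)

    # (iii) Back propagation to the hidden layer (the bias row of W2 is dropped here).
    delta1 = (delta2 @ W2[:-1].T) * (1.0 - o1 ** 2)

    # (iv) Weight updates: gradient = input activation times output delta.
    W2 -= eta * o1_hat.T @ delta2 / len(X)
    W1 -= eta * o_hat.T @ delta1 / len(X)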
IX. Expected Results
As shown in Fig. 12, the analog inputs v1 and v2 should be recovered exactly at the last stage of the proposed system. After successful implementation, the architecture can also be used for signal processing applications. The developed ANN should reduce the power consumption and area of the circuit and should increase its efficiency.
X. CONCLUSION
A neural network has a remarkable ability to derive meaning from complicated or imprecise data and can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. Owing to its adaptive learning, self-organization, real-time operation and fault tolerance via redundant information coding, it can be used for modeling and diagnosing the cardiovascular system and in electronic noses, which have several potential applications in telemedicine. Another application developed was the "Instant Physician", which suggests the best diagnosis and treatment.
REFERENCES
Journal Papers:
[1] Pashanki B. Malwankar and Pritesh R. Gumble, Design Methodology of Neural Network for Signal Processing, International Journal of Innovative Research in Electrical, Electronics, Instrumentation and Control Engineering, Vol. 2, Issue 2, February 2014.
[2] Neeraj Chasta, Sarita Chouhan and Yogesh Kumar, Analog VLSI Implementation of Neural Network Architecture for Signal Processing, International Journal of VLSI Design & Communication Systems (VLSICS), Vol. 3, No. 2, April 2012.
[3] Anne-Johan Annema, Bram Nauta, Ronald van Langevelde and Hans Tuinhout, Analog Circuits in Ultra-Deep-Submicron CMOS, IEEE Journal of Solid-State Circuits, Vol. 40, No. 1, January 2005.
[4] Wai-Chi Fang, Bing J. Sheu, Oscal T.-C. Chen and Joongho Choi, A VLSI Neural Processor for Image Data Compression Using Self-Organization Networks, IEEE Transactions on Neural Networks, Vol. 3, No. 3, May 1992.
[5] M. Jabri, S. Pickard, P. Leong, G. Rigby, J. Jiang, B. Flower and P. Henderson, VLSI Implementation of Neural Networks with Application to Signal Processing, IEEE, 1991.
[6] L. D. Jackel, B. Boser, H. P. Graf, J. S. Denker, Y. Le Cun, D. Henderson, O. Matan, R. E. Howard and H. S. Baird (AT&T Bell Laboratories), VLSI Implementations of Electronic Neural Networks: An Example in Character Recognition, IEEE, 1990.
[7] Eric A. Vittoz, The Design of High-Performance Analog Circuits on Digital CMOS Chips, IEEE Journal of Solid-State Circuits, Vol. SC-20, No. 3, June 1985.
Proceedings Papers:
[8] Lanny L. Lewyn, Trond Ytterdal, Carsten Wulff and Kenneth Martin, Analog Circuit Design in Nanoscale CMOS Technologies, Proceedings of the IEEE, Vol. 97, No. 10, October 2009.
[9] R. Dogaru, A. T. Murgan, S. Ortmann and M. Glesner, A Modified RBF Neural Network for Efficient Current-Mode VLSI Implementation, Proceedings of MicroNeuro '96, IEEE, 1996.
[10] K. Venkata Ramanaiah and Cyril Prasanna Raj, VLSI Architecture for Neural Network Based Image Compression, Third International Conference on Emerging Trends in Engineering and Technology.