International Journal of Computer Science and Engineering Survey (IJCSES), Vol.12, No.1, February 2021
DOI: 10.5121/ijcses.2021.12101
Multilayer Backpropagation Neural Networks
for Implementation of Logic Gates
Abstract. An Artificial Neural Network (ANN) is a computational model composed of several processing elements (neurons) that work together to solve a specific problem. Like the human brain, it provides the ability to learn from experience without being explicitly programmed. This article presents an implementation of artificial neural networks for logic gates. First, a three-layer Artificial Neural Network is designed with 2 input neurons, 2 hidden neurons and 1 output neuron. The model is then trained using the backpropagation algorithm until it satisfies the predefined error criterion (e), which is set to 0.01 in this experiment. The learning rate (α) used for this experiment was 0.01. The NN model produces the correct output at iteration (p) = 20000 for the AND, NAND and NOR gates. For the OR and XOR gates the correct output is predicted at iterations (p) = 15000 and 80000 respectively.
Keywords: Machine Learning, Artificial Neural Network, Backpropagation, Logic Gates.
1 Introduction
An ANN learns by iteratively adjusting the weights of the connections between its units until the difference between the actual and desired outcome produces the lowest possible error. The structure of a typical Artificial Neural Network is given in Fig. 1.1.
Each input signal is multiplied by the associated weight $(w_1, w_2, \ldots, w_n)$, and the neuron then sums up all the results, i.e. computes a weighted sum of the input signals as:

$$X = \sum_{i=1}^{n} x_i w_i \qquad (1.1)$$
The result is then passed through a non-linear function (f) called the activation function. An activation function describes the output behaviour of a neuron [9] and has a threshold value θ. The weighted sum is compared with this threshold, which gives the output as either '0' or '1': if the weighted sum exceeds θ, the neuron output is '1'; otherwise it is '0'. In general, the neuron uses the step function (1.2) as its activation function.
$$Y = \begin{cases} +1, & \text{if } X > \theta \\ 0, & \text{if } X < \theta \end{cases} \qquad (1.2)$$
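To make equations (1.1) and (1.2) concrete, a minimal Python sketch of such a threshold neuron is given below. This code is illustrative only and not part of the original paper; the function name, the example weights and the threshold value are our own assumptions.

```python
def step_neuron(x, w, theta):
    """Weighted sum of the inputs (1.1) followed by the step activation (1.2)."""
    X = sum(xi * wi for xi, wi in zip(x, w))   # X = sum_i x_i * w_i
    return 1 if X > theta else 0               # fire (+1) only when X exceeds the threshold

# Illustrative usage: with weights (1, 1) and threshold 1.5 the neuron behaves like an AND gate.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", step_neuron([x1, x2], [1.0, 1.0], theta=1.5))
```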
Step 2: Activation
Activate the network by applying all sets of possible inputs $x_1(p), x_2(p), x_3(p), \ldots, x_n(p)$ and desired outputs $y_{d,1}(p), y_{d,2}(p), y_{d,3}(p), \ldots, y_{d,n}(p)$.
i. Calculate the actual outputs in the hidden layer as:

$$y_j(p) = \text{sigmoid}\!\left[\sum_{i=1}^{n} x_i(p)\, w_{ij}(p) - \theta_j\right] \qquad (1.3)$$

where n = number of inputs of neuron j in the hidden layer, and the activation function used here is the sigmoid function, defined as:

$$Y^{\text{sigmoid}} = \frac{1}{1 + e^{-X}} \qquad (1.4)$$
ii. Calculate the actual outputs in the output layer as:

$$y_k(p) = \text{sigmoid}\!\left[\sum_{j=1}^{m} x_j(p)\, w_{jk}(p) - \theta_k\right] \qquad (1.5)$$

where m = number of inputs of neuron k in the output layer.
The error gradient for neuron j in the hidden layer is calculated as:

$$\delta_j(p) = y_j(p)\,[1 - y_j(p)] \sum_{k=1}^{l} \delta_k(p)\, w_{jk}(p) \qquad (1.9)$$

where l = number of neurons in the output layer and $\delta_k(p)$ is the error gradient of neuron k in the output layer.
Step 4:
Increase the iteration counter p by 1, go back to Step 2, and repeat this process until the predefined error criterion is fulfilled.
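As an illustration of Steps 2 to 4 above, the following Python sketch performs one forward and backward pass for a single training pattern of a 2-2-1 network. It is a rough sketch rather than the authors' code: equations (1.3), (1.5) and (1.9) follow the text, while the output-layer gradient and the weight/threshold corrections (the standard delta rule with learning rate α) are assumed, and all names are ours.

```python
import numpy as np

def sigmoid(x):
    """Sigmoid activation, equation (1.4)."""
    return 1.0 / (1.0 + np.exp(-x))

def train_step(x, y_d, W_ih, W_ho, theta_h, theta_o, alpha=0.01):
    """One backpropagation step for a 2-2-1 network on a single pattern.
    x: inputs (2,), y_d: desired output, W_ih: input->hidden weights (2x2),
    W_ho: hidden->output weights (2,), theta_h/theta_o: thresholds."""
    # Step 2: forward pass
    y_h = sigmoid(x @ W_ih - theta_h)              # hidden outputs, equation (1.3)
    y_o = sigmoid(y_h @ W_ho - theta_o)            # output neuron, equation (1.5)

    # Step 3 (assumed form): error gradients
    e = y_d - y_o                                  # output error
    delta_o = y_o * (1.0 - y_o) * e                # output-layer gradient (assumed)
    delta_h = y_h * (1.0 - y_h) * delta_o * W_ho   # hidden-layer gradients, equation (1.9)

    # Weight and threshold corrections (assumed delta rule with learning rate alpha)
    W_ih = W_ih + alpha * np.outer(x, delta_h)
    W_ho = W_ho + alpha * y_h * delta_o
    theta_h = theta_h + alpha * (-1.0) * delta_h
    theta_o = theta_o + alpha * (-1.0) * delta_o
    return W_ih, W_ho, theta_h, theta_o, float(e ** 2)
```

Step 4 then amounts to calling such a routine for every input pattern and repeating the sweep over the training set until the sum of squared errors per epoch falls below the chosen criterion.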
2 Methodology
In this article, we have used a multilayer neural network with two input neurons, two hidden neurons, and one output neuron, as shown in Fig. 2.1. We do so because all the logic gates implemented here (AND, OR, NAND, NOR, and XOR) have two input signals and one output signal, with signal values being either '1' or '0'. Here $W_{13}$, $W_{14}$, $W_{23}$, $W_{24}$ are the weights between the neurons of the input layer and the hidden layer, and $W_{35}$, $W_{45}$ are the weights between the hidden layer and the output layer. The biases $b_3$, $b_4$ and $b_5$ are values associated with each node in the intermediate (hidden) and output layers of the network and are treated in the same manner as the other weights.
Fig. 2.1: Multi-layer back-propagation neural network with two input neurons and a single output neuron
To train the multi-layer neural network designed here, we have used the backpropagation algorithm [5] described in Section 1.3. The training of the NN model takes place in two steps. In the first step of training, input signals are presented to the input layer of the network. The network then propagates the signals from layer to layer until the output is generated by the output layer. If the output is different from the desired output, an error is calculated and propagated backward from the output layer to the input layer, and the weights are modified accordingly. This process is repeated until a predefined criterion (sum of squared errors) is fulfilled.
In this experiment we have set the error criterion to 0.01. The initial weights and bias values of the network are set randomly within a small range, as described in Section 1.3. The cost function, or learning curve, and the final results for all the logic gates are presented in the respective figures and tables. Among other hyper-parameters, the learning rate (α) used to train the NN model is set to 0.01 for all logic gates. Training of the NN model was first run for p = 5000 iterations, and the final output was checked against the error criterion. If the NN model does not satisfy the error criterion (predict the correct output), the value of p is incremented by 5000 on the next run. This process is continued until the network predicts the correct output. From the figures and tables in Section 3 we can conclude that, given random bias and weight values inside a small range, the AND gate satisfies the threshold (error criterion = 0.01) at 20000 epochs, i.e. at 20000 epochs the sum of squared errors (0.008) is less than the threshold (0.01). Similarly, the model satisfies the error threshold (0.01) for the OR gate at 15000 epochs with a sum of squared errors of 0.005, for the NAND gate at 20000 epochs with a sum of squared errors of 0.01, for the NOR gate at 20000 epochs with a sum of squared errors of 0.008, and for the XOR gate at 80000 epochs with a sum of squared errors of 0.003. All the other experimental results are given in more detail in the respective tables.
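The experimental protocol described above can be summarised in the following self-contained Python sketch. It is our own illustration, not the authors' code: the initialisation range of the weights, the random seed, and the 200000-epoch safety cap are assumptions, while the learning rate (0.01), the error criterion (0.01), the 5000-epoch increments, and the gate truth tables follow the text.

```python
import numpy as np

rng = np.random.default_rng(0)                 # assumed seed, for reproducibility only

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_gate(y_desired, alpha=0.01, epochs=5000):
    """Train a fresh 2-2-1 network on one gate's truth table for a fixed number
    of epochs and return the final sum of squared errors over the four patterns."""
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    W_ih = rng.uniform(-0.5, 0.5, (2, 2))      # input->hidden weights (assumed range)
    W_ho = rng.uniform(-0.5, 0.5, 2)           # hidden->output weights
    th_h = rng.uniform(-0.5, 0.5, 2)           # hidden thresholds (b3, b4)
    th_o = rng.uniform(-0.5, 0.5)              # output threshold (b5)
    sse = float("inf")
    for _ in range(epochs):
        sse = 0.0
        for x, y_d in zip(X, y_desired):
            y_h = sigmoid(x @ W_ih - th_h)     # hidden layer, equation (1.3)
            y_o = sigmoid(y_h @ W_ho - th_o)   # output layer, equation (1.5)
            e = y_d - y_o
            d_o = y_o * (1 - y_o) * e          # output gradient
            d_h = y_h * (1 - y_h) * d_o * W_ho # hidden gradients, equation (1.9)
            W_ih += alpha * np.outer(x, d_h)
            W_ho += alpha * y_h * d_o
            th_h -= alpha * d_h
            th_o -= alpha * d_o
            sse += e ** 2
    return sse

gates = {"AND": [0, 0, 0, 1], "OR": [0, 1, 1, 1], "NAND": [1, 1, 1, 0],
         "NOR": [1, 0, 0, 0], "XOR": [0, 1, 1, 0]}
for name, targets in gates.items():
    p = 5000
    while True:
        sse = train_gate(np.array(targets, dtype=float), alpha=0.01, epochs=p)
        if sse < 0.01 or p >= 200000:          # stop when the criterion is met (or give up, for this sketch)
            break
        p += 5000                              # retrain from scratch with a larger epoch budget
    print(f"{name}: sum of squared errors {sse:.4f} after p = {p} epochs")
```

Because the initial weights are random, the exact epoch at which each gate meets the criterion will vary from run to run; the values reported above correspond to the authors' particular runs.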
3 Results
(a) Final learning curve (AND gate) at iteration p = 20000; (b) Final learning curve (OR gate) at iteration p = 15000; (c) Final learning curve (NAND gate) at iteration p = 20000; (d) Final learning curve (NOR gate) at iteration p = 20000.
Fig. 3.1: The final learning curves for the AND, OR, NAND, NOR and Exclusive-OR (XOR) gates, at which the model satisfies the error criterion (0.01), are shown in figures (a), (b), (c), (d), (e) respectively.
4 Conclusions
In this article, a multi-layer artificial neural network for logic gates was implemented successfully using the backpropagation algorithm. With a learning rate of α = 0.01, the NN model satisfies the error criterion (0.01) at p = 20000 iterations for the AND, NAND and NOR gates, and predicts the correct output for the OR and XOR gates at p = 15000 and p = 80000 respectively.
References
[1] Adam Coates et al. "Text detection and character recognition in scene images with unsupervised feature learning". In: Document Analysis and Recognition (ICDAR), 2011 International Conference on. IEEE, 2011, pp. 440–445.
[2] Jake Frankenfield. Artificial Neural Network. 2020. URL: https://www.investopedia.com/terms/a/artificial-neural-networks-ann.asp (visited on 07/07/2020).
[3] Mohamad H. Hassoun et al. Fundamentals of Artificial Neural Networks. MIT Press, 1995.
[4] John H. Holland. "Genetic algorithms". In: Scientific American 267.1 (1992), pp. 66–73.
[5] Yann LeCun et al. "Backpropagation applied to handwritten zip code recognition". In: Neural Computation 1.4 (1989), pp. 541–551.
[6] Christopher Manning and Hinrich Schütze. Foundations of Statistical Natural Language Processing. MIT Press, 1999.
[7] Dabbala Rajagopal Reddy. "Speech recognition by machine: A review". In: Proceedings of the IEEE 64.4 (1976), pp. 501–531.
[8] Tamás Varga, Daniel Kilchhofer, and Horst Bunke. "Template-based synthetic handwriting generation for the training of recognition systems". In: Proceedings of the 12th Conference of the International Graphonomics Society, 2005, pp. 206–211.
[9] Bill Wilson. The Machine Learning Dictionary. 2020. URL: http://www.cse.unsw.edu.au/billw/mldict.html/activnfn (visited on 07/01/2020).