Unit 12
NEURAL NETWORKS AND DEEP LEARNING
Structure
12.1 Introduction
12.2 Objectives
12.3 Overview of Neural Network
12.4 Multilayer Feedforward Neural Networks with Sigmoid Activation Functions
12.4.1 Neural Networks with Hidden Layers
12.5 Sigmoid Neurons: An Introduction
12.6 Backpropagation Algorithm
12.7 Feed Forward Networks for Classification and Regression
12.8 Deep Learning
12.9 Summary
12.10 Solutions/Answers
12.11 Further Readings
12.1 INTRODUCTION
Jain et al. (1996) noted in their work that a neuron is a unique biological
cell that has the capability of information processing. Figure 1 describes a
biological neuron's structure, consisting of a cell body and tree-like branches
called axons and dendrites. A neuron works by receiving signals from other
neurons through its dendrites, processing the signals in its cell body, and
finally passing the signals on to other neurons via its axon. A synapse connects
two neurons: it joins the axon of the first neuron to a dendrite of the second.
A synapse can either enhance or reduce the value of the signal passing through
it. If the total signal exceeds a particular value, called a threshold, then the
neuron fires; otherwise it does not.
Table 1: Biological Neuron and Artificial Neuron
12.2 OBJECTIVES
After completing this unit, you will be able to:
• Understand the concept of Neural Networks
• Understand Feedforward Neural Networks
• Understand the Backpropagation Algorithm
• Understand the concept of Deep Learning
12.3 OVERVIEW OF NEURAL NETWORK
An Artificial Neural Network (ANN) is similar to a biological neural network,
as both perform their functions collectively and in parallel. ANN is a general
term used across various applications, such as weather prediction, pattern
recognition, recommendation systems, and regression problems.
Figure 2 describes three neurons that perform the "AND" logical operation. In
this case, the output neuron fires only if both input neurons fire. The output
neuron uses a threshold value T, here T = 3/2. If no input neuron or only one
input neuron fires, the total input to the output neuron is less than 1.5 and the
output neuron cannot fire. If instead both input neurons fire, the total input
becomes 1 + 1 = 2, which is greater than the threshold value of 1.5, and the
output neuron fires. Similarly, we can perform the "OR" logical operation with
the same architecture by setting the threshold to 0.5. In that case, the output
neuron fires if at least one input neuron fires.
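To make this concrete, here is a minimal Python sketch of such a threshold unit (our own illustration; the helper name fire is not from the text):

# A minimal sketch of a threshold unit; `fire` is an illustrative name.
def fire(inputs, weights, threshold):
    """Return 1 if the weighted sum of inputs reaches the threshold, else 0."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# AND: both inputs weighted 1, threshold T = 3/2
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, "AND", x2, "->", fire((x1, x2), (1, 1), 1.5))

# OR: same weights, threshold lowered to 0.5
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, "OR", x2, "->", fire((x1, x2), (1, 1), 0.5))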
Figure A: Single unit with three inputs.
The node has three inputs x = (x1, x2, x3) that receive only binary signals
(either 0 or 1). How many different input patterns can this node receive? What if
the node had four inputs? Or five? Can you give a formula that computes
the number of binary input patterns for a given number of inputs?
Answer - 1: For three inputs the number of combinations of 0 and 1 is 8:
x1 : 0 1 0 1 0 1 0 1
x2 : 0 0 1 1 0 0 1 1
x3 : 0 0 0 0 1 1 1 1
and for four inputs the number of combinations is 16:
x1 : 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1
x2 : 0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 1
x3 : 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1
x4 : 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1
You may check that for five inputs the number of combinations will be 32. Note
that 8 = 2^3, 16 = 2^4 and 32 = 2^5 (for three, four and five inputs).
Thus, the formula for the number of binary input patterns is 2^n, where n is the
number of inputs.
Check Your Progress 1
Question -1: Below is a diagram of a single artificial neuron (unit):
Figure A-1: Single unit with two inputs.
The node has two inputs x = (x1, x2) that receive only binary signals
(either 0 or 1). How many different input patterns can this node receive?
.....................................................................................................................
.....................................................................................................................
.....................................................................................................................
12.4 MULTILAYER FEEDFORWARD NEURAL NETWORKS WITH SIGMOID ACTIVATION FUNCTIONS
12.4.1 Neural Networks with Hidden Layers
Figure 3 shows a neural network with hidden layers, formed by adding more
neurons between the input and output layers. There may be a single hidden
layer or multiple hidden layers.
Figure 3: A neural network with an input layer, a hidden layer and an output layer.
Now let us understand exactly how multiple layers work.
Example – 2 Consider the unit shown below.
Figure B: Single unit with three inputs.
Suppose that the weights corresponding to the three inputs have the following
values:
w1 = 2 ; w2 = -4 ; w3 = 1
and the activation of the unit is given by the step-function:
Φ(V) = 1 for V>=0 and Φ(V) = 0 Otherwise
Calculate what will be the output value y of the unit for each of the following
input patterns:
Pattern P1 P2 P3 P4
X1 1 0 1 1
X2 0 1 0 1
X3 0 1 1 1
Answer:
To find the output value y for each pattern we have to:
a) calculate the weighted sum: v = ∑ wi xi = w1 • x1 + w2 • x2 + w3 • x3, and
b) apply the activation function to v. The calculations for each input pattern
are:
P1 : v = 2 • 1 − 4 • 0 + 1 • 0 = 2 , (2 > 0) , y = ϕ(2) = 1
P2 : v = 2 • 0 − 4 • 1 + 1 • 1 = −3 , (−3 < 0) , y = ϕ(−3) = 0
P3 : v = 2 • 1 − 4 • 0 + 1 • 1 = 3 , (3 > 0) , y = ϕ(3) = 1
P4 : v = 2 • 1 − 4 • 1 + 1 • 1 = −1 , (−1 < 0) , y = ϕ(−1) = 0
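These calculations can be checked with a short Python sketch (our own illustration, using the weights and step threshold of Example 2):

# Step-activation unit from Example 2: phi(v) = 1 if v >= 0 else 0.
weights = (2, -4, 1)
patterns = {"P1": (1, 0, 0), "P2": (0, 1, 1), "P3": (1, 0, 1), "P4": (1, 1, 1)}

for name, x in patterns.items():
    v = sum(w * xi for w, xi in zip(weights, x))
    y = 1 if v >= 0 else 0
    print(name, "v =", v, "y =", y)  # P1: y=1, P2: y=0, P3: y=1, P4: y=0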
Example - 3: Logical operators (i.e. NOT, AND, OR, XOR, etc.) are the building
blocks of any computational device. Logical functions return only two possible
values, true or false, based on the truth values of their arguments. For
example, the operator AND returns true only when all its arguments are true;
otherwise (if any argument is false) it returns false. If we denote true by
1 and false by 0, then the logical function AND can be represented by the
following table:
x1 : 0 0 1 1
x2 : 0 1 0 1
x1 AND x2 : 0 0 0 1
This function can be implemented by a single unit with two inputs,
if the weights are w1 = 1 and w2 = 1 and the activation of the unit is given by
the step-function:
Φ(V) = 1 for V>=2 and Φ(V) = 0 Otherwise
Note that the threshold level is 2 (v ≥ 2).
a) Test how the neural AND function works.
Answer (a):
P1 : v = 1 • 0 + 1 • 0 = 0 , (0 < 2) , y = ϕ(0) = 0
P2 : v = 1 • 1 + 1 • 0 = 1 , (1 < 2) , y = ϕ(1) = 0
P3 : v = 1 • 0 + 1 • 1 = 1 , (1 < 2) , y = ϕ(1) = 0
P4 : v = 1 • 1 + 1 • 1 = 2 , (2 = 2) , y = ϕ(2) = 1
b) Suggest how to change either the weights or the threshold level of this
single unit to implement the logical OR function (true when at least one of
the arguments is true):
x1 : 0 0 1 1
x2 : 0 1 0 1
x1 OR x2 : 0 1 1 1
Answer (b): One solution is to increase the weights of the unit: w1 = 2 and
w2 = 2:
P1 : v = 2 • 0 + 2 • 0 = 0 , (0 < 2) , y = ϕ(0) = 0
P2 : v = 2 • 1 + 2 • 0 = 2 , (2 = 2) , y = ϕ(2) = 1
P3 : v = 2 • 0 + 2 • 1 = 2 , (2 = 2) , y = ϕ(2) = 1
P4 : v = 2 • 1 + 2 • 1 = 4 , (4 > 2) , y = ϕ(4) = 1
Alternatively, we could reduce the threshold to 1:
Φ(V) = 1 for V>=1 and Φ(V) = 0 Otherwise
c) The XOR function (exclusive or) returns true only when exactly one of its
arguments is true; otherwise it returns false. It can be represented by the
following table:
x1 : 0 0 1 1
x2 : 0 1 0 1
x1 XOR x2 : 0 1 1 0
Do you think it is possible to implement this function using a single unit?
Using a network of several units?
Answer (c): This is a difficult question, and it puzzled scientists for some
time, because it is impossible to implement the XOR function with a single
unit or with a single-layer feed-forward network (a single-layer perceptron).
This was known as the XOR problem. The solution was found using a feed-forward
network with a hidden layer. The XOR network uses two hidden nodes and one
output node.
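One such network can be sketched in Python as follows (our own illustrative construction: the two hidden units compute OR and NAND, and the output unit ANDs them together):

def step(v, threshold):
    """Step activation: 1 if v >= threshold, else 0."""
    return 1 if v >= threshold else 0

def xor_net(x1, x2):
    # Hidden node 1 acts as OR (weights 1, 1, threshold 1).
    h1 = step(1 * x1 + 1 * x2, 1)
    # Hidden node 2 acts as NAND (weights -1, -1, threshold -1.5).
    h2 = step(-1 * x1 - 1 * x2, -1.5)
    # Output node ANDs the two hidden outputs (weights 1, 1, threshold 2).
    return step(1 * h1 + 1 * h2, 2)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, "XOR", x2, "->", xor_net(x1, x2))  # 0, 1, 1, 0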
☞ Check Your Progress 2
Question-1 : Consider the unit shown below.
Figure B: Single unit with three inputs.
Suppose that the weights corresponding to the three inputs have the following
values:
w1 = 1 ; w2 = -1 ; w3 = 2
and the activation of the unit is given by the step-function:
Φ(V) = 1 for V>=1 and Φ(V) = 0 Otherwise
Calculate what will be the output value y of the unit for each of the following
input patterns:
Pattern P1 P2 P3 P4
X1 1 0 1 1
X2 0 1 0 1
X3 0 1 1 1
Question-2: The logical NAND function can be represented by the following table:
x1 : 0 0 1 1
x2 : 0 1 0 1
x1 NAND x2 : 1 1 1 0
This function can be implemented by a single unit with two inputs:
if the weights are w1 = 1 and w2 = 1 and the activation of the unit is given by
the step-function:
Φ(V) = 1 for V>=2 and Φ(V) = 0 Otherwise
Note that the threshold level is 2 (v ≥ 2).
a) Test how the neural NAND function works.
b) Suggest how to change either the weights or the threshold level of this
single unit in order to implement the logical NOR function (true only when
both arguments are false):
x1 : 0 0 1 1
x2 : 0 1 0 1
x1 NOR x2 : 1 0 0 0
.....................................................................................................................
.....................................................................................................................
.....................................................................................................................
12.5 SIGMOID NEURONS: AN INTRODUCTION
So far, we have paid attention to the neural network model, how it works, and
the role of hidden layers. Now we must focus on activation functions and their
role in neural networks. The activation function is a mathematical function
that decides whether a neuron should be activated; it may be linear or
nonlinear. The purpose of an activation function is to add non-linearity to
the neural network. If you use a linear activation function, then the number
of hidden layers does not matter: the final output remains a linear combination
of the input data. Such linearity cannot help in solving complex problems, for
example patterns separated by curves, where nonlinear activation is required.
Moreover, the binary step function does not have a helpful derivative, as its
derivative is 0 everywhere (except at the threshold, where it is undefined).
Therefore, it does not work for backpropagation, a fundamental and valuable
concept in the multilayer perceptron.
Now, as we have covered the essential concepts, let us go over the most popular
neural network activation functions.
Binary Step Function: The binary step function depends on a threshold value
that decides whether a neuron should be activated or not. The input fed to the
activation function is compared to the threshold; if the input is greater than
the threshold, the neuron is activated, else it is deactivated, meaning that
its output is not passed on to the next hidden layer.
Figure: The binary step function t(z).
The idea of the step activation function will become clear from an analogy: a
perceptron with a step activation function is not a very "stable" relationship
candidate. For example, say a person has bipolar mood swings. One day (z < 0)
s/he is quiet and gives no response, and the next day (z ≥ 0) the mood flips
and s/he becomes very talkative, speaking non-stop in front of you. There is
no smooth transition between the moods, and you cannot predict whether s/he
will be quiet or talkative. This abrupt, all-or-nothing behaviour is exactly
how the nonlinear step function works.
So, a minor change in a weight of the input layer of our model may activate
the neuron by flipping its output from 0 to 1, which disturbs the working of
the hidden layer, and the final outcome may then be affected. Therefore, we
want a model that improves our existing neural network gradually as we adjust
the weights. This is not possible with the step activation function: small
changes in the weights can cause large, sudden changes in the output.
(Ideally, a small change w + Δw in a weight should produce only a small change
y + Δy in the output.)
So we need to say goodbye to the perceptron model with its step activation
function. We find a new activation function that accomplishes this task for
our neural network: the sigmoid function. We change only one thing, the
activation function, and it meets our requirement of avoiding sudden "mood"
changes. Now we define the weighted input by
Z = ∑ wi xi + bias (summing over i = 1, …, m)
The sigmoid function is: σ(z) = 1 / (1 + e^(−z))
This function σ(z) is called the sigmoid function. First the value Z is
computed, and then the sigmoid function is applied to Z. It may look abstract
or strange at first; those who do not have a strong background in mathematics
need not worry. Figure 7 shows its curve and its derivative.
Here are some observations:
1. The output of the sigmoid function, like that of the step function, remains
between 0 and 1. The curve passes through 0.5 at z = 0, so we can adopt a
straightforward rule: if the sigmoid neuron's output is greater than or equal
to 0.5, treat the output as 1; otherwise treat it as 0.
2. The sigmoid function is continuous and differentiable everywhere on the
curve; its derivative is σ′(z) = σ(z)(1 − σ(z)).
3. If z is a large negative value, the output is approximately 0; if z is a
large positive value, the output is approximately 1.
The sigmoid activation function introduces non-linearity, the essential
ingredient, into our model. This means that the output, obtained by taking the
dot product of the inputs x (x1, x2, …, xm) with the weights w (w1, w2, …, wm),
adding the bias, and then applying the sigmoid function, cannot be represented
as a linear function of the inputs. The idea is that a nonlinear activation
function allows us to learn nonlinear decision boundaries in our data.
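As a small illustration (our own sketch, not part of the original text), a sigmoid neuron can be written in a few lines of Python:

import math

def sigmoid(z):
    """sigma(z) = 1 / (1 + e^(-z)); the output always lies between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_neuron(x, w, bias):
    """Compute sigma(w . x + bias) for a single neuron."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + bias
    return sigmoid(z)

# The derivative sigma'(z) = sigma(z) * (1 - sigma(z)) noted above:
z = 0.0
print(sigmoid(z))                     # 0.5 at z = 0
print(sigmoid(z) * (1 - sigmoid(z)))  # 0.25, the derivative's maximum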
We introduce hidden layers into our model by replacing perceptrons with
sigmoid-activation neurons. Now the question arises: what is the requirement
for hidden layers? Are they useful? The answer is yes. Hidden layers help us
handle complex problems that single-layer neurons cannot solve. Hidden layers
transform the problem, in effect re-describing it so that complex problems,
such as pattern recognition problems, admit simpler solutions. For example,
figure 8 shows a classic textbook problem, recognition of handwritten digits,
that can help you understand how hidden layers work.
Figure 8: Digits from the MNIST dataset
The digits in figure 8 are taken from a well-known dataset called MNIST. It
has 70,000 examples of handwritten digits. Every digit is represented by a
picture of 28×28 pixels, that is, 28 × 28 = 784 pixels in total. Every pixel
takes a value between 0 and 255 (a grayscale intensity): 0 means the pixel is
white and 255 means it is black.
Now, can a computer really "see" a digit the way a human does? The answer is
no. The computer cannot understand an image as a human can, so it needs proper
training to recognize these digits. For this purpose, the image is interpreted
through its pixel values. We dissect the image into an array of 784 numbers,
such as [0, 0, 180, …, 77, 0, 0, 0], and then feed this array into our model.
Figure 9: Network with an input layer of 784 neurons (one per pixel, each
holding a value between 0 and 255) and an output layer of 10 neurons (for the
digits 0 to 9).
We set up a neural network, figure 9, for this problem. Its input layer
consists of 784 neurons, one for each of the 28×28 pixel values. You may take,
say, a total of 16 hidden neurons and ten output neurons. The ten output
neurons return an array whose values classify the digit as one of 0 to 9. For
example, if the neural network decides that the handwritten digit is a zero,
the output array [1, 0, 0, 0, 0, 0, 0, 0, 0, 0] is returned: the first output
neuron fires for a zero, while the rest of the neurons at the output layer are
set to 0. Similarly, if the neural network decides that the handwritten digit
is a 5, the array is [0, 0, 0, 0, 0, 1, 0, 0, 0, 0], with the sixth value 1
while the rest of the values are 0. You can easily work out the sequence for
any other digit.
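The flattening of a 28×28 image into 784 inputs and the one-hot output encoding described above can be sketched in Python as follows (our own illustration, with a made-up pixel array standing in for a real MNIST digit):

import numpy as np

# A made-up 28x28 grayscale image (values 0-255), standing in for one digit.
image = np.random.randint(0, 256, size=(28, 28))

# Flatten it into the 784-element input vector fed to the network.
x = image.reshape(784)
print(x.shape)  # (784,)

# One-hot encoding of a digit label, e.g. 5 -> [0,0,0,0,0,1,0,0,0,0].
def one_hot(digit, num_classes=10):
    y = np.zeros(num_classes)
    y[digit] = 1
    return y

print(one_hot(0))  # [1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
print(one_hot(5))  # [0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]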
☞ Check Your Progress 3
1. Discuss the utility of Sigmoid function in neural networks. Compare
Sigmoid function with the Binary Step function.
.....................................................................................................................
.....................................................................................................................
.....................................................................................................................
12.6 BACKPROPAGATION ALGORITHM
Figure 11: Error calculation: the squared error plotted against a weight.
Training increases or decreases the weight so as to reduce the error.
Backpropagation Algorithm:
initialize the network weights to small random values
do
    for every training example ex
        prediction_output = neural_net_output(network, ex)   // forward pass
        actual_output = teacher_output(ex)
        compute the error (prediction_output − actual_output)
        compute Δw_h for all weights from the hidden layer to the output layer   // backward pass
        compute Δw_i for all weights from the input layer to the hidden layer    // backward pass, continued
        update the network weights accordingly   // the input layer is not modified
until all examples are classified correctly or another stopping criterion is met
Figure 12: Neural network example. Inputs i1 = 0.05 and i2 = 0.10; hidden-layer
weights w1 = 0.15, w2 = 0.20, w3 = 0.25, w4 = 0.30; output-layer weights
w5 = 0.40, w6 = 0.45, w7 = 0.50, w8 = 0.55; biases b1 = 0.35 and b2 = 0.60;
target outputs 0.01 for o1 and 0.99 for o2.
We repeat the forward-pass process for the output-layer neurons, using the
hidden-layer outputs as their inputs. For output o1:
net_o1 = w5 × out_h1 + w6 × out_h2 + b2 × 1 = 0.4 × 0.5932 + 0.45 × 0.5968 + 0.6 × 1 = 1.1059
out_o1 = 1 / (1 + e^(−net_o1)) = 1 / (1 + e^(−1.1059)) = 0.7514
and, in the same way, out_o2 = 0.7729.
Error for o1: E_o1 = ½ (target − output)² = ½ (0.01 − 0.7513)² = 0.2748
E_total = ½ (target_o1 − out_o1)² + ½ (target_o2 − out_o2)²
∂E_total / ∂out_o1 = −(target_o1 − out_o1) = −(0.01 − 0.7513) = 0.74136
∂out_o1 / ∂net_o1 = out_o1 × (1 − out_o1) = 0.75136507 × (1 − 0.75136507) = 0.186815602
Now we find how the total error changes with respect to weight w5, using the
chain rule (note that ∂net_o1/∂w5 = out_h1):
∂E_total / ∂w5 = (∂E_total/∂out_o1) × (∂out_o1/∂net_o1) × (∂net_o1/∂w5) = 0.74136 × 0.186816 × 0.593270 = 0.082167041
Applying the update rule with learning rate η = 0.5:
w5⁺ = w5 − η × (∂E_total/∂w5) = 0.4 − 0.5 × 0.082167041 = 0.35892
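The forward pass and the w5 update above can be reproduced with a short Python sketch (our own check of the worked numbers, using the values from Figure 12):

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Values from Figure 12.
i1, i2 = 0.05, 0.10
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55
b1, b2 = 0.35, 0.60
target_o1 = 0.01

# Forward pass: hidden layer, then output layer.
out_h1 = sigmoid(w1 * i1 + w2 * i2 + b1)          # ~0.5933
out_h2 = sigmoid(w3 * i1 + w4 * i2 + b1)          # ~0.5969
out_o1 = sigmoid(w5 * out_h1 + w6 * out_h2 + b2)  # ~0.7514

# Backward pass for w5 via the chain rule.
dE_dout = -(target_o1 - out_o1)          # ~0.74136
dout_dnet = out_o1 * (1 - out_o1)        # ~0.18682
dnet_dw5 = out_h1                        # ~0.59327
dE_dw5 = dE_dout * dout_dnet * dnet_dw5  # ~0.082167

eta = 0.5                    # learning rate
w5_new = w5 - eta * dE_dw5   # ~0.35892
print(out_o1, dE_dw5, w5_new)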
12.7 FEED FORWARD NETWORKS FOR CLASSIFICATION AND REGRESSION
A feed-forward neural network can be used for various problems, including
classification, regression, and pattern encoding. In the first two cases, the
network returns a value z = f(w, x) that should be very close to the target
value y. In the case of pattern encoding, the target becomes the input itself.
To deal with multi-class classification, we can use either technique.
Example: Consider a feed-forward network with two input nodes (1 and 2), two
hidden nodes (3 and 4) and two output nodes (5 and 6), with weights w13 = −2,
w23 = 3, w14 = 4, w24 = −1, w35 = 1, w45 = −1, w36 = −1, w46 = 1, in which
every node uses the step activation function ϕ(v) = 1 for v ≥ 0 and ϕ(v) = 0
otherwise. Find the output of the network for each of the following input
patterns:
Pattern : P1 P2 P3 P4
Node 1 : 0 1 0 1
Node 2 : 0 0 1 1
Answer: In order to find the output of the network it is necessary to calculate
weighted sums of hidden nodes 3 and 4:
v3 = w13x1 + w23x2 , v4 = w14x1 + w24x2
Then find the outputs from hidden nodes using activation function ϕ:
y3 = ϕ(v3) , y4 = ϕ(v4) .
Use the outputs of the hidden nodes y3 and y4 as the input values to the output
layer (nodes 5 and 6), and find weighted sums of output nodes 5 and 6:
v5 = w35y3 + w45y4 , v6 = w36y3 + w46y4 .
Finally, find the outputs from nodes 5 and 6 (also using ϕ):
y5 = ϕ(v5) , y6 = ϕ(v6) .
The output pattern will be (y5, y6). Perform this calculation for each input
pattern:
P1: Input pattern (0, 0)
v3 = −2 • 0 + 3 • 0 = 0, y3 = ϕ(0) = 1
v4 = 4 • 0 − 1 • 0 = 0, y4 = ϕ(0) = 1
v5 = 1 • 1 − 1 • 1 = 0, y5 = ϕ(0) = 1
v6 = −1 • 1 + 1 • 1 = 0, y6 = ϕ(0) = 1
The output of the network is (1, 1)
P2: Input pattern (1, 0)
v3 = −2 • 1 + 3 • 0 = −2, y3 = ϕ(−2) = 0
v4 = 4 • 1 − 1 • 0 = 4, y4 = ϕ(4) = 1
v5 = 1 • 0 − 1 • 1 = −1, y5 = ϕ(−1) = 0
v6 = −1 • 0 + 1 • 1 = 1, y6 = ϕ(1) = 1
The output of the network is (0, 1).
P3: Input pattern (0, 1)
v3 = −2 • 0 + 3 • 1 = 3, y3 = ϕ(3) = 1
v4 = 4 • 0 − 1 • 1 = −1, y4 = ϕ(−1) = 0
v5 = 1 • 1 − 1 • 0 = 1, y5 = ϕ(1) = 1
v6 = −1 • 1 + 1 • 0 = −1, y6 = ϕ(−1) = 0
The output of the network is (1, 0).
P4: Input pattern (1, 1)
v3 = −2 • 1 + 3 • 1 = 1, y3 = ϕ(1) = 1
v4 = 4 • 1 − 1 • 1 = 3, y4 = ϕ(3) = 1
v5 = 1 • 1 − 1 • 1 = 0, y5 = ϕ(0) = 1
v6 = −1 • 1 + 1 • 1 = 0, y6 = ϕ(0) = 1
The output of the network is (1, 1).
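The whole calculation can be verified with a short Python sketch (our own illustration; the weights are those used in the answer above, and the step function fires at v ≥ 0):

def phi(v):
    """Step activation used in this example: 1 if v >= 0, else 0."""
    return 1 if v >= 0 else 0

# Weights taken from the answer above.
w13, w23 = -2, 3   # input nodes 1, 2 -> hidden node 3
w14, w24 = 4, -1   # input nodes 1, 2 -> hidden node 4
w35, w45 = 1, -1   # hidden nodes 3, 4 -> output node 5
w36, w46 = -1, 1   # hidden nodes 3, 4 -> output node 6

for x1, x2 in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    y3 = phi(w13 * x1 + w23 * x2)
    y4 = phi(w14 * x1 + w24 * x2)
    y5 = phi(w35 * y3 + w45 * y4)
    y6 = phi(w36 * y3 + w46 * y4)
    print((x1, x2), "->", (y5, y6))  # (1,1), (0,1), (1,0), (1,1)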
☞ Check Your Progress 5
Question-1 The following diagram represents a feed-forward neural network
with one hidden layer:
Pattern : P1 P2 P3 P4
Node 1 : 0 1 0 1
Node 2 : 0 0 1 1
.....................................................................................................................
.....................................................................................................................
.....................................................................................................................
12.8 DEEP LEARNING
Deep learning is a subset of artificial intelligence (AI) that mimics the
workings of the human brain in processing data and recognizing patterns for
decision making. Deep learning is capable of learning unsupervised from
unstructured or unlabeled data. It simulates the way the human brain processes
data in order to detect objects, recognize speech, translate languages, and
make decisions.
12.9 SUMMARY
In this unit we learned the fundamental concepts of neural networks and
various related topics in the area of neural networks and deep learning,
including activation functions, the backpropagation algorithm, feed-forward
networks, and more. The concepts were simplified with the help of numerical
examples, which will help you map the theoretical concepts of neural networks
to their implementation.
12.10 SOLUTIONS/ANSWERS
☞ Check Your Progress 1
Question -1: Below is a diagram of a single artificial neuron (unit) (see
Figure A-1). The node has two inputs x = (x1, x2) that receive only binary
signals (either 0 or 1). How many different input patterns can this node
receive?
Solution: Refer to Section 12.3
☞ Check Your Progress 2
Question-1 : Consider the unit shown below.
Figure B: Single unit with three inputs.
Suppose that the weights corresponding to the three inputs have the following
values:
w1 = 1 ; w2 = -1 ; w3 = 2
and the activation of the unit is given by the step-function:
Φ(V) = 1 for V>=1 and Φ(V) = 0 Otherwise
Calculate what will be the output value y of the unit for each of the following
input patterns:
Pattern P1 P2 P3 P4
X1 1 0 1 1
X2 0 1 0 1
X3 0 1 1 1
Question-2: The logical NAND function can be represented by the following table:
x1 : 0 0 1 1
x2 : 0 1 0 1
x1 NAND x2 : 1 1 1 0
This function can be implemented by a single unit with two inputs:
if the weights are w1 = 1 and w2 = 1 and the activation of the unit is given by
the step-function:
Φ(V) = 1 for V>=2 and Φ(V) = 0 Otherwise
Note that the threshold level is 2 (v ≥ 2).
a) Test how the neural NAND function works.
b) Suggest how to change either the weights or the threshold level of this
single unit in order to implement the logical NOR function (true only when
both arguments are false):
x1 : 0 0 1 1
x2 : 0 1 0 1
x1 NOR x2 : 1 0 0 0
Solution: Refer to section 12.4
☞ Check Your Progress 3
Question-1 Discuss the utility of Sigmoid function in neural networks. Compare
Sigmoid function with the Binary Step function.
Solution: Refer to Section 12.5
☞ Check Your Progress 4
Question 1: Write Back Propagation algorithm, and showcase its execution on
a neural network of your choice (make suitable assumptions if any)
Solution: Refer to Section 12.6
☞ Check Your Progress 5
Question-1 The following diagram represents a feed-forward neural network
with one hidden layer:
Pattern : P1 P2 P3 P4
Node 1 : 0 1 0 1
Node 2 : 0 0 1 1
Solution: Refer to Section 12.7
☞ Check Your Progress 6
Question-1 Compare Deep Learning and Machine Learning
Solution: Refer to Section 12.8
12.11 FURTHER READINGS
1) Dr. K. Uma Rao, "Artificial Intelligence and Neural Networks", Pearson
Education, January 2011.
2) Tariq Rashid, "Make Your Own Neural Network: A Gentle Journey Through the
Mathematics of Neural Networks, and Making Your Own Using the Python Computer
Language".
3) Stuart J. Russell, Peter Norvig, "Artificial Intelligence: A Modern
Approach", Pearson, January 2015.
4) F. Acar Savaci (Ed.), "Artificial Intelligence and Neural Networks",
Springer, 2006.
5) Vladimir Golovko, Akira Imada (Eds.), "Neural Networks and Artificial
Intelligence: 8th International Conference, ICNNAI 2014".
6) Toshinori Munakata, "Fundamentals of the New Artificial Intelligence",
Springer, 2nd ed., February 2008.