MT 072
ABSTRACT
interpolation of the required data. After training, it is used to predict the required hardness and yield strength. The error percentage is calculated from the graphs of the ANN output and the experimental results. Based on the error percentage, the ANN is retrained by changing the learning rate parameter in order to reach the maximum accuracy.

SOLUTION PROCEDURE

ANN SELECTION

It is not an easy process to select a network from the numerous models of neural networks. Here, the Back Propagation Network (BPN) is chosen, since the architecture of the Back Propagation Network is simple and its response to nonlinear problems is quick. The single mode Back Propagation network is shown in fig 4.1 below.
BPN Architecture
The back propagation learning rule is used to train nonlinear multi-layered networks to perform function approximation. This quality of the network makes it very useful in modelling non-linear problems. A back propagation network is a feed forward network, typically consisting of an input layer, one or more hidden layers, and an output layer, all containing varied numbers of neurons. It has been mathematically proven that this network, with at least three layers (input, output and one hidden layer) and a non-linear transfer function, can implement any function.

ARCHITECTURE OF THE BACKPROPAGATION NETWORK

The back propagation neural network architecture is a hierarchical design consisting of fully interconnected layers or rows of processing units. In general, the architecture consists of rows of processing units, numbered from the bottom up beginning with 1. The first layer consists of m processing elements that simply accept the individual components of the input vector and distribute them, without modification, to all of the units of the second row. Each unit on each row receives the output signal of each of the units of the row below. This continues through all of the rows of the network until the final row. The final row of the network consists of k units and produces the network's estimate of the correct output vector. Each unit of each hidden row receives an "error feedback" connection from each of the units above it.
Working model of BPN
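As an illustration of the architecture and learning rule described above, the following is a minimal sketch of a single-hidden-layer back propagation network, not the implementation used in this work. The sigmoid transfer function, the layer sizes, the number of epochs and the default learning rate are assumed values chosen only for the example.

```python
import numpy as np

# Minimal single-hidden-layer back propagation network (illustrative sketch,
# not the authors' implementation): m input units, one hidden row of units,
# k output units, sigmoid transfer functions throughout.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_bpn(X, Y, n_hidden=8, learning_rate=0.3, epochs=5000):
    m, k = X.shape[1], Y.shape[1]
    W1 = rng.normal(scale=0.5, size=(m, n_hidden))  # input  -> hidden weights
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.5, size=(n_hidden, k))  # hidden -> output weights
    b2 = np.zeros(k)
    for _ in range(epochs):
        # Forward pass: every unit of a row feeds every unit of the row above.
        H = sigmoid(X @ W1 + b1)
        out = sigmoid(H @ W2 + b2)
        # "Error feedback": the output error is propagated back to the hidden row.
        err_out = (out - Y) * out * (1.0 - out)
        err_hid = (err_out @ W2.T) * H * (1.0 - H)
        # Gradient-descent weight updates, scaled by the learning rate.
        W2 -= learning_rate * H.T @ err_out
        b2 -= learning_rate * err_out.sum(axis=0)
        W1 -= learning_rate * X.T @ err_hid
        b1 -= learning_rate * err_hid.sum(axis=0)
    return W1, b1, W2, b2

def predict(X, W1, b1, W2, b2):
    return sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
```

Retraining the network with a different learning rate parameter, as described in the abstract, amounts to calling train_bpn again with a new learning_rate value and comparing the resulting error percentage.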
Temperature (K) | YS (N/sq.mm) | HV (VHN) | YS (N/sq.mm) | HV (VHN) | YS (N/sq.mm) | HV (VHN)
300 | 166.5738 | 70 | 127.0395 | 73 | - | 73
350 | 159.3144 | 72 | 112.6188 | 70 | 134.397 | 71
400 | 135.6723 | 72 | - | 70 | 123.3117 | 70
450 | 170.3016 | 70 | 76.4199 | 68 | 157.6467 | 70
500 | 122.8212 | 63 | 108.4 | 63 | 124.4889 | 64
550 | 85.1729 | 55 | 69.0624 | 50 | 107.91 | 56
600 | 77.1066 | 46 | 73.3788 | 46 | 93.6855 | 47
650 | 73.6731 | 50 | 88.7805 | 44 | 46.8918 | 43
700 | 112.9131 | 53 | 84.366 | 48 | 67.1004 | 59
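To show how such tabulated values could be arranged as training pairs for the network, the sketch below takes the temperature column as the input and one hardness column as the target and scales both into the 0-1 range of the sigmoid units. The choice of column, the strain-rate grouping and the scaling bounds are assumptions made only for this example.

```python
import numpy as np

# Hypothetical arrangement of the tabulated data as (input, target) pairs.
# Inputs: temperature (K); targets: hardness (VHN) from one column pair of
# the table above. Which strain rate that column corresponds to is assumed.
temperature = np.array([300, 350, 400, 450, 500, 550, 600, 650, 700], dtype=float)
hardness    = np.array([ 70,  72,  72,  70,  63,  55,  46,  50,  53], dtype=float)

def scale01(values, lo, hi):
    """Linearly scale values into the 0-1 range used by the sigmoid units."""
    return (values - lo) / (hi - lo)

X = scale01(temperature, 300.0, 700.0).reshape(-1, 1)  # network inputs
Y = scale01(hardness, 0.0, 80.0).reshape(-1, 1)        # network targets
```

These X and Y arrays could then be passed to a training routine such as the train_bpn sketch shown earlier.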
A significant problem with back propagation for feature identification is that once a network is trained, new patterns cannot easily be added to its internal model. It is usually best to build and train a new network to incorporate the new feature(s).
RESULTS

Results of the neural network are compared with the experimental data for validation of the neural network. Two types of graph are drawn. In the first type, the graph is drawn between temperature and hardness for different strain rates and compared with the experimental data. In the second type, the graph is drawn between temperature and yield strength and compared with the experimental data.

Temperature versus Hardness for strain rate 0.02/s (Experimental Values and ANN Output)
Temperature versus Hardness for strain rate 0.2/s (Experimental Values and ANN Output)
Temperature versus Hardness for strain rate 8/s (Experimental Values and ANN Output)
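Since the comparison between the ANN output and the experimental values is reported as an error percentage, a simple way of computing it is sketched below; the function name and the numbers in the example call are illustrative, not measured data.

```python
import numpy as np

def error_percentage(experimental, predicted):
    """Mean absolute error of the ANN output relative to the experimental values, in percent."""
    experimental = np.asarray(experimental, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs(predicted - experimental) / np.abs(experimental)) * 100.0)

# Illustrative call with made-up numbers (not measured data):
# error_percentage([70, 63, 46], [69.4, 63.5, 45.7]) -> roughly 1 percent
```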
Temperature (K) versus Yield Strength (N/sq.mm)
CONCLUSION
The graphs indicate that the error between the outputs of the neural network and the experimental data is very small, about 1 percent. So, the artificial neural network can be relied upon for predicting the hardness and yield strength. Future research can be extended by developing the software for all materials. Since manufacturing design is becoming