
CSIR sponsored National Conference on

“RECENT ADVANCES IN MECHANICAL AND MATERIALS PROCESSING” - (RAMMP - 2010)


Organized by - Dept. of Mechanical Engineering, Mookambigai College of Engineering

MCE Council of Scientific and Industrial Research


Paper 072
13th March 2010

ANALYSIS OF COLD FORGING OF ALUMINIUM ALLOY USING ARTIFICIAL NEURAL NETWORK

D. Raj Kumar, II M.E. Manufacturing Engineering, MIET Engineering College, Trichy
Email: [email protected]

ABSTRACT

In many products the mechanical properties depend on precise control of deformation, temperature, and strain rate during processing, which is required to develop the optimum structure and mechanical properties, particularly in the aerospace industry, where high material and process costs are involved. In cold forging, the hardness, tensile strength and microstructure develop statically upon annealing. In this project an artificial neural network has been modelled to predict the resultant mechanical properties of an aluminium alloy from the working process variables. The ultimate goal is to predict the hardness and yield strength when the process control variables are given as inputs; these variables are stress, strain rate, strain and temperature. The methodology is to collect experimental data for aluminium 4043 alloy and to train the neural network on that data using the back propagation algorithm.

PROBLEM FORMULATION AND METHODOLOGY


PROBLEM FORMULATION

If a material undergoes a certain process, at the final stage of the process we must obtain good hardness and the required yield strength. To obtain that hardness and yield strength, the process control variables should be precisely controlled and monitored. The process control variables are stress, strain, strain rate and temperature. Initially we may not know how the process control variables must be controlled to obtain a particular hardness and yield strength. The ultimate goal is to develop a neural network that predicts the hardness and yield strength using the experimental data. The inputs given to the neural network are stress, strain, strain rate and temperature, and the outputs from the neural network are hardness and yield strength.

PROBLEM SOLVING METHODOLOGY

Experimental values of the aluminium 4043 alloy are used to train the neural network. Training the neural network is one of the important parts of this project work. During training, the neural network works as a back propagation network and is then used for

interpolation of the required data. After training it is used to predict the required hardness and yield strength. The error percentage is calculated from the graphs of the ANN output and the experimental results, and based on this error percentage the ANN is trained again with a changed learning-rate parameter in order to reach the maximum accuracy.

SOLUTION PROCEDURE

ANN SELECTION

It is not easy to select a network from the numerous models of neural networks. Here the Back Propagation Network (BPN) is chosen, since its architecture is simple and its response to nonlinear problems is quick. The single-mode back propagation network is shown in fig 4.1 below.

[Fig. 4.1: BPN Architecture]

The back propagation learning rule is used to train nonlinear multi-layered networks to perform function approximation; this quality makes the network useful for modelling non-linear problems. A back propagation network is a feed-forward network, typically consisting of an input layer, one or more hidden layers, and an output layer, each containing a varying number of neurons. It has been mathematically proven that such a network with at least three layers (input, output and one hidden layer) and a non-linear transfer function can implement any function.

ARCHITECTURE OF THE BACK PROPAGATION NETWORK

The back propagation neural network architecture is a hierarchical design consisting of fully interconnected layers, or rows, of processing units. In general, the architecture consists of rows of processing units, numbered from the bottom up beginning with 1. The first layer consists of m processing elements that simply accept the individual components of the input vector and distribute them, without modification, to all of the units of the second row. Each unit on each row receives the output signal of each of the units of the row below; this continues through all of the rows of the network until the final row. The final row consists of k units and produces the network's estimate of the correct output vector. Each unit of each hidden row receives an "error feedback" connection from each of the units above it.

[Figure: Working model of BPN]
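To make the layered structure described above concrete, a single forward pass through such a network can be sketched in NumPy. The hidden-layer size, weight initialisation and sigmoid transfer function below are illustrative assumptions, not values taken from this paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bpn_forward(x, w_hidden, b_hidden, w_out, b_out):
    """One forward pass through a single-hidden-layer BPN.

    x        : input vector (stress, strain, strain rate, temperature)
    w_hidden : weights from the input row to the hidden row
    w_out    : weights from the hidden row to the output row
    Returns the network's estimate (hardness, yield strength).
    """
    hidden = sigmoid(w_hidden @ x + b_hidden)   # non-linear transfer function
    return w_out @ hidden + b_out               # linear output row

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 6, 2                 # 6 hidden neurons is an assumption
w_h = rng.normal(scale=0.5, size=(n_hidden, n_in))
b_h = np.zeros(n_hidden)
w_o = rng.normal(scale=0.5, size=(n_out, n_hidden))
b_o = np.zeros(n_out)

x = np.array([0.5, 0.4, 0.02, 0.3])             # normalized process variables
y = bpn_forward(x, w_h, b_h, w_o, b_o)          # two outputs: hardness, YS
```

The m input units pass the vector through unchanged, so only the hidden and output rows carry weights here.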


4.2 Training of the Neural Network

There are several critical factors when formulating and training a back propagation network:

1. Number of input neurons.
2. Number of output neurons.
3. Number of hidden layers.
4. Number of neurons in each hidden layer.
5. Size and composition of the training set.
6. Size and composition of the test set for trained-network validation.
7. Training rate and modifications.

The numbers of input and output neurons are usually dictated by the problem (e.g. a network categorizing features, each described by an input vector of length 40, into 4 categories might have 40 input neurons and 4 output neurons). Hidden layers can provide translation and scaling invariance, i.e. the position and size of a feature need not be critical to its identification. In applications, the number of hidden layers is generally confined to one or two. Once the number of hidden layers is selected, the size of each layer can be estimated. The objective is to find the least number of hidden neurons that achieves adequate performance. Decreasing the number of hidden neurons increases generalization and is more computationally efficient, but impairs the network's ability to learn; overfitting with too many hidden neurons tends to decrease generalization, because the training set is "memorized", and is computationally inefficient. The optimum number of hidden neurons is usually estimated by trial and error.

Back propagation works by using the theory of least squares: the derivatives of the error with respect to the connection weights are calculated, and the weights are adjusted by steepest descent on the error surface. Weights are modified during training until they reach a stable state (convergence); this corresponds to the network achieving a state of minimum error. Since the weights are usually adjusted after each input vector rather than after the whole training set, the resultant descent is not the steepest, but closely approximates it.

The neural network was trained with the experimental values of aluminium 4043 alloy; a cylindrical specimen of height 37.5 mm and diameter 25 mm was taken for the study. The experimental values are shown in tables 4.2.1 and 4.2.2: table 4.2.1 shows yield strength and hardness for different temperatures and strain rates, and table 4.2.2 shows flow stress for different temperatures and strain rates. Here the strain is kept constant at 0.4. The chemical composition of aluminium 4043 alloy is shown in table 4.2.3.

The size and composition of the training set are critical. The training set should be a sample with a fixed probability distribution, chosen randomly from the expected values, which also have a fixed probability distribution. It is presented to the network many times during training; it should be in random order and contain all the training features. Upper and lower bounds for the number of training pairs are directly related to the numbers of neurons and weights and to the rate of classification error. It has also been shown that as training set size increases, so does the probability of correct pattern classification by the network.
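The per-vector weight adjustment described above (steepest descent on the squared error, updating after each input vector presented in random order) can be sketched as follows. The training data here are synthetic, and the hidden-layer size and learning rate are illustrative assumptions; the paper's actual aluminium 4043 values are those of tables 4.2.1-4.2.3.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)

# Synthetic training pairs: 4 normalized process variables mapped to
# 2 normalized targets standing in for hardness and yield strength.
X = rng.uniform(size=(20, 4))
T = np.stack([X.sum(axis=1) / 4.0, X[:, 0] * X[:, 3]], axis=1)

n_hidden, lr = 6, 0.2            # hidden size and learning rate are assumptions
w_h = rng.normal(scale=0.5, size=(n_hidden, 4)); b_h = np.zeros(n_hidden)
w_o = rng.normal(scale=0.5, size=(2, n_hidden)); b_o = np.zeros(2)

sse_history = []
for epoch in range(300):
    order = rng.permutation(len(X))      # present the set in random order
    sse = 0.0
    for i in order:                      # weights adjusted after EACH input
        x, t = X[i], T[i]                # vector, not after the whole set
        h = sigmoid(w_h @ x + b_h)       # forward pass: hidden row
        y = w_o @ h + b_o                # forward pass: linear output row
        e = y - t                        # output error
        sse += float(e @ e)
        # backward pass: derivatives of squared error w.r.t. the weights
        delta_o = e                                  # linear output units
        delta_h = (w_o.T @ delta_o) * h * (1.0 - h)  # sigmoid derivative
        w_o -= lr * np.outer(delta_o, h); b_o -= lr * delta_o
        w_h -= lr * np.outer(delta_h, x); b_h -= lr * delta_h
    sse_history.append(sse)              # error should fall toward convergence
```

Because each update uses a single training pair, the descent direction is only an approximation of the steepest one, as the text notes, but the summed squared error still decreases over the epochs.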


Table 4.2.1 Experimental values of yield strength and hardness
(YS = yield strength in N/sq.mm, HV = hardness in VHN)

Temp (K)   ε = 0.02 s-1       ε = 0.2 s-1        ε = 8 s-1
           YS        HV       YS        HV       YS        HV
300        166.5738  70       127.0395  73       -         73
350        159.3144  72       112.6188  70       134.397   71
400        135.6723  72       -         70       123.3117  70
450        170.3016  70       76.4199   68       157.6467  70
500        122.8212  63       108.4     63       124.4889  64
550        85.1729   55       69.0624   50       107.91    56
600        77.1066   46       73.3788   46       93.6855   47
650        73.6731   50       88.7805   44       46.8918   43
700        112.9131  53       84.366    48       67.1004   59
750        109.872   71       91.7235   63       118.8972  65
800        134.9856  72       102.8088  70       -         70

Table 4.2.3 Chemical composition of aluminium 4043 alloy

Si %     Fe %   Cu %   Mn %   Mg %   Zn %   Ti %   Others %   Al %
4.5-6    0.8    0.3    0.05   0.05   0.1    0.2    0.05       Remainder

A significant problem with back propagation for feature identification is that once a network is trained, new patterns cannot easily be added to its internal model; it is usually best to build and train a new network to incorporate the new feature(s).

RESULTS

The results of the neural network are compared with the experimental data to validate the network. Two types of graph are drawn: in the first type, temperature is plotted against hardness for different strain rates and compared with the experimental data; in the second type, temperature is plotted against yield strength for different strain rates.

[Figure: Temperature versus Hardness for strain rate 0.02/s - experimental values and ANN output]
[Figure: Temperature versus Hardness for strain rate 0.2/s - experimental values and ANN output]
[Figure: Temperature versus Hardness for strain rate 8/s - experimental values and ANN output]
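The error-percentage comparison used for validation can be sketched as a mean absolute percentage error between the experimental values and the ANN output. The hardness numbers below are hypothetical placeholders for illustration, not values taken from table 4.2.1.

```python
def percentage_error(experimental, predicted):
    """Mean absolute error of the ANN output relative to experiment, in %."""
    errs = [abs(p - e) / e * 100.0 for e, p in zip(experimental, predicted)]
    return sum(errs) / len(errs)

# Hypothetical hardness values (VHN) and matching ANN predictions.
experimental = [70.0, 72.0, 63.0, 55.0]
ann_output = [70.6, 71.4, 62.5, 55.4]
error_pct = percentage_error(experimental, ann_output)   # below 1 percent here
```

An error of this kind, computed point by point from the graphs, is what the paper reports as being about 1 percent.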


[Figure: Temperature versus Yield strength for strain rate 0.02/s - experimental values and ANN output]
[Figure: Temperature versus Yield strength for strain rate 0.2/s - experimental values and ANN output]
[Figure: Temperature versus Yield strength for strain rate 8/s - experimental values and ANN output]

CONCLUSION

The graphs indicate that the error between the outputs of the neural network and the experimental data is very small, about 1 percent, so the artificial neural network can be relied upon to predict the hardness and yield strength. Future research can extend this work by developing the software for all materials. Since manufacturing design is becoming largely automated through the use of CAD modelers, adding intelligence in the form of knowledge-based systems and neural networks is synergistic. The designer need not search a catalog of applicable features and attributes and apply them consistently; the system can do those tasks. Designers can also be more productive, in both quantity and quality, by being assisted in applying the proper design rules to ensure functionality and manufacturability.