
electronics

Article
Applying a Neural Network to Predict Surface Roughness and
Machining Accuracy in the Milling of SUS304
Ming-Hsu Tsai 1,2, * , Jeng-Nan Lee 1,2 , Hung-Da Tsai 2 , Ming-Jhang Shie 2 , Tai-Lin Hsu 2
and Hung-Shyong Chen 1,2

1 Department of Mechanical Engineering, Cheng Shiu University, Kaohsiung 83347, Taiwan


2 Institute of Mechatronic Engineering, Cheng Shiu University, Kaohsiung 83347, Taiwan
* Correspondence: [email protected]; Tel.: +886-953-161028

Abstract: Surface roughness and machining accuracy are essential indicators of the quality of parts in
milling. With recent advancements in sensor technology and data processing, the cutting force signals
collected during the machining process can be used for the prediction and determination of the
machining quality. Deep-learning-based artificial neural networks (ANNs) can process large sets of
signal data and can make predictions according to the extracted data features. During the final stage
of the milling process of SUS304 stainless steel, we selected the cutting speed, feed per tooth, axial
depth of cut, and radial depth of cut as the experimental parameters to synchronously measure the
cutting force signals with a sensory tool holder. The signals were preprocessed for feature extraction
using a Fourier transform technique. Subsequently, three different ANNs (a deep neural network, a
convolutional neural network, and a long short-term memory network) were applied for training
in order to predict the machining quality under different cutting conditions. Two training methods,
namely whole-data training and training by data classification, were adopted. We compared the
predictive accuracy and efficiency of the training process of these three models based on the same
training data. The training results and the measurements after machining indicated that in predicting the surface roughness based on the feed per tooth classification, all the models had a percentage error within 10%. However, the convolutional neural network (CNN) and long short-term memory (LSTM) models had a percentage error of 20% based on the whole-data training, while that of the deep neural network (DNN) model was over 50%. The percentage error for the machining accuracy prediction based on the whole-data training of the DNN and CNN models was below 10%, while that of the LSTM model was as large as 20%. However, there was no significant improvement in the results of the classification training. In all the training processes, the CNN model had the best analytical efficiency, followed by the LSTM model. The DNN model performed the worst.

Keywords: neural network; sensory tool holder; surface roughness; machining accuracy

Citation: Tsai, M.-H.; Lee, J.-N.; Tsai, H.-D.; Shie, M.-J.; Hsu, T.-L.; Chen, H.-S. Applying a Neural Network to Predict Surface Roughness and Machining Accuracy in the Milling of SUS304. Electronics 2023, 12, 981. https://doi.org/10.3390/electronics12040981

Academic Editor: Yue Wu

Received: 19 January 2023
Revised: 10 February 2023
Accepted: 13 February 2023
Published: 16 February 2023

Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction
Many studies have investigated the machining results of end milling. The feed rate, attributes of the workpiece materials, cutting speed, depth of cut, cutting tools, and machine rigidity all affect the surface and dimensional accuracy of the parts. However, due to the complexity of the cutting, the ideal cutting conditions can only be achieved in laboratories or in theoretical analysis, resulting in difficulty in building an effective prediction model for real-world milling. We must obtain the assumed parameters of the model through many experiments and use various optimization techniques to improve the model in order to set the cutting conditions [1].
ANNs have long been used to optimize cutting processes, such as tool wear monitoring and surface roughness prediction. Das et al. used the back propagation algorithm for training the neural network of turning carbide inserts, and the system showed potential for successful tool wear monitoring [2]. Chien et al. developed a predictive model for
the machinability of 304 stainless steel with ANNs to predict the surface roughness of the
workpiece, the cutting force, and the tool life. It was shown that the errors of the surface
roughness, the cutting force, and the tool life were 4.4, 5.3, and 4.2%, respectively [3].
Karabulut et al. used ANNs and variance analysis results to predict the surface roughness
values of compacted graphite iron after a face milling process [4]. The results showed a
strong correlation between the lead angle, chip thickness, and surface quality. The surface
roughness values were improved with the increasing lead angle value.
During the early development of this technology, the detection and control of cutting
forces were expected to optimize the milling results. Tsai et al. employed an accelerom-
eter and a proximity sensor in the milling process and collected vibration and rotation
data [5–10]. The spindle speed, feed rate, depth of cut, and vibration average per revolution
(VAPR) were used as input parameters to develop a backpropagation-based artificial neural
network (ANN) model to predict the surface roughness. The proposed ANN model had
a very high accuracy rate (96–99%) in predicting surface roughness. The resulting high
accuracy proved that an ANN can make accurate real-time predictions of surface roughness
during end milling. Alique et al. established a versatile neural network model with a single
hidden layer [6]. Input parameters, such as the feed rate and depth of cut, were applied to
predict the average cutting force under different conditions. The model could be used for
monitoring, adaptive control, and the real-time prediction of surface roughness and cutting
tool vibration. Cus et al. predicted the cutting force of a ball nose cutter using a three-layer
ANN [7]. The cutting speed, feed rate, radial and axial depth of cut, and cutter diameter
were selected as the machining parameters to predict the components of the cutting force
during the milling process, with a prediction error within ±4%. Kadirgama et al. employed
an ANN to predict the cutting force for milling 618 stainless steel [8]. The cutting speed,
feed rate, axial depth of cut, and radial depth of cut were the input parameters, and the
cutting force was the output. The range of error was approximately 12%. The error of
the prediction was acceptable. According to the literature, through data training, ANN
models can predict the cutting force during milling under different cutting conditions.
Nevertheless, in this stage of development, the models still had limited applications in
these experimental environments and conditions.
In recent years, advancements in sensor technology have improved the transmission
method and data size of signals. Signal data can be captured, recorded, and transmitted
back in real time during the machining process, so that substantial machining data can be
obtained. In addition, big data analysis has become possible because of recent improve-
ments in computing and data storage. Artificial neural algorithms developed using deep
learning can extract features from data. If the machining data retrieved by sensors can be
analyzed using ANNs, effective prediction models can be established.
A wireless sensory tool holder can be applied to machine tools in which the loads must
be dynamically monitored for real-time monitoring and process recording. Ye used the
sensory tool holder system for analysis during the rough machining of turbine blades and
improved the planning process to shorten the processing time [11]. Chen et al. employed
a sensory tool holder to measure the cutting force when milling thin-walled parts [12].
The cutting force was used as the load to determine the elastic deformation of the parts,
and the volume error was offset by the deformation data so as to correct the processing
path. This method successfully increased the machining accuracy and efficiency. Lu et al.
collected signals in the machining process with a wireless sensory tool holder and extracted
their features [13]. The deep forest algorithm was applied to estimate the surface quality.
The accuracy of the monitoring model for the training sets reached 99.54%, and it reached
90.91% in the case of the validation sets. This approach ensured the surface quality and
increased the machining efficiency. The use of wireless sensory tool holders in machining
could be expanded in the future.
With the development of deep learning, various ANN models have been established,
the most common of which are multilayer perceptron (MP), deep neural network (DNN),
convolutional neural network (CNN), recurrent neural network (RNN), and long short-term memory (LSTM). The different connection and transmission methods of the models
produce different analytical results. Accordingly, we input the same machining signals into
different models for analysis to compare their prediction accuracies.
Although ANNs have been widely used to predict the effects of cutting parameters
on machining results, most studies have used the machining conditions as the input. Few
studies have extracted real-time machining signals as the input for ANNs. At the same time,
there are few studies discussing the differences between different models based on cutting
analysis. In the present study, we employed a sensory tool holder to collect cutting force
signals during machining and converted the signals through Fourier transform for feature
processing. Subsequently, the data were input into three different ANNs (DNN, CNN, and
LSTM) for training. The goal of this training was to measure the surface roughness and
dimensional accuracy after machining. After the training was completed, the data not used
for training were used for testing to determine the training effects and prediction error rates
of the models. Finally, by comparing the prediction accuracy and analytical efficiency of
these three models, we aimed to identify a model with a high accuracy (with a percentage
error of prediction below 10%) and the shortest computing time. The identified model can
facilitate real-time surface roughness and machining accuracy prediction.
In the following, the second section introduces the methods and instruments used in
this study, including the experimental operation process and the setting of the mathematical
model. The third section presents the data training results and error analysis. The fourth section presents the conclusions.

2. Materials and Methods


2.1. Artificial Neural Network (ANN)
Machine learning is an approach used to realize artificial intelligence. By using
algorithms, machine learning can replace the previous methods by discovering rules and
forming judgements after repeated experiments. Deep learning, a branch of machine
learning, was initially a stagnant field due to its insufficient computational resources and
efficiency. With recent improvements in hardware, particularly the emergence of high-
quality graphics processing units and the rise of big data, deep learning has become the
mainstream method of machine learning. An ANN is a type of mathematical, biomimetic
neural network model and is the basis of the current deep learning models. Composed of
artificial neurons, it contains an input layer, hidden layers, and an output layer. Data and
signals can be stored or learned by such models. The calculation of a neuron is conducted
through the functions of addition, subtraction, multiplication, and division. The variables,
activation functions, errors, and weights input into the models are converted into output
values. The most commonly applied activation functions are the Sigmoid function, the rectified linear unit (ReLU), and the hyperbolic tangent function. To construct an ANN, the parameters
are set manually. Users should determine the appropriate number of neurons and layers
in the model according to their requirements and the correct weights through repeated
training. The numerous different neural networks developed up to the present day have
produced satisfactory results in fields such as machine vision, speech recognition, natural
language processing, and biomedicine.
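To make the neuron calculation above concrete, the following minimal Python sketch (ours, purely illustrative) computes the output of a single artificial neuron from weighted inputs, a bias, and a ReLU activation:

```python
import numpy as np

def neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """One artificial neuron: weighted sum followed by a ReLU activation."""
    z = float(np.dot(inputs, weights) + bias)
    return max(0.0, z)  # ReLU; a Sigmoid or hyperbolic tangent could be used instead

# Example: three input values converted into a single output value
print(neuron(np.array([0.2, 0.5, 0.1]), np.array([0.4, -0.3, 0.8]), bias=0.05))  # ~0.06
```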
ANN models can use various types of deep learning architectures, including MP,
DNN, CNN, RNN, and LSTM. Different models have been used to predict machining
results and achieve machine adaptive control. Lai et al. proposed a hybrid recurrent neural
network (HRNN) model on the basis of a diagonal recurrent neural network [14]. The
constant force control applied during machining can be used to verify the effectiveness
of the model through simulations and tests. Huang developed a new intelligent neural
fuzzy system to assess surface roughness in an end milling operation [15]. The model
implemented the neural-assisted method to generate the fuzzy IF–THEN rules and obtain
higher accuracy in surface roughness prediction. Huang et al. adopted a holistic local
LSTM model (HLLSTM) to capture data features and retrieved diachronic machining
signals from a triaxial accelerometer for training and testing in order to establish a deep-
learning-based tool wear prediction system [16]. The results of the HLLSTM model were compared with those of CNN and LSTM models, and the HLLSTM model was proven to have a more satisfactory performance. Huang et al. proposed a deep convolutional neural network (DCNN) based on multi-domain feature fusion to predict tool wear [17]. The performance of the prediction method was experimentally validated using a three-flute ball nose tungsten carbide cutter for dry milling on a high-speed CNC machine tool. Chan et al. also conducted tool wear prediction with an HLLSTM model [18]. The model could reduce the average error with respect to the actual tool wear values and accurately predict tool wear.
2.2. Experiment Procedure
In this study, we applied a full factorial design to determine the cutting parameters and employed a five-axis machine for milling. The milling machine was a 5-axis machining center CT-350, manufactured by Tongtai Inc., Kaohsiung, Taiwan, equipped with a numerical control (Siemens 840D sl). The axes of the machine are shown in Figure 1. The workpieces were 80 mm × 80 mm × 60 mm SUS304 stainless steel hexahedrons. Table 1 lists the mechanical properties and chemical composition of SUS304. In the cutting process, the four sides in the XY plane of the hexahedron were milled using the side edge, and the cutting depth was in the Z direction. The processing path was generated using Siemens NX, as displayed in Figure 2 [19,20]. A Ø 10 mm tungsten steel end mill from Chin Ming Precision Tools Co., Tainan, Taiwan, was used for the side milling. The specifications of the tool are presented in Table 2. During the machining process, a sensory tool holder (Pro-micron GmbH & Co. KG, Kaufbeuren, Germany) was used to collect the cutting force signals. The specifications of the sensory tool holder are presented in Table 3. It could measure the axial cutting force, the cutting torque, and the bending moment in the X-Y direction and send the data to a computer wirelessly. We wrote a neural network program in Python and then extracted features from the captured cutting force data. The compiler was Colaboratory, which is a product from Google Research and is free to use. Finally, we imported the features into the program for the model training and prediction.

Figure 1. Sensory tool holder and machining center.


Table 1. Mechanical properties and chemical composition of SUS304.

Mechanical Properties
  Density (g/cm³): 8.0
  Poisson's Ratio: 0.29
  Young's Modulus (GPa): 193
  Ultimate Stress (MPa): 520
  Yield Stress (MPa): 205
  Hardness (Rockwell): 70

Chemical Composition
  Iron (Fe): 66.0–74.0%
  Chromium (Cr): 18.0–20.0%
  Nickel (Ni): 8.0–10.5%
  Carbon (C): ≤0.08%
  Silicon (Si): ≤1.00%
  Manganese (Mn): ≤2.00%
  Phosphorus (P): ≤0.045%
  Sulfur (S): ≤0.03%

Figure 2. Workpiece and processing path.

Table 2. Specification of the tool.

Solid Carbide End Mills
  Coating: TiAlN
  Flutes: 4
  Diameter (mm): 10
  Helix angle (degree): 45
Table 3. Specification of the sensory tool holder.

  Measuring frequency (Hz): 2500
  Maximum allowable speed (rpm): 18,000
  Operating temperature (°C): 0~50
  Collet size: ER20
  Spindle taper: HSK63A
  Diameter (mm): 34
  Total length (mm): 100

To differentiate the surface roughness and dimensional accuracy after processing, we selected the cutting speed, feed per tooth, axial depth of cut, and radial depth of cut as the four factors and set three factor levels to conduct a full factorial experiment. The machining parameters are shown in Table 4. A total of 81 tests were designed, and each was conducted twice, resulting in 162 datasets. We conducted side milling on cubic workpieces. Face milling was first applied to the surface. Each workpiece had eight machined surfaces, including four upper and four lower. After the machining was completed, a measuring instrument (Hommel-Etamic T8000) was employed to estimate the surface roughness, as displayed in Figure 3. Each machined surface was measured three times to obtain an average value. A TESA-hite Magna 400 height gauge was used to measure the machined surfaces and the datum surface. The positions of three points were measured to obtain the mismatch and average values in order to obtain the machining dimension error, as illustrated in Figure 4.
Table 4. Machining parameters.

Level   Cutting Speed (m/min)   Feed per Tooth (mm/tooth)   Axial Depth of Cut (mm)   Radial Depth of Cut (mm)
1       40                      0.03                        5                         0.05
2       70                      0.06                        10                        0.1
3       100                     0.1                         20                        0.2
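As an illustration of the full factorial design described above, the short Python sketch below (not the authors' code) enumerates the 81 cutting conditions of Table 4 and the 162 experimental runs obtained by repeating each condition twice:

```python
from itertools import product

# Factor levels taken from Table 4.
cutting_speed = [40, 70, 100]        # m/min
feed_per_tooth = [0.03, 0.06, 0.1]   # mm/tooth
axial_depth = [5, 10, 20]            # mm
radial_depth = [0.05, 0.1, 0.2]      # mm

# Full factorial design: 3^4 = 81 cutting conditions, each run twice -> 162 datasets.
conditions = list(product(cutting_speed, feed_per_tooth, axial_depth, radial_depth))
runs = [condition for condition in conditions for _ in range(2)]
print(len(conditions), len(runs))    # 81 162
```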

Figure 3. Roughness measurement.

Figure 4. Dimensional measurement (the datum plane, measured plane, and measured value are indicated on the SUS304 workpiece).

2.3. Signal Preprocessing
The sensory tool holder could collect three types of cutting force signals (i.e., tension, torque, and the bending moment). We adopted the bending moment as the basis for the side milling evaluation (Figure 5). We observed that the bending moment increased to 9~12 N·m after the tool came into contact with the workpiece during the side milling process. After the tool left the workpiece, the bending moment decreased. The sample interval of the tool holder was 0.0004 s, and the measuring frequency was 2500 Hz. During machining, 2 s of signals were captured, and 5000 samples were retrieved for each dataset. To achieve a satisfactory training result, we conducted feature extraction before inputting the data into the ANN models for training. The purpose of feature extraction was to obtain essential and meaningful features from the raw data and increase the analytical efficiency. The bending moment signals were converted from the time domain to the frequency domain using a Fourier transform technique. The bandwidth of the original signals was 2500 Hz, whereas the effective bandwidth of the signals after the fast Fourier transform was 1250 Hz. Therefore, the number of points in each dataset was half of the original: 2500. Figure 6 illustrates the idling spectrum for a 6000 RPM spindle speed. A peak is observable at 100 Hz. The other three recorded peaks in Figure 6 refer to the natural frequency of the sensory tool holder. Figure 7 displays the spectrum during cutting. The cutting speed is 70 m/min, and the spindle speed is 2228 RPM. A peak is observable at 37 Hz. In addition to the rotation frequency, the harmonic frequencies also had peak values of 74 Hz, 112 Hz, and 186 Hz.

Figure 5. The bending moment signals.

Figure 6. Bending moment spectrum under idling (spindle speed: 6000 rpm).

Figure 7. Bending moment spectrum under cutting (spindle speed: 2238 rpm).
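As a concrete illustration of the preprocessing described in Section 2.3, the sketch below (ours, with hypothetical function and variable names, assuming NumPy) converts one captured 2 s bending moment record into the 2500-point spectrum used as the network input:

```python
import numpy as np

SAMPLE_RATE_HZ = 2500          # sensory tool holder measuring frequency
CAPTURE_SECONDS = 2            # 2 s captured per cut -> 5000 samples

def extract_features(bending_moment: np.ndarray) -> np.ndarray:
    """Convert a 5000-sample bending moment signal into a 2500-point spectrum."""
    assert bending_moment.shape[0] == SAMPLE_RATE_HZ * CAPTURE_SECONDS
    spectrum = np.abs(np.fft.rfft(bending_moment))   # one-sided FFT, 0-1250 Hz
    return spectrum[:2500]                           # 2500 features per dataset

# Example with a synthetic signal containing the 37 Hz rotation-frequency component
t = np.arange(SAMPLE_RATE_HZ * CAPTURE_SECONDS) / SAMPLE_RATE_HZ
demo_signal = 10 + 0.5 * np.sin(2 * np.pi * 37 * t)
features = extract_features(demo_signal)
print(features.shape)  # (2500,)
```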
2.4. Modeling Set-Up
Before inputting the data into the ANNs for training, the parameters of each ANN were set. Parameters common to all the ANNs were the stride, learning rate, and batch size. The learning rate affects the number of strides, so that a lower learning rate requires more strides during training. In this study, the learning rate was set as 0.00015. The mean-square error (MSE) was used as the loss function to determine the training result, and the root mean-square error (RMSE) was further used to evaluate the test results. The training revealed that after a certain number of strides, the loss function and RMSE no longer changed significantly. After observing the convergence of the models, we set the stride to 1000 and the batch size to half of the training data.
In total, 162 experimental data points were obtained. The training methods could be divided into training on all the data collectively and training on the classified data in turn. When training the classified data, we divided the data into three sets according to the three levels of each of the four factors (i.e., the cutting speed, feed per tooth, axial depth of cut, and radial depth of cut). Each set had 54 data points. The training result for each set of data was determined using MSE as the loss function, and four data points were randomly selected to test for accuracy with RMSE. Convergence was achieved after three tests, and the mean absolute percentage error was calculated.

2.4.1. Convolutional Neural Network (CNN)
A CNN is a type of deep learning model that is mainly used for image recognition. It can effectively conduct feature identification and learning, as well as data analysis, minimizing the data size. It comprises an input layer, multiple hidden layers, and an output layer. The hidden layers are composed of convolutional layers, pooling layers, and fully connected layers. The convolutional layers are mainly responsible for feature extraction and can achieve superior spatial feature learning. The pooling layers filter the feature data and retain the essential features to downsize the data, effectively reducing the difficulty of the training.
The input data for this study were obtained in time series order. Time series data are suitable for storage as a one-dimensional matrix. Thus, for the CNN model, we set the number of input channels to one and the input data format as a 1-by-2500 matrix. In order to choose a reasonable model size, the model contained a convolutional layer and a pooling layer, as presented in Figure 8. The convolutional kernel size was 500, and the stride (the step of each movement of the convolution kernel) was 300. The zero padding was 200. The depth slice of the pooling layer was 3, and the stride of the depth slice was 2. Finally, we set the number of output channels to 16.

Figure 8. CNN model.
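The CNN configuration described in Section 2.4.1 can be written down directly in a deep learning framework. The paper does not state which library was used, so the PyTorch sketch below is only an illustration under our own assumptions (the class name, ReLU activation, single regression output, and Adam optimizer are ours); the kernel size, stride, zero padding, pooling, channel counts, learning rate, and MSE loss follow the text:

```python
import torch
import torch.nn as nn

class CuttingForceCNN(nn.Module):
    """1D CNN matching the layer sizes described in Section 2.4.1 (illustrative only)."""
    def __init__(self) -> None:
        super().__init__()
        # 1 input channel, 16 output channels, kernel 500, stride 300, zero padding 200.
        self.conv = nn.Conv1d(1, 16, kernel_size=500, stride=300, padding=200)
        self.pool = nn.MaxPool1d(kernel_size=3, stride=2)
        self.head = nn.Linear(16 * 4, 1)        # regression output (Ra or dimension error)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = torch.relu(self.conv(x))            # (batch, 1, 2500) -> (batch, 16, 9)
        x = self.pool(x)                        # (batch, 16, 9) -> (batch, 16, 4)
        return self.head(x.flatten(1))

model = CuttingForceCNN()
criterion = nn.MSELoss()                                      # MSE loss from Section 2.4
optimizer = torch.optim.Adam(model.parameters(), lr=0.00015)  # learning rate from Section 2.4
print(model(torch.randn(8, 1, 2500)).shape)                   # torch.Size([8, 1])
```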

2.4.2. Deep Neural Network (DNN)
A DNN, as the name implies, is a neural network with dozens or hundreds of hidden layers. Each layer contains many neurons, and each neuron transmits its weighted output to a neuron in the next layer. Users must set appropriate parameters according to the project requirements. The activation function of the architecture is mainly used for nonlinear conversion, and the loss function is used to estimate the difference between the predicted and actual values. The model can be divided into two parts, namely the forward-propagation and backpropagation networks.
Since the number of points in each input dataset was 2500, the number of nodes in the input layer of the DNN model was 2500. We had to adjust the number of hidden layers and the number of neurons to obtain the desired convergence. A model with more hidden layers is more complex and may cause overfitting. To reduce the complexity of the model and avoid overfitting, the number of hidden layers can be reduced. The configuration of the hidden layers and nodes in the DNN model after the convergence analysis is displayed in Figure 9.

Figure 9. DNN model.
2.4.3. Long Short-Term Memory (LSTM)
An LSTM network is a special type of RNN. It was developed to solve the problem of vanishing and exploding gradients during training. Gradients vanish and explode because model weights disappear or become excessively large due to repeated multiplication during backpropagation. By using a gating mechanism, an LSTM network solves this problem: an input gate, a forget gate, and an output gate enable backpropagation for the identification of time series correlations in the data.
Our LSTM model had an LSTM layer and an output layer. The number of input nodes was 500, and the number of nodes in the LSTM layer was 32. Before the data were input, they were dimensionally transformed and rearranged into the form required by the LSTM model. MSE was used as the loss function, and RMSE was used to evaluate the training results.
the form requiredlayer.
by theThe
LSTM number
model. of inpu
MSE was used as the loss function, and RMSE was used to evaluate the training
was 500, and the number of nodes in the LSTM layer was 32. Before the data wer results.
they
3. were
Results dimensionally
and Discussion transformed and rearranged into the form required by th
model. MSE was
3.1. Surface Roughness used as the loss function, and RMSE was used to evaluate the
results.
We employed a surface-roughness-measuring instrument, obtained 162 sets of rough-
ness data, and arranged them from small to large, as illustrated in Figure 10. The distribu-
tion graphs ofand
3. Results the roughness
Discussion in terms of the four control factors are shown in Figures 11–14.
The results revealed that the three subcollections of surface roughness data corresponded
3.1.
to theSurface Roughness
three different feed per tooth values. The measured roughness values of the feed per
tooth 0.03
We (mm/t)
employed were within the range of 0.12 to 0.2 (µm), those
a surface-roughness-measuring of the feed per
instrument, tooth
obtained 162
0.06 (mm/t) were within the range of 0.34 to 0.46 (µm), and those of the feed per tooth
roughness data, and arranged them from small to large, as illustrated in Figure
0.1 (mm/t) were within the range of 0.67 to 1.08 (µm). However, the remaining factors had
distribution
no graphs
evident influence of the
on the roughness
surface roughnessin terms of the
distribution, and four control values
the roughness factors are show
were
ures 11–14.
uniform withinThe results
the range revealed
of 0.12 to 1.08that
(µm).the
Thethree subcollections
feed per of surface
tooth was a crucial variableroughn
corresponded to the three different feed per tooth values. The measuredper
that affected the roughness. Therefore, the data were divided according to the feed roughnes
tooth
of theduring
feedthe
perclassification
tooth 0.03training.
(mm/t) The training
were results
within arerange
the displayed in Figure
of 0.12 15. The
to 0.2 (μm), thos
prediction results of these three models obtained based on the classification had a mean
feed per tooth 0.06 (mm/t) were within the range of 0.34 to 0.46 (μm), and those of
error percentage within 10%. However, the results of the whole-data training indicated
per tooth 0.1 (mm/t) were within the range of 0.67 to 1.08 (μm). However, the re
factors had no evident influence on the surface roughness distribution, and the ro
values were uniform within the range of 0.12 to 1.08 (μm). The feed per tooth was
variable that affected the roughness. Therefore, the data were divided accordin
3, 12, x FOR PEER REVIEW 10 of 16

3, 12, x FOR PEER REVIEW 10 of 16

more
Electronics 2023, 12, 981than
10%, the CNN and LSTM models had a minimum percentage error of approx- 10 of 17
imately 20%, and the DNN had a minimum percentage error of 50%. It can be speculated
more than 10%, the CNN and LSTM models had a minimum percentage error of approx-
that if we had classified the data for the feed per tooth in the early stage of data processing,
imately 20%, and the DNN had a minimum percentage error of 50%. It can be speculated
each model could thathave obtained
the mean accurate predictions.
error percentage When
of the prediction whole-data
results of the threeanalysis wasmore than
that if we had classified the data for the feed per tooth in the early stage of datamodels was
processing,
used, only the CNN10%,andthe LSTM
CNN and LSTMhad
models models had a minimum
reasonable accuracy, percentage
and the DNN error of approximately
model had 20%,
each model could have obtained accurate predictions. When whole-data analysis was
and the DNN had a minimum percentage error of 50%. It can be
low accuracy, which is consistent with the results of the literature. In terms of the analyt- speculated that if we had
used, only the CNN and LSTM
classified the models
data for thehad
feedreasonable
per tooth inaccuracy,
the early and the
stage of DNN
data model had
processing, each model
ical efficiency, whether whole-data analysis or classification analysis was applied, the
low accuracy, which
couldishave
consistent
obtainedwith the results
accurate of theWhen
predictions. literature. In terms
whole-data of thewas
analysis analyt-
used, only the
CNN model had the fastest computing speed, followed by the LSTM model. The DNN
ical efficiency, whether
CNN andwhole-data
LSTM models analysis or classification
had reasonable accuracy, andanalysis
the DNN wasmodel
applied, the accuracy,
had low
model required the longest computing time. The model calculation times are shown in
CNN model hadwhich the fastest computing
is consistent speed,
with the resultsfollowed by the LSTM
of the literature. In terms model.
of the The DNNefficiency,
analytical
Figure 16.
model required whether
the longest whole-data analysis
computing time.or classification analysis wastimes
The model calculation applied,
arethe CNNin
shown model had
Figure 16. the fastest computing speed, followed by the LSTM model. The DNN model required the
1.2 longest computing time. The model calculation times are shown in Figure 16.

1.2
Ra(μm)

1
Ra(μm)

0.81
Roughne,

0.8
0.6
Roughne,

0.6
0.4
0.4
Surface

0.2
Surface

0.2
0
0 20 40 60 80 100 120 140 160
0
0 20 40 60 Case number
80 100 120 140 160
Case number
Figure 10. Surface roughness distribution.
Figure 10. Surface roughness distribution.
Figure 10. Surface roughness distribution.
1.2
Ra(μm)

1.2
1
Ra(μm)

0.81
Roughness,

0.8
0.6
Roughness,

0.6
0.4
0.4
0.2
Surface

0.2
0
Surface

0 0.03 0.06 0.09 0.12


0
0 0.03 Feed per tooth
0.06(mm/tooth) 0.09 0.12
Feed
Figure 11. Roughness per tooth
distribution (mm/tooth)
graph in terms of feed per tooth.
Figure 11. Roughness distribution graph in terms of feed per tooth.
Figure 11. Roughness distribution graph in terms of feed per tooth.
, 12, x FOR PEER REVIEW 11 of 16
23, 12, x FOR PEER REVIEW 11 of 16
3, 12, x FOR PEER REVIEW 11 of 16
Electronics 2023, 12, 981 11 of 17
1.2
1.2
1
Ra(μm)
1.2
Ra(μm) 1
0.8
1
Ra(μm)

0.8
Roughness,

0.6
0.8
Roughness,

0.6
0.4
Roughness,

0.6
0.4
0.2
Surface

0.4
0.2
Surface

0
0.2 0 0.05 0.1 0.15 0.2 0.25
Surface

0
Radial0.1
0 0 depth of cut0.15
0.05 (mm) 0.2 0.25
Radial
0 0.1depth of cut
0.05 (mm)
0.15 0.2 0.25
Radial
Figure 12. Roughness distribution graph depth
in terms of cut
of radial (mm)
depth of cut.
Figure 12. Roughness distribution graph in terms of radial depth of cut.
Figure 12. Roughness distribution graph in terms of radial depth of cut.
Figure1.2
12. Roughness distribution graph in terms of radial depth of cut.
Ra(μm)

1.2
1
Ra(μm)

1.2
1
Ra(μm)

0.8
Roughness,

1
0.8
Roughness,

0.6
0.8
Roughness,

0.6
0.4
0.6
Surface

0.4
0.2
Surface

0.4
0.2
Surface

0
0.2
00 5 10 15 20 25
Axial depth
0 0 10 of cut (mm)
5 15 20 25
0
Axial10 5 15
depth of cut (mm) 20 25
Axial
Figure 13. Roughness distribution graph depth
in terms of cut
of axial (mm)
depth of cut.
Figure 13. Roughness distribution graph in terms of axial depth of cut.
Figure 13. Roughness distribution graph in terms of axial depth of cut.
Figure1.2
13. Roughness distribution graph in terms of axial depth of cut.
1.2
Ra(μm)

1
1.2
Ra(μm)

1
0.8
Ra(μm)

1
Roughness,

0.8
0.6
Roughness,

0.8
0.6
Roughness,

0.4
0.6
0.4
Surface

0.2
0.4
Surface

0.2
0
Surface

0.2
00 20 40 60 80 100 120
0 0 Cutting
40 speed
20 60 (m/min) 80 100 120
040Cutting speed
20 60 (m/min) 80 100 120
Figure 14. Roughness distribution graph in terms of cutting speed.
Cutting speed (m/min)
Figure 14. Roughness distribution graph in terms of cutting speed.
Figure 14. Roughness distribution graph in terms of cutting speed.
Figure 14. Roughness distribution graph in terms of cutting speed.
, 12, x FOR PEER REVIEW 12 of 16
3, 12, x FOR PEER REVIEW
Electronics 2023, 12, 981 12 of 16 12 of 17

Figure 15. Percent error


Figureof15.
roughness prediction.
Percent error of roughness prediction.
Figure 15. Percent error of roughness prediction.

Figure 16. Comparison of computing time between models.


Figure 16. Comparison of computing time between models.
Figure 16. Comparison of computing time between models.
3.2. Dimensional Accuracy
The error values of the dimensional accuracy were arranged from small to large, as illustrated in Figure 17. The results revealed that negative deviations in the cutting amount occurred during stainless steel cutting. That is, residual material was detected on the parts. The reason for this is that stainless steel has superior toughness. Plastic deformation occurs on a large scale during the cutting process; hence, cutting the parts to the finished size is difficult. To finish cutting, compensation for the errors and additional cutting is required, a finding that is consistent with our experimental results. The dimensional accuracies for the different cutting factors were identified. However, unlike the distribution of the roughness, no evident subcollection distribution was observed for the different feeds per tooth (Figure 18). For the different radial depths of cut, the dimensional error of the minimum radial depth of 0.05 mm had larger values, as shown in Figure 19. For the different axial depths of cut, the dimensional error of the maximum axial depth of 20 mm had larger values, as shown in Figure 20. The different cutting speeds had no significant effect on the dimensional error after removing the outliers, as shown in Figure 21.
The training results are presented in Figure 22. For the machining accuracy, no obvi-
The training results are presented in Figure 22. For the machining accuracy, no obvi-
ous trend was observed during the training of the classified data. This result was different
ous trend was observed during the training of the classified data. This result was different
from that observed for the surface roughness. The prediction errors of the three models
from that observed for the surface roughness. The prediction errors of the three models
were close, and the values were within the range of 7% to 28%. When whole-data analysis
were close, and the values were within the range of 7% to 28%. When whole-data analysis
was used, the DNN model had the lowest percent error (6.99%), followed by the CNN
was used, the DNN model had the lowest percent error (6.99%), followed by the CNN
presentedin
presented
presented inFigure
in Figure23.
Figure 23.The
23. Theresult
The resultwas
result wasthe
was thesame
the sameas
same asthat
as thatfor
that forthe
for thesurface
the surfaceroughness.
surface roughness.Whether
roughness. Whether
Whether
we used
we
we used whole-data
used whole-data analysis
whole-data analysis or
analysis or classification
or classification analysis,
classification analysis, the
analysis, the CNN
the CNN model
CNN model had
model had the
had the fastest
the fastest
fastest
Electronics 2023, 12, 981
computingspeed,
computing
computing speed,followed
speed, followedby
followed bythe
by theLSTM
the LSTMmodel,
LSTM model,and
model, andDNN
and DNNrequired
DNN requiredthe
required themost
the mosttime.
most time.If
time. IfIfan
an
an 13 of 17
accurateand
accurate
accurate andefficient
and efficientmodel
efficient modelis
model isrequired,
is required,CNN
required, CNNis
CNN isstill
is stillthe
still theideal
the idealmodel.
ideal model.
model.

Figure17.
Figure
Figure 17.Dimensional
17. Dimensional
Dimensional accuracy
Figure 17.accuracy
accuracy distribution.
distribution.
distribution.
Dimensional accuracy distribution.

Figure18.
Figure
Figure 18.Distribution
18. Distribution
Distribution graph
Figure 18.graph
graph ofdimensional
of
of dimensional
dimensional
Distribution accuracy
graph ofaccuracy
accuracy ininterms
in
dimensional terms
terms offeed
of
of feed pertooth.
in per
feed
accuracy per tooth.
tooth.
terms of feed per tooth.

Figure 19. Distribution graph of dimensional accuracy in terms of radial depth of cut.
ronics 2023,
ronics 2023, 12,
12, xx FOR
FOR PEER
PEER REVIEW
REVIEW 14 of
14 of 16
16

Electronics 2023, 12, 981 14 of 17

Figure 19.
Figure 19. Distribution
Distribution graph
graph of
of dimensional
dimensional accuracy
accuracy in
in terms
terms of
of radial
radial depth
depth of
of cut.
cut.

Figure 20.
Figure 20. Distribution
Distribution graph of dimensional
dimensional
Distribution
Figure 20.graph of accuracy
graph of accuracy in terms
dimensional
in terms of axial
accuracy
of axial depthofof
in terms
depth ofaxial
cut. depth of cut.
cut.

Figure 21.
Figure 21. Distribution
Distribution graph
Figure 21.graph of dimensional
of dimensional
Distribution accuracy
graph of accuracy in terms
in
dimensional terms of cutting
of
accuracycutting speed.
speed.
in terms of cutting speed.

The training results are presented in Figure 22. For the machining accuracy, no obvious
trend was observed during the training of the classified data. This result was different
from that observed for the surface roughness. The prediction errors of the three models
were close, and the values were within the range of 7% to 28%. When whole-data analysis
was used, the DNN model had the lowest percent error (6.99%), followed by the CNN
model (10.51%). The LSTM model performed the worst. The model calculation times are
presented in Figure 23. The result was the same as that for the surface roughness. Whether
we used whole-data analysis or classification analysis, the CNN model had the fastest
computing speed, followed by the LSTM model, and DNN required the most time. If an
accurate and efficient model is required, CNN is still the ideal model.

Figure 22.
Figure 22. Percentage
Percentage error
error of
of dimensional
dimensional accuracy
accuracy prediction.
prediction.
Electronics 2023, 12, x FOR PEER REVIEW 15 of 17

Electronics 2023, 12, 981 15 of 17

Figure 21. Distribution graph of dimensional accuracy in terms of cutting speed.


CNN DNN LSTM
60
Percentage Error (%)
50

40
27.95
30
21.93 21.26
20 14.43 16.73 16.84
11.71 11.41 11.94 10.51
10 7.33 6.99

0
0.03 0.06 0.1 ALL
Feed per tooth (mm/tooth)

Figure 22. Percentage error of dimensional accuracy prediction.


22. Percentage
Figure error
Figure 22. Percentage error of
of dimensional dimensional
accuracy accuracy prediction.
prediction.

CNN DNN LSTM


50
41.99
Computing time (Sec)

40
30.3 30.18 30.34
30
18.39
20 15.69
11.68 11.77 11.75
10 6.57 6.5 6.54

0
0.03 0.06 0.1 ALL
Feed per tooth (mm/tooth)
Figure 23. Comparison of computing time between models.

4. Conclusions
In this study, three different models were applied to extract features from machining data in order to train a neural network and predict the surface roughness and machining accuracy. Through the evaluation of the resulting data, the feasibility of the neural network for the prediction of the processing outcomes was confirmed. Additionally, through the preprocessing of the data and data grouping, the accuracy of the prediction could be improved. After comparing and discussing the results of these three models, we identified the model with a high accuracy (a percentage error of prediction below 10%) and the shortest computation time. The results are as follows:
1. The feed per tooth was the most influential factor affecting the surface roughness in this study. Different feeds per tooth produced significant differences in the surface roughness.
2. The machining results of stainless steel showed negative deviations in the cutting amount (with residual material) because of the material properties. This phenomenon is consistent with the literature.
3. When the surface roughness was predicted with the feed-per-tooth grouping, the three ANN models all had high accuracy, and the percentage errors were all below 10%. However, when whole-data training was used for surface roughness prediction, the percentage error increased significantly; for the DNN model, it exceeded 50%. (A minimal sketch of the two training schemes is given after this list.)
4. For the prediction of the dimensional accuracy, we could not determine whether whole-data training or classified-data training was preferable. However, the predictions were still accurate (the percentage error was below 20%). The most accurate approach was whole-data training of the DNN model.
5. When the DNN model was employed to predict the machining accuracy, whole-data training had the optimal performance. On the contrary, the DNN model was the worst model for whole-data training of the surface roughness. The CNN and LSTM models did not show substantially different prediction performance between whole-data training and classified-data training in terms of the dimensional accuracy.
6. For both surface roughness prediction and machining accuracy prediction, the computation time of the CNN model was the shortest, followed by the LSTM model. The DNN model had the longest computing time.
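The distinction between whole-data training and training by data classification, which drives Conclusions 3 to 5, can be organized as in the sketch below. This is a minimal Keras illustration under our own assumptions: the small fully connected regressor merely stands in for the DNN, CNN, and LSTM architectures compared above, and all function and array names are hypothetical.

import numpy as np
from tensorflow import keras

def build_model(n_features: int) -> keras.Model:
    # Small fully connected regressor used as a stand-in for the compared networks.
    model = keras.Sequential([
        keras.layers.Input(shape=(n_features,)),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(1),  # predicted roughness or dimensional deviation
    ])
    model.compile(optimizer="adam", loss="mae")
    return model

def train_whole(features, targets, epochs=50):
    # Whole-data training: one model fitted on every sample.
    model = build_model(features.shape[1])
    model.fit(features, targets, epochs=epochs, verbose=0)
    return model

def train_classified(features, targets, feed_per_tooth, epochs=50):
    # Training by data classification: one model per feed-per-tooth group.
    models = {}
    for f in np.unique(feed_per_tooth):
        mask = feed_per_tooth == f
        group_model = build_model(features.shape[1])
        group_model.fit(features[mask], targets[mask], epochs=epochs, verbose=0)
        models[float(f)] = group_model
    return models

Prediction then uses either the single whole-data model or, for classified training, the model belonging to the feed-per-tooth group of the query sample, and the percentage errors of the two schemes can be compared as in Figures 22 and 23.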

Author Contributions: Conceptualization, M.-H.T. and J.-N.L.; methodology, M.-H.T.; software, H.-D.T. and T.-L.H.; validation, M.-J.S.; formal analysis, H.-D.T.; investigation, H.-D.T.; resources, H.-S.C.; data curation, H.-D.T.; writing—original draft preparation, H.-D.T.; writing—review and editing, M.-H.T.; supervision, H.-S.C.; project administration, M.-H.T.; funding acquisition, J.-N.L. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Data Availability Statement: Not applicable.
Conflicts of Interest: The authors declare no conflict of interest.


Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.
