A Simplified Benchmarking Model For The Assessment of Dimensional Accuracy in FDM Processes
Decker, N. and Yee, A. (2015) ‘A Simplified Benchmarking Model for the Assessment of Dimensional Accuracy in
FDM Processes’, Int. J. Rapid Manufacturing, Vol. 5, No.2, pp.145–154.
http://dx.doi.org/10.1504/IJRAPIDM.2015.073573
Abstract
As the number of applications for additive manufacturing has increased in recent years, so has
the number of benchmarking objects used in the assessment of additive manufacturing processes.
These objects enable the assessment of printer and material performance in a number of areas;
one of which is dimensional accuracy. Many of these objects, however, take considerable time
and material to produce. Further, a large proportion of the volume of these test objects does not
directly contribute to the assessment of dimensional accuracy. This paper proposes a new
benchmarking object that addresses these issues. The object proposed in this paper is compared
with two other commonly used benchmarking objects in the areas of mass, print time, volume,
proportion of volume dedicated to measurable test features, and feature quantity. Further,
conditions under which a simplified benchmarking object might be employed are evaluated.
1. Introduction
Since its emergence in the late 1980s, the field of Additive Manufacturing (AM) has grown
exponentially (Wohlers 2014). With this growth has come an increasingly diverse array of
manufacturing processes and materials available to the consumer.
The desire to evaluate and draw comparisons across the vast field of AM machines and materials
has necessitated the use of benchmarking processes (Cruz Sanchez et al. 2014). These enable the
comparison and improvement of machines, materials, and process parameters based upon
preselected criteria, evaluating areas such as dimensional accuracy, mechanical performance, and
consumer worthiness (Islam et al. 2013, Robertson et al. 2013).
An important part of most benchmarking processes in additive manufacturing is the use of a
benchmarking object. These objects are produced and then measured for a variety of purposes
(Scaravetti et al. 2008). Benchmarking objects have been used to measure the dimensional
accuracy of AM produced parts (Johnson et al. 2011). They have also been used to measure the
mechanical properties and surface finishes produced by AM processes (Mahesh 2004). Because
of these many purposes, a wide variety of test objects exist, each with its own intended
specialization (Mahesh 2004). This paper proposes a benchmarking object design intended to
measure dimensional accuracy in fused deposition modeling (FDM) processes.
Parts designed to assess the dimensional accuracy of an AM system must meet a number of
criteria. Moylan et al. list several important criteria for benchmarking objects. The object should
be large enough to evaluate a printer over its entire build envelope, have a wide range of feature
sizes, and utilize both bosses and voids. Further, it should be easily measured, quick to build, and
should not require excessive amounts of print material (Moylan et al. 2012).
All six of these criteria might be important when the benchmarking object under consideration is
intended to evaluate additive manufacturing systems or process parameters. Situations exist,
however, where a benchmarking object that favors a short build time and minimal material usage
over print envelope coverage is advantageous.
With this in mind, this paper proposes a new benchmarking object designed to facilitate the
measurement of dimensional accuracy in FDM while reducing print time and material usage.
This benchmarking object is then compared with two other
benchmarking objects in the areas of mass, print time, volume, proportion of volume dedicated
to measurable test features, and feature quantity.
Figure 1. Flowchart of possible criteria for selection of a simplified benchmarking object
2. Benchmarking Object
The benchmarking object proposed in this paper (see Figure 2) is intended to assess dimensional
print accuracy. For the dimensions of the benchmarking object, see Figure 3. Cruz Sanchez et al.
describe four aspects of dimensional accuracy that a benchmarking object can measure: XY
Plane, Z Axis, Circular Features, and Thin Walls (Cruz Sanchez et al. 2014). This benchmarking
object is designed to measure the first three of these aspects.
Figure 2. Proposed benchmarking object
In order to achieve this purpose, the object has two cylindrical bosses, one cylindrical void, and
one rectangular void. It also has two inclines designed to reveal the stair-step effect (Kruth
2005). This effect occurs when a 3D printer creates a slope by fabricating incremental jumps in
height that resemble a staircase (Kattethota and Henderson 1998). Finally, the benchmarking
object has two overhanging planes to test the ability of the object to be printed without support
(Johnson et al. 2011).
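The severity of the stair-step effect can be estimated from simple geometry. As an illustration
added here (not part of the original study), assume a layer thickness t and an incline at angle
\alpha measured from the build platform; the sloped surface is then approximated by steps of
height t:

    % Stair-step ("cusp") geometry for an incline at angle \alpha from the horizontal,
    % printed with layer thickness t.
    \[
      h_{\mathrm{cusp}} \approx t \cos\alpha ,
      \qquad
      \text{horizontal step length} \approx \frac{t}{\tan\alpha} .
    \]

For example, the 0.25 mm layer thickness used later in this paper produces a maximum deviation
of roughly 0.18 mm from the nominal surface on a 45-degree incline.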
Because the proposed benchmarking object is not intended to cover the entirety of a printer’s
build envelope, its overall volume is much smaller than what would otherwise be necessary.
Further, a substantial proportion of this object’s overall volume is dedicated to critical features.
Because accessibility to a coordinate measurement system or a pair of calipers was considered
an important design requirement, the object was designed to be measured quickly and easily, at
the expense of including very small features (Mahesh 2004). This design choice proved
especially important when measuring the diameter of the cylindrical bosses at various points.
3. Methods
In order to assess the proposed part’s print time, material usage, and feature diversity, it was
compared with two other benchmarking objects. The first object (see Figure 4) was proposed by
Moylan et al. for the benchmarking of AM machines and processes (Moylan et al. 2012). This
test object has also been used in modified forms (Perez et al. 2013, Cruz Sanchez et al. 2014). It
features a large number of cylindrical and rectangular bosses and voids. It also has an inclined
plane to measure the stair-step effect.
The second object used for comparison (see Figure 5) has been utilized in previous research to
evaluate the dimensional accuracy and run time of several FDM machines (Grimm 2003). It has
also been used for these purposes in modified forms (Robertson et al. 2013, Choi et al. 2011). It
features two rectangular bosses, a rectangular void, two cylindrical bosses, and a cylindrical
void. For the purposes of this paper, the object was scaled to 80% of its original size. This
allowed the object to be printed on our 3D printer, and enabled a more equitable comparison
between the objects. This scaling is reflected in the calculations of volume, mass, and material
use throughout the paper.
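As a point of reference (a worked calculation added here, not taken from the original source),
uniform scaling to 80% of linear size reduces volume, and therefore mass and material use at a
given infill setting, by the cube of the scale factor:

    % Effect of uniform linear scaling on volume.
    \[
      s = 0.80 , \qquad \frac{V_{\mathrm{scaled}}}{V_{\mathrm{original}}} = s^{3} = 0.512 ,
    \]

so the scaled object occupies roughly half the volume of the full-size original.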
All three objects were printed on an UP Plus 2 3D printer using ABS filament. The printer was
left at its default settings: layer thickness was set to 0.25 mm, fill was set to loose, and print
quality was set to normal.
Print time was measured from the moment that the printer’s nozzle first made contact with the
build platform to the moment that the print was completed. The mass of the object was measured
using an electronic scale both before and after it was removed from the ABS raft created by the
printer. This allowed for a more complete assessment of the amount of material used to create
the benchmarking object.
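A minimal bookkeeping sketch of this measurement in Python; the masses shown are placeholder
values, not the measured results:

    # Mass of the printed object before and after removing the ABS raft.
    # Placeholder values in grams; the real numbers come from the electronic scale.
    mass_with_raft = 10.0
    mass_without_raft = 8.5

    raft_material = mass_with_raft - mass_without_raft
    print(f"Object: {mass_without_raft:.2f} g, "
          f"raft: {raft_material:.2f} g, "
          f"total material: {mass_with_raft:.2f} g")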
The proportion of each object’s overall volume dedicated to measurable test features was
calculated from the dimensions provided in Grimm 2003, and from an .stl file for the object
proposed by Moylan et al. 2012 provided by the National Institute of Standards and Technology
(NIST). This calculation is intended to reflect the efficiency of the object’s material use. For the
purpose of calculation, a measurable test feature was defined as any boss protruding from the
body of the benchmarking object. For the proposed benchmarking object, this included both
cylinders and all four triangular prisms.
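A sketch of this calculation in Python; the dimensions below are placeholders for illustration,
not the actual dimensions of any of the three objects:

    import math

    # Volume of each measurable test feature (bosses protruding from the body):
    # two cylindrical bosses and four triangular prisms for the proposed object.
    def cylinder_volume(diameter, height):
        return math.pi * (diameter / 2.0) ** 2 * height

    def triangular_prism_volume(base, height, length):
        return 0.5 * base * height * length

    feature_volumes = (
        [cylinder_volume(diameter=10.0, height=15.0)] * 2      # cylindrical bosses
        + [triangular_prism_volume(10.0, 5.0, 20.0)] * 4       # inclined/overhanging prisms
    )

    total_volume = 5000.0  # mm^3, overall object volume from the CAD model (placeholder)

    proportion = sum(feature_volumes) / total_volume
    print(f"Proportion of volume in measurable test features: {proportion:.1%}")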
Finally, feature diversity was quantified by determining the number of each type of test feature
found on each benchmarking object. For the benchmarking object proposed in Moylan et al.
2012, the lateral voids on the side of the object were counted as overhanging and inclined planes.
This reflects the intention for these features stated in Moylan et al. 2012. The two cylindrical
voids counted as having one overhanging and one inclined plane each. The quadrilateral voids
counted as having two overhanging and two inclined planes each.
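These counting rules can be summarised in a short Python sketch; the void inventory shown is a
hypothetical example rather than the full inventory of the Moylan et al. object:

    from collections import Counter

    # Overhanging and inclined planes contributed by each void type,
    # following the counting rules described above.
    PLANES_PER_VOID = {
        "cylindrical_void":   {"overhanging_plane": 1, "inclined_plane": 1},
        "quadrilateral_void": {"overhanging_plane": 2, "inclined_plane": 2},
    }

    def tally_planes(voids):
        """Count the overhanging and inclined planes implied by a list of voids."""
        counts = Counter()
        for void in voids:
            counts.update(PLANES_PER_VOID[void])
        return counts

    # Hypothetical inventory for illustration only.
    example_voids = ["cylindrical_void"] * 2 + ["quadrilateral_void"] * 3
    print(tally_planes(example_voids))
    # Counter({'overhanging_plane': 8, 'inclined_plane': 8})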
4. Results
The benchmarking object proposed by Moylan et al. had the greatest overall volume,
mass both with and without the ABS raft, and print time. The object used in
Grimm 2003 had the second greatest volume, mass, and print time, followed by the object
proposed in this paper (Figures 6, 7 and 8). Because the proposed object has large test features with a
small volume of material used to tie them together, it has the largest proportion of overall
volume dedicated to measurable test features. The object used in Grimm 2003 had the second
largest proportion of volume dedicated to measurable test features. This was partially due to the
size of its test features. Finally, the object proposed by Moylan et al. had the lowest overall
proportion of its volume dedicated to measurable test features. This is due to the large size of the
object, the small size of the test features, and the large number of voids with unused space
between them (Figure 9).
The benchmarking object proposed by Moylan et al. had the greatest overall feature diversity
with at least ten of each feature type. The benchmarking object proposed in this paper, and the
benchmarking object used in Grimm 2003 had roughly similar feature diversity, though the
benchmarking object proposed in this paper focused more on inclined and declined planes than
rectangular bosses. For these results, see Table 1.
Figure 7. Mass of each benchmarking object with and without the ABS raft
The above results make the strengths and weaknesses of the proposed benchmarking object
apparent. For a summary of these strengths and weaknesses, see below.
Strengths:
- Short print time and low material usage: the proposed object had the smallest volume, mass (with and without the raft), and print time of the three objects compared.
- The largest proportion of overall volume dedicated to measurable test features.
- Features sized to be measured quickly and easily with calipers or a coordinate measurement system.
Weaknesses:
- Does not evaluate a printer over its entire build envelope.
- Lower feature diversity than the object proposed by Moylan et al., with an emphasis on inclined and declined planes rather than rectangular bosses.
- Does not assess thin-wall accuracy.
References
Barclift, M. and Williams, C. (2012) ‘Examining Variability in The Mechanical Properties of
Parts Manufactured Via Polyjet Direct 3D Printing’ in Proceedings of the Solid Free-
Form Fabrication Symposium, Austin, Texas, pp.876–890.
Choi, J., Medina, F., Kim, C., Espalin, D., Rodriguez, D., Stucker, B. and Wicker, R. (2011)
‘Development of a mobile fused deposition modeling system with enhanced
manufacturing flexibility’, Journal of Materials Processing Technology, Vol. 211, No. 3,
pp.424-432.
Cruz Sanchez, F.A., Boudaoud, H., Muller, L. and Camargo, M. (2014) ‘Towards a standard
experimental protocol for open source additive manufacturing’, Virtual and Physical
Prototyping, Vol. 9, No. 3, pp.151-167.
Frascati, J. (2002). Effects of Position, Orientation, and Infiltrating Material on Three
Dimensional Printing Models. Unpublished MS thesis, University of Central Florida,
Orlando, Florida.
Grimm, T. (2003) ‘Fused deposition modeling: A technology evaluation’, Time Compression
Technologies, Vol. 2, No. 2, pp.1-6.
Islam, M.N., Boswell, B. and Pramanik, A. (2013) ‘An Investigation of Dimensional Accuracy
of Parts Produced by Three-Dimensional Printing’ in Proceedings of the World Congress
on Engineering, London, U.K., pp.522-525.
Johnson, W.M., Rowell, M., Deason, B. and Eubanks, M. (2011) ‘Benchmarking evaluation of
an Open Source Fused Deposition Modeling Additive Manufacturing System’ in
Proceedings of the Solid Free-Form Fabrication Symposium, Austin, Texas, pp.197–211.
Kattethota, G. and Henderson, M. (1998) Design Tool to Control Surface Roughness in Rapid
Fabrication. http://prism.asu.edu/publications/papers/paper98_dtcsrrf.pdf (Accessed 11
June 2015).
Kruth, J.P., Vandenbroucke, B., Van Vaerenbergh, J. and Mercelis, P. (2005) ‘Benchmarking of
Different SLS/SLM Processes As Rapid Manufacturing Techniques’ in International
Conference on Polymers and Moulds Innovations (PMI), Gent, Belgium, pp.1–7.
Mahesh, M. (2004). Rapid Prototyping and Manufacturing Benchmarking. Unpublished PhD
Thesis, National University of Singapore, Singapore.
Moylan, S., Slotwinski, J., Cooke, A., Jurrens, K. and Donmez, A. (2012) ‘Proposal for a
Standardized Test Artifact for Additive Manufacturing Machines and Processes’ in
Proceedings of the Solid Free-Form Fabrication Symposium, Austin, Texas, pp.902–920.
NIST Additive Manufacturing Test Artifact. [online]
http://www.nist.gov/el/isd/sbm/amtestartifact.cfm (Accessed 1 August 2015).
Perez, M.A., Ramos, J., Espalin, D., Hossain, M.S. and Wicker, R.B. (2013) ‘Ranking Model for
3D Printers’ in Proceedings of the Solid Free-Form Fabrication Symposium, Austin,
Texas, pp.1048-1065.
Robertson, D.A., Espalin, D. and Wicker, R.B. (2013) ‘3D printer selection: A decision-making
evaluation and ranking model’, Virtual and Physical Prototyping, Vol. 8 No. 3, pp.201-
212.
Scaravetti, D., Dubois, P. and Duchamp, R. (2008) ‘Qualification of Rapid Prototyping Tools:
Proposition of a Procedure and a Test Part’, International Journal of Advanced
Manufacturing Technology, Vol. 38, No. 7-8, pp.683-690.
Wohlers, T.T. (2014) Wohlers Report 2014: 3D Printing and Additive Manufacturing State of
the Industry Annual Worldwide Progress Report, Wohlers Associates Inc., Fort Collins,
CO.