Machining feature recognition using descriptors with range constraints for mechanical 3D models
Seungeun Lim1,a, Changmo Yeo1,b, Fazhi He2,c, Jinwon Lee3,d†, Duhwan Mun1,e*

1 School of Mechanical Engineering, Korea University, Seoul 02841, Republic of Korea
2 School of Computer Science, Wuhan University, Wuhan 430072, China
3 Department of Industrial & Management Engineering, Gangneung-Wonju National University, Gangwon-do 26403, Republic of Korea

a [email protected], b [email protected], c [email protected], d [email protected], e [email protected]

† Co-corresponding author
* Corresponding author
Abstract
In machining feature recognition, geometric elements generated in a three-dimensional
computer-aided design model are identified. This technique is used in manufacturability
evaluation, process planning, and tool path generation. Here, we propose a method of
recognizing 16 types of machining features using descriptors, often used in shape-based part
retrieval studies. The base face is selected for each feature type, and descriptors express the
base face's minimum, maximum, and equal conditions. Furthermore, the similarity in the three
conditions between the descriptors extracted from the target face and those from the base face
is calculated. If the similarity is greater than or equal to the threshold, the target face is
determined as the base face of the feature. Machining feature recognition tests were conducted
for two test cases using the proposed method, and all machining features included in the test
cases were successfully recognized. Also, it was confirmed through an additional test that the
proposed method in this study showed better feature recognition performance than the latest
artificial neural network.
1. Introduction
Machining features refer to specific shapes generated by cutting with machine tools in
manufacturing parts. Typical forms include holes, pockets, slots, and fillets. Machining feature
recognition means recognizing features from three-dimensional (3D) computer-aided design
(CAD) models for parts. It is used in various applications, including manufacturability
evaluation, process planning, and tool path generation.
However, the feature recognition method using descriptors proposed in the previous study [1] had
the following limitations. First, it misrecognized specific machining features such as closed
pockets. Moreover, it could not recognize fillets and chamfers. Consequently, the feature
recognition accuracy was low, and a rule-based recognition method had to be applied separately.
This study defines improved descriptors with enhanced information expression by applying
the concept of range constraints to the descriptors. In addition, we propose a feature recognition
method based on enhanced descriptors. The descriptor developed in the previous study focused
on the minimum constraint that the feature's base face must have. However, the improved
descriptors consider the maximum and equal conditions of the feature's base face.
The main contributions of this study are as follows. We propose the concept of descriptors
with range constraints and establish a machining feature recognition method based on them. All target
features in the two test cases could be recognized through similarity comparison using the
improved descriptors. These characteristics increase the feature recognition rate compared to
the previous descriptor-based method. Furthermore, the proposed method in this study showed
better feature recognition performance than the latest artificial neural network in an additional
test.
This paper is organized as follows. Section 2 reviews related works on feature recognition.
Section 3 discusses the descriptors to which range constraints were applied. Section 4 presents
the feature recognition method. Section 5 discusses the experimental results of recognized
features for test cases. Finally, Section 6 presents the conclusions and prospects for future
research.
2. Related Works
Recognition of machining features from 3D CAD models has been investigated thoroughly in
the literature. Typical feature recognition methods include graph-based, volume decomposition,
hint-based, similarity-based, hybrid, and deep learning-based approaches.
The graph-based method recognizes features by analyzing whether the subgraph matches a
specific feature pattern after expressing the relationship between the face and edge of the total
shape as a graph structure. Joshi and Chang [2] recognized machining features by using an
attributed adjacency graph (AAG) that encodes face-to-face adjacency relationships. In
addition, a heuristic method for identifying the components of a graph was proposed. However,
it had the problem of not recognizing the features of intersecting parts, such as T-slots. Chuang
and Henderson [3] proposed a method that configures a shape graph (vertex-edge graph) from
a solid model in the B-rep form, defines the regional shape patterns, and identifies patterns of
machining features. Using the vertex-edge (V-E) graph is advantageous because it is easy to
determine the patterns. However, since it only uses the V-E graph, it is limited to recognizing
shape patterns as simply interconnected face shapes. Gavankar and Henderson [4]
demonstrated that the protrusions and depressions in the edge-face graph of a B-rep model
comprise biconnected components and proposed a method of separating such connected
relationships. This graph theory has the advantage of the high efficiency of feature recognition,
and it is easy to add new features to be recognized. However, it had problems such as
inapplicability to blind holes and pockets that are open on two or more sides.
The volume decomposition method decomposes a volume with a complex shape into volumes
with simple shapes and then recognizes machining features from those with simple shapes [5].
Volume decomposition methods are subdivided into detailed methods such as convex
decomposition and cell-based decomposition.
The convex decomposition method generates volumes with simple shapes by decomposing a
volume with a complex shape into the convex hull and delta volumes. Tang and Woo [6]
attempted to recognize features using the alternating sum of volumes (ASV) technique, but it
is disadvantageous in that decomposition does not converge for specific shapes. Kim [7]
proposed alternating the sum of volumes with partitioning to address the drawbacks of
conventional ASV and recognized features by recognizing unique volume shapes through this
approach. The convex hull decomposition method can recognize features well, even for
intersecting features that are not recognized by graph-based and hint-based methods. However,
it has the disadvantage of not dealing with curved shapes such as rounds or fillets. The convex
hull decomposition method mainly removes fillets or rounds in advance to solve this problem.
The cell-based decomposition method identifies features after decomposing shapes into simple
cells and then composing a maximal volume by combining the cells. Kim and Mun [8]
proposed sequential and repetitive volume decomposition methods such as fillet-round-
chamfer decomposition, wrap-around decomposition, volume split decomposition, and non-
overlapping maximal volume decomposition. These methods have the advantage that the
number of cells can be significantly reduced, and the volume can be decomposed faster than
that in maximum volume decomposition. Sakurai and Dave [9] proposed a method of
decomposing shapes into minimal cells with simple shapes by expanding the surfaces of objects
and composing a maximal volume by combining such minimal cells. This method has the
advantage that it can easily recognize many volumes. Woo [10] proposed a faster alternative to
conventional cell-based decomposition methods. Here, the maximal volume of a solid input
model is a large simple volume without any concave edges. This cell-based method is
advantageous because, like the convex hull decomposition method, it can be used even when
features intersect and, unlike the convex hull decomposition method, it can recognize features
even when a quadric surface is included. However, the cell-based decomposition method
has the disadvantage of high time complexity when evaluating complex shapes.
The hint-based method assumes that random geometry or topology traces are left in the B-rep
model. Therefore, the hint-based method recognizes features through geometric reasoning from
the random geometry or topology trace, i.e., hints, instead of finding the complete pattern of
features. The hint-based method has the limitations that it cannot recognize rules that have not
been predefined, and individual recognition rules need to be defined for each feature.
Vandenbrande and Requicha [11] developed an algorithm based on the volume intersection
function that searches for hints from surfaces of predefined slots, holes, and pockets after
decomposing the total volume into volumes that satisfy strict manufacturing conditions. Regli
[12] developed an algorithm that explores hints using edge and vertex information rather than
faces from a model. Han and Requicha [13] proposed a recognition algorithm for recognizing
machining features such as slots and pockets. They developed the incremental feature finder
that expanded the object-oriented feature finder (OOFF) of Vandenbrande and
Requicha [11]. Li et al. [14] proposed a hint-based approach to feature recognition for reusable
shapes. This approach obtains generalized properties of the shape for generic feature
recognition by using the shape variations or hints emerging during the modeling operations on
vertices, edges, and faces. Then, generic and interacting features are recognized using
generalized feature properties. Verma and Rajotia [15] proposed a hint-based machining
feature recognition system for 2.5D parts with arbitrary interacting features. This system used
various algorithms to create hints for hole, linear slot, circular slot, floor-based pocket, and
floorless pockets. Ranjan et al. [16] proposed a method to obtain the contour of a 2.5D
machining part by projecting virtual rays onto a virtual surface. They recognized features using
the face and volume information obtained from analyzing the boundaries and lengths of the
rays. However, only orthogonal features such as planar and cylindrical surfaces are considered,
and features such as counterbore holes and countersink holes are not.
The similarity-based method recognizes features by measuring the similarity between two
shapes of a random feature S1 and a predefined feature S2. Hong et al. [17] first compared the
overall shapes of machine parts and their detailed shapes by selecting
suitable parts. Furthermore, they proposed simplifying parts through multi-resolution modeling,
which adds or removes some parts. However, this method has the disadvantage that the overall
shape cannot be compared accurately unless the parts can be appropriately simplified. Ohbuchi
and Furuya [18] and Liu et al. [19] proposed a method comparing the similarity of shapes in
images generated by a 3D model based on multi-view. Sánchez-Cruz and Bribiesca [20]
proposed a method that measures the similarity by object transformation of a 3D model into a
voxel form. However, it has the disadvantage that it is difficult to extract a series of features
and essential elements for irregular objects such as motor vehicle parts. Yeo et al. [1] defined
base faces for machining features and corresponding descriptors. They recognized features by
measuring the similarity between descriptors of the faces comprising the input 3D CAD model
of the B-rep form and predefined descriptors. Zehtaban et al. [21] proposed a framework to
search for models similar to input CAD models. The framework uses the similarity retrieval
module based on the characteristics obtained by the Opitz coding system. The Opitz coding
system is a part classification method used in computer-aided manufacturing (CAM) that
encodes information such as geometry, topology, dimensions, material, bore, and forming.
The hybrid method recognizes features using a combination of various feature recognition
methods. Sunil et al. [22] proposed a hybrid (graph-based and rule-based) method to recognize
interacting machining features from the B-rep model. In addition, they proposed a heuristic-
hint based graph extraction method to easily recognize the n-side interacting feature without
identifying the virtual links. Guo et al. [23] proposed another hybrid (graph-based and rule-
based) method and weighted attribute adjacency matrix (WAAM) model to represent the data
structure of the B-rep model.
Recently, 3D model classification and segmentation research using deep learning has been
conducted to improve the shortcomings of conventional algorithms. Jian et al. [24] suggested
an improved novel bat algorithm (NBA) combined with a graph-based method to utilize the
backpropagation algorithm to supplement an existing neural network with a long training time.
The improved NBA could extract composite features comprising similar features, which is
impossible with the conventional AAG method. Zhang et al. [25] proposed a new network
called PointwiseNet that combines low-level geometrics with high-level semantic information.
PointwiseNet has the advantage that it is robust to input noises and has fewer end-to-end
parameters. Lee et al. [26] suggested a 3D encoder-decoder network to regenerate 3D voxels,
including machining features. Zhang et al. [27] presented a new framework that learns the
functions of machining features using a 3D convolution neural network called FeatureNet, and
it recognized 24 machining features accurately. Shi et al. [28] proposed a deep learning
framework for feature recognition based on multiple sectional view (MSV) expressions called
MSVNet. Peddireddy et al. [29] automatically synthesized feature data and synthetic
mechanical parts to train a neural network. Furthermore, they proposed a method of identifying
the machining process by combining a 3D convolution neural network and transfer learning.
However, voxel-based representations tend to lose information about fine parts of the model
and have the disadvantage of low accuracy for these parts. Yeo et al. [30] proposed a method of
recognizing features by constructing a deep neural network with the descriptors extracted from
machining features as input. The deep learning-based method has higher recognition speed and
performance than the algorithm-based method, but it has the disadvantage that it requires a
sufficient amount of training data beforehand.
Kim et al. [31] proposed a deep learning-based system to retrieve piping component catalogs.
This system recognizes piping components using a multi-view convolutional neural
network (MVCNN) and PointNet after splitting the point clouds. MVCNN represents a 3D
object with multiple 2D view images, and PointNet classifies and segments the point cloud
data. Colligan et al. [32] proposed a hierarchical B-rep graph to encode B-rep data. The
hierarchical B-rep graph can represent the geometry and topology of the B-rep model and
allows the B-rep model to be used as input to neural networks. In addition, they presented
MFCAD++ [33], which includes non-planar machining features and more complex CAD models
than the previous MFCAD. Shi et al. [34] proposed SSDNet, which conducts feature
segmentation and recognition using an object detection algorithm named single shot multibox
detector (SSD). SSDNet takes a view image as input and predicts the types of all features and
the 3D location in this view direction. Then the 3D bounding boxes achieved from different
view directions are combined to form the result. Table 1 compares the proposed method to
previous studies.
Traditional algorithm-based machining feature recognition research targets B-rep models
because most modern 3D CAD systems use B-rep models to represent shapes. However,
most of the recently reported artificial neural network-based studies use voxel models because
voxel models have a simple data structure and are easy to process with artificial neural
networks. Recently, a few studies like [32] have tried to use B-rep models for machining
feature recognition.
Table 1. Comparison of the proposed method to previous studies

Graph-based: Gavankar and Henderson [4]
- Characteristics: failure to recognize blind holes and pockets; support of polyhedral solids.
- Comparison to our method: consideration of various holes such as counterbore and countersink holes; support of polyhedral and curved solids.

Volume decomposition: Sakurai and Dave [9]; Woo [10]
- Characteristics: surfaces limited to planar and cylindrical surfaces; high time complexity when dealing with complex shapes.
- Comparison to our method: dealing with various surfaces such as planar, cylindrical, and conical surfaces; fast recognition of features by the use of descriptors.

Hint-based: Li et al. [14]; Verma and Rajotia [15]; Ranjan et al. [16]
- Characteristics: recognition of features after decomposing an original shape; failure to recognize fillets and chamfers; surfaces limited to planar and cylindrical surfaces; exclusion of counterbore and countersink holes.
- Comparison to our method: recognition of features without decomposition; recognition of fillets and chamfers; consideration of countersink and counterbore holes.

Similarity-based: Sánchez-Cruz and Bribiesca [20]; Zehtaban et al. [21]
- Characteristics: difficulty extracting characteristics and base elements from irregular features; low accuracy and recall.
- Comparison to our method: possible to extract descriptors of each face in irregular features; high accuracy and recall.

Hybrid: Sunil et al. [22]; Guo et al. [23]
- Characteristics: non-consideration of the island as a machining feature; lack of test cases with complex and interacting features.
- Comparison to our method: consideration of the island as a machining feature; use of test cases having complex and interacting features.

Deep learning-based: Peddireddy et al. [29]; Colligan et al. [32]; Shi et al. [34]
- Characteristics: unable to process features that are small with respect to the 3D model; difficulty in deciding the view direction when specific direction information is not available; dealing with only one machining feature on each face of a 3D model.
- Comparison to our method: recognition regardless of feature size with respect to the 3D model; recognition of machining features without additional direction information; dealing with multiple machining features on each face of a 3D model.
3. Descriptors for Machining Feature Recognition
3.1. Target Machining Features for Recognition
The following two assumptions were made to define the target machining features. First, the
machining types were set to milling, drilling, and turning. Second, the machining method was
set to 2.5D. Under these two assumptions, 16 feature types were determined, as shown in Table
2: five features related to holes, two features related to slots, three features related to pockets,
two features related to islands, two features related to fillets, and two features related to
chamfers.
In this study, the shape of the base face of a machining feature is assumed to be a planar,
cylindrical, conical, or toroidal surface. This study does not deal with the recognition of
machining features whose base face is a free-form surface requiring machining beyond 2.5D.
Here, the characteristics of machining features include machining type, machining shape, and
machining parameters. In the case of holes, the base face varies depending on the combination
of two or more different rotational shapes. For simple holes and taper holes comprising a single
rotational shape, the hole's cylindrical face or conical face is selected as the base face. For
counterbore holes, countersink holes, and counterdrilled holes comprising two or more
different rotational shapes, a face other than the cylindrical face is selected as the base face. In
the case of slots and pockets, the bottom face or side face is selected as the base face, depending
on whether the bottom face exists. In the case of islands, the bottom face is selected as the base
face. In the case of fillets and chamfers, the cylindrical face or planar face is selected as the
base face.
The face information obtained from the B-rep model
comprises the face type (e.g., planar face, cylindrical face, toroidal face, etc.), normal vector,
and loop information (e.g., inner loop, outer loop). The edge information comprises the line
type (e.g., linear type, curved type) and length, and the vertex contains the coordinate
information. In addition, the relationships with adjacent faces can be identified. The
relationship information comprises the angle between the target face and adjacent face,
convexity, and continuity.
The descriptor is a data structure that expresses the information about the target face and the
information about relationships to adjacent topology elements. As shown in Fig. 2, the
information items expressed by the descriptor comprise the target face, outer loop, inner loop,
and auxiliary information. The target face information comprises the face type, curvature, face-
machining, fillet-machining, and chamfer-machining. The outer loop and inner loop
information comprise the convexity, continuity, parallel axis, and angle between the base face
and adjacent face (acute angle, obtuse angle, and right angle). The auxiliary information
comprises parallel information between adjacent faces, coaxial matching information, and face
interference information.
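For illustration, the descriptor can be organized as a small data structure. The following Python sketch is a minimal illustration of this organization only; the class and field names are hypothetical and not those of the actual implementation.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class LoopInfo:
    # Each loop item maps a "face type | convexity" key to a face count,
    # e.g., {"CYLI|CONCAVE": 2} (see Eq. (1)).
    convexity: Dict[str, int] = field(default_factory=dict)
    continuity: Dict[str, int] = field(default_factory=dict)
    parallel: Dict[str, int] = field(default_factory=dict)
    perpendicular: Dict[str, int] = field(default_factory=dict)
    acute: Dict[str, int] = field(default_factory=dict)
    obtuse: Dict[str, int] = field(default_factory=dict)

@dataclass
class Descriptor:
    # Target face information
    face_type: str = "PLAN"           # e.g., PLAN, CYLI, CONE, TORU
    curvature: str = "FLAT"
    face_machining: str = "LONGER"    # LONGER or SHORTER
    fillet_machining: str = "SHORTER"
    chamfer_machining: str = "SHORTER"
    # Loop information
    outer_loop: LoopInfo = field(default_factory=LoopInfo)
    inner_loop: LoopInfo = field(default_factory=LoopInfo)
    # Auxiliary information
    parallel: bool = False
    coaxial: bool = False
    interference: bool = False
```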
The B-rep model can be represented with different geometric elements even if it has the same
feature [35]. In particular, a cylindrical shape is represented as either one cylindrical face or
two half-cylindrical faces, depending on the 3D CAD system. Therefore, when defining
descriptors, we consider whether a descriptor can cover the different representations of the
same shape. If a single descriptor cannot, we define multiple descriptors to fit the different
feature representations. In this study, multiple descriptors are defined for machining features
whose base face is cylindrical (e.g., the descriptors of the simple hole). Also, if the base face
can be of various types, as for fillets or chamfers, multiple descriptors are defined.
The loop information of the descriptor includes outer loop and inner loop information. The
loop information comprises convexity item, continuity item, parallel item, acute angle item,
right angle item, and obtuse angle item. The expression format of each item is as follows:

$F_t \,|\, C : N \qquad (1)$

This is interpreted as "A target face has $N$ adjacent faces of face type $F_t$ and convexity $C$ in the
loop." The convexity item refers to the convexity between the target face and the adjacent face,
which can be calculated as shown in Fig. 3. If $F_a$ and $F_b$ are the target face and adjacent face,
respectively, the direction vector $\vec{d}_r$ is obtained by calculating the cross product of the
normal vector of $F_a$ ($\vec{n}_a$) and the normal vector of $F_b$ ($\vec{n}_b$). Then, the dot
product of $\vec{d}_r$ and $\vec{d}_c$ (the direction vector of the coedge of $F_a$ that is in
contact with $F_b$) is calculated. If the calculated value is positive, the edge is convex; if it is
negative, the edge is concave. After calculating the convexity, the convexity item is expressed
by adding the face type and the number of faces, as shown in Eq. (1). A continuity item implies
continuity between the target face and the adjacent face. Here, continuity is distinguished
between C0 and higher-order continuity, and the continuity item is expressed only for adjacent
faces whose continuity is not C0. The parallel axis item indicates whether the base vector
$\vec{v}_1$ of the target face is parallel to the base vector $\vec{v}_2$ of the adjacent face. The
base vector is the axial vector if the target face is a rotational face, and the normal vector at the
center of the face otherwise. The acute, perpendicular, and obtuse items refer to the angles
formed by the base vector $\vec{v}_1$ of the target face and the base vector $\vec{v}_2$ of the
adjacent face. These items are expressed in the same format as Eq. (1).
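The convexity test follows directly from the vector definitions above. Below is a minimal NumPy sketch under those definitions; the function name and the unit-vector convention are our own assumptions.

```python
import numpy as np

def convexity(n_a: np.ndarray, n_b: np.ndarray, d_c: np.ndarray) -> str:
    """Classify the edge between faces Fa and Fb as CONVEX or CONCAVE.

    n_a, n_b : unit normal vectors of the target face Fa and adjacent face Fb
    d_c      : unit direction vector of the coedge of Fa in contact with Fb
    """
    d_r = np.cross(n_a, n_b)    # direction vector from the cross product
    sign = np.dot(d_c, d_r)     # project onto the coedge direction
    return "CONVEX" if sign > 0 else "CONCAVE"

# Example: outer corner of a block (convex edge under this coedge orientation)
print(convexity(np.array([0.0, 0.0, 1.0]),
                np.array([1.0, 0.0, 0.0]),
                np.array([0.0, 1.0, 0.0])))   # -> CONVEX
```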
Auxiliary information comprises the parallel item, coaxial item, and interference item. The
parallel item indicates whether the adjacent faces in contact with the outer loop of the target
face are parallel to each other. If there is a pair of adjacent faces parallel to each other, it is
expressed as 'True'; otherwise, it is expressed as 'False.' The coaxial item indicates whether
the adjacent face $F_1$ in contact with the outer loop of the target face and the adjacent face
$F_2$ in contact with the inner loop are rotational faces with the same axis. If this is the case,
it is expressed as 'True,' and otherwise as 'False.' The interference item indicates whether there
is another face in the normal direction of the target face. This information is obtained by
investigating whether a ray projected in the normal direction intersects with another face. If
they intersect, it is expressed as 'True,' and otherwise as 'False.' The above descriptor items
are summarized in Table 3.
Table 3. Descriptor items

Target face
- $D_{f\_facetype}$: type of the base face.
- $D_{f\_curvature}$: curvature of the base face.
- $D_{f\_facemachining}$ (LONGER or SHORTER): level of the width between two parallel faces adjacent to the base face in the outer loop; used for distinguishing slots from other machining features.
- $D_{f\_filletmachining}$ (LONGER or SHORTER): level of the width of the base face; used for distinguishing fillets from other machining features.
- $D_{f\_chamfermachining}$ (LONGER or SHORTER): level of the width of the base face; used for distinguishing chamfers from other machining features.

Outer loop
- $D_{ol\_convexity}$ (face type | convexity : count): convexity between the base face and adjacent faces in the outer loop.
- $D_{ol\_continuity}$ (face type | convexity : count): continuity higher than C0 between the base face and adjacent faces in the outer loop.
- $D_{ol\_parallel}$ (face type | convexity : count): parallelism between the base vector ($\vec{v}_1$) of the base face and the base vector ($\vec{v}_2$) of adjacent faces in the outer loop.
- $D_{ol\_perpendicular}$ (face type | convexity : count): perpendicularity between the base vector ($\vec{v}_1$) of the base face and the base vector ($\vec{v}_2$) of adjacent faces in the outer loop.
- $D_{ol\_acute}$ (face type | convexity : count): acute angle between the base vector ($\vec{v}_1$) of the base face and the base vector ($\vec{v}_2$) of adjacent faces in the outer loop.
- $D_{ol\_obtuse}$ (face type | convexity : count): obtuse angle between the base vector ($\vec{v}_1$) of the base face and the base vector ($\vec{v}_2$) of adjacent faces in the outer loop.

Inner loops
- $D_{il\_convexity}$ (face type | convexity : count): convexity between the base face and adjacent faces in the inner loops.
- $D_{il\_continuity}$ (face type | convexity : count): continuity higher than C0 between the base face and adjacent faces in the inner loops.
- $D_{il\_parallel}$ (face type | convexity : count): parallelism between the base vector ($\vec{v}_1$) of the base face and the base vector ($\vec{v}_2$) of adjacent faces in the inner loops.
- $D_{il\_perpendicular}$ (face type | convexity : count): perpendicularity between the base vector ($\vec{v}_1$) of the base face and the base vector ($\vec{v}_2$) of adjacent faces in the inner loops.
- $D_{il\_acute}$ (face type | convexity : count): acute angle between the base vector ($\vec{v}_1$) of the base face and the base vector ($\vec{v}_2$) of adjacent faces in the inner loops.
- $D_{il\_obtuse}$ (face type | convexity : count): obtuse angle between the base vector ($\vec{v}_1$) of the base face and the base vector ($\vec{v}_2$) of adjacent faces in the inner loops.

Auxiliary
- $D_{ax\_parallel}$ (True or False): parallelism between adjacent faces in the outer loop.
- $D_{ax\_coaxial}$ (True or False): coaxiality between one adjacent face in the outer loop and one adjacent face in the inner loop.
- $D_{ax\_interference}$ (True or False): interference of the base face caused by other faces of the model.
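The auxiliary items reduce to elementary vector tests. The sketch below is our own simplification (a real implementation would query the B-rep kernel, e.g., through OpenCASCADE); it checks the parallel item over a set of outer-loop face normals and the coaxial item for two rotational faces, given an axis direction and a point on each axis.

```python
import numpy as np
from itertools import combinations

def aux_parallel(outer_normals, tol=1e-6):
    """True if any pair of faces adjacent to the outer loop are parallel."""
    for n1, n2 in combinations(outer_normals, 2):
        if np.linalg.norm(np.cross(n1, n2)) < tol:   # parallel or anti-parallel
            return True
    return False

def aux_coaxial(axis1, point1, axis2, point2, tol=1e-6):
    """True if two rotational faces share the same axis line."""
    axis1, axis2 = np.asarray(axis1, float), np.asarray(axis2, float)
    if np.linalg.norm(np.cross(axis1, axis2)) > tol:  # axis directions differ
        return False
    offset = np.asarray(point2, float) - np.asarray(point1, float)
    return np.linalg.norm(np.cross(offset, axis1)) < tol  # points on one line

print(aux_parallel([np.array([0., 0., 1.]), np.array([0., 0., -1.])]))  # True
```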
Fig. 4. Possible ranges of value for each descriptor item
Considering all the above three cases leads to describing the range constraints, i.e., minimum,
maximum, and equal constraints, for each item of the descriptors for the feature base face. To
this end, the descriptors defined in Table 3 are expanded, as shown in Fig. 5, to reflect the three
range constraints for the feature base face. Here, the minimum and maximum constraints refer
to the lower and upper bounds that each item of the descriptor can have, respectively.
Furthermore, the equal constraint refers to the value that each descriptor item must have. The
descriptor items for the target face and auxiliary information have values corresponding to the
equal constraint, whereas the descriptor items for the outer and inner loops have values
corresponding to the minimum and maximum constraints. The method of recognizing
machining features by applying range constraints to the descriptor of the feature base face is
explained in Section 4.
Fig. 5. Extension of the descriptor to consider range constraints for the base face of a
machining feature
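In code, the extension in Fig. 5 amounts to storing up to three variants of every descriptor item. A minimal sketch follows, continuing the hypothetical naming of the earlier descriptor sketch; None marks an unspecified constraint, which the similarity rules of Section 4.2 treat as automatically satisfied.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class RangeConstraint:
    # Any of the three constraints may be left unspecified (None).
    minimum: Optional[Any] = None   # lower bound for the item value
    maximum: Optional[Any] = None   # upper bound for the item value
    equal: Optional[Any] = None     # exact value the item must have

# Base-face descriptor of a counterbore hole (abbreviated, cf. Fig. 8(a)):
counterbore_base = {
    "f_facetype": RangeConstraint(equal="PLAN"),
    "ol_convexity": RangeConstraint(minimum={"ANY|CONCAVE": 1}),
    "ax_coaxial": RangeConstraint(equal=True),
}
```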
4. Feature Recognition Method

4.1. Overall Process of Feature Recognition

The machining feature recognition process using descriptors is shown in Fig. 6. First, the input
B-rep model is analyzed to obtain the geometry and topology information of the faces
comprising the model. Then, one of the subtypes of the machining features predefined in Fig.
2 is selected, and features are recognized using the descriptors of the base face of the
corresponding features. One of the faces comprising the input model is selected, and the
descriptor is generated from the selected face. The generated descriptor follows the format in
Table 3. According to the user input information on machining conditions, LONGER is written
in the descriptor items $D_{f\_facemachining}$, $D_{f\_filletmachining}$, and
$D_{f\_chamfermachining}$ if the input value is larger than the width of the selected face (or
the distance between a pair of adjacent parallel faces); otherwise, SHORTER is written. When
descriptor generation from the
selected face is completed, the similarity between the selected descriptor of the target face and
the feature descriptor of the base face is calculated. If the similarity is greater than or equal to
the threshold, the selected face is determined as the base face of the corresponding feature. The
details of the similarity calculation between two descriptors are described in Section 4.2.
Recognition for a specific feature is terminated when the similarity calculation for every face
is completed. Then, the other machining features defined in the list are recognized using the
same method. Finally, the machining feature recognition result is output.
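The flow of Fig. 6 reduces to two nested loops over feature types and faces. The sketch below is an outline only; extract_descriptor and similarity are placeholder stubs standing in for the Table 3 extraction and the Section 4.2 similarity calculation, not the authors' implementation.

```python
def extract_descriptor(face):
    # Stub: in practice, build the Table 3 descriptor by querying the
    # B-rep kernel for face, loop, and adjacency information.
    return face

def similarity(base_descriptor, target_descriptor):
    # Stub: the range-constraint similarity R of Section 4.2;
    # here an exact-match placeholder.
    return 1.0 if base_descriptor == target_descriptor else 0.0

def recognize_features(model_faces, feature_library, threshold=1.0):
    """Return {feature_name: [faces recognized as that feature's base face]}."""
    results = {name: [] for name in feature_library}
    for name, base_descriptor in feature_library.items():  # each feature subtype
        for face in model_faces:                           # each face of the model
            target_descriptor = extract_descriptor(face)
            if similarity(base_descriptor, target_descriptor) >= threshold:
                results[name].append(face)                 # face is a base face
    return results
```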
4.2. Feature Recognition using Descriptors based on Range Constraints
Fig. 7 shows the procedure for comparing the similarity of each descriptor. First, a descriptor
item to compare is selected, and its value is extracted from the target face. Then, the similarity
values ($A_k^{min}$, $A_k^{max}$, $A_k^{equal}$) for the three range constraints (minimum,
maximum, and equal) are calculated. When the similarity values have been calculated for each
range constraint, they are multiplied to obtain the similarity value $S_k$ for the descriptor item.
This process is repeated for all descriptor items. When the similarity value of every descriptor
item has been calculated, each item's similarity is multiplied by its weight, and the results are
summed to obtain the similarity value of the descriptor ($R$). Finally, the similarity value of
the descriptor is compared with the threshold value to determine whether the target face
corresponds to the descriptor of a specific feature.
To calculate the similarity of two faces ($F^i$, $F^j$), the similarities according to the
minimum, maximum, and equal constraints ($A_k^{min}$, $A_k^{max}$, $A_k^{equal}$) are
first calculated for each descriptor item, as shown in Table 4. Then, the similarity $S_k$ for
each descriptor item is calculated by multiplying them together:

$S_k = A_k^{min} \times A_k^{max} \times A_k^{equal} \qquad (2)$
Table 4. Similarity comparison considering range constraints for each descriptor item $D_k$

Minimum: If $D_k^j$ is greater than or equal to $D_k^{i,min}$, the result is one; if not, the result is zero. If $D_k^{i,min}$ is not specified, the result is one.

$A_k^{min} = \begin{cases} 1, & D_k^{i,min} \le D_k^j \\ 0, & D_k^{i,min} > D_k^j \\ 1, & D_k^{i,min}\ \text{is not specified} \end{cases}$

Maximum: If $D_k^j$ is less than or equal to $D_k^{i,max}$, the result is one; if not, the result is zero. If $D_k^{i,max}$ is not specified, the result is one.

$A_k^{max} = \begin{cases} 1, & D_k^j \le D_k^{i,max} \\ 0, & D_k^j > D_k^{i,max} \\ 1, & D_k^{i,max}\ \text{is not specified} \end{cases}$

Equal: If $D_k^j$ is the same as $D_k^{i,equal}$, the result is one; if not, the result is zero. If $D_k^{i,equal}$ is not specified, the result is one.

$A_k^{equal} = \begin{cases} 1, & D_k^{i,equal} = D_k^j \\ 0, & D_k^{i,equal} \ne D_k^j \\ 1, & D_k^{i,equal}\ \text{is not specified} \end{cases}$
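Table 4 translates directly into code. The sketch below reuses the hypothetical RangeConstraint class from the previous section and assumes the item values support the ordering test of Eqs. (5) and (6), shown separately below.

```python
def item_similarity(constraint, value, leq=lambda a, b: a <= b):
    """S_k = A_min * A_max * A_equal for one descriptor item (Table 4, Eq. (2)).

    constraint : RangeConstraint of the feature's base face for this item
    value      : the item value extracted from the target face
    leq        : ordering test implementing the magnitude relationship
    """
    a_min = 1.0 if constraint.minimum is None or leq(constraint.minimum, value) else 0.0
    a_max = 1.0 if constraint.maximum is None or leq(value, constraint.maximum) else 0.0
    a_equal = 1.0 if constraint.equal is None or constraint.equal == value else 0.0
    return a_min * a_max * a_equal
```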
When the similarity $S_k$ for the descriptor item $D_k$ has been calculated, the final similarity
$R$ for the two faces ($F^i$, $F^j$) is calculated as in Eq. (3). The weight of each descriptor
item is determined to calculate the final similarity $R$. The sum of all weights is 1, as shown
in Eq. (4). By default, all weights are equal.
$R = \sum_{k} S_k \, W_k, \quad W_k \in I,\; S_k \in J \qquad (3)$

$\sum_{W_k \in I} W_k = 1 \qquad (4)$

where

$I = \{ W_{f\_facetype}, W_{f\_curvature}, W_{f\_facemachining}, W_{f\_filletmachining}, W_{f\_chamfermachining}, W_{ol\_convexity}, W_{ol\_continuity}, W_{ol\_parallel}, W_{ol\_perpendicular}, W_{ol\_acute}, W_{ol\_obtuse}, W_{il\_convexity}, W_{il\_continuity}, W_{il\_parallel}, W_{il\_perpendicular}, W_{il\_acute}, W_{il\_obtuse}, W_{ax\_parallel}, W_{ax\_coaxial}, W_{ax\_interference} \}$

$J = \{ S_{f\_facetype}, S_{f\_curvature}, S_{f\_facemachining}, S_{f\_filletmachining}, S_{f\_chamfermachining}, S_{ol\_convexity}, S_{ol\_continuity}, S_{ol\_parallel}, S_{ol\_perpendicular}, S_{ol\_acute}, S_{ol\_obtuse}, S_{il\_convexity}, S_{il\_continuity}, S_{il\_parallel}, S_{il\_perpendicular}, S_{il\_acute}, S_{il\_obtuse}, S_{ax\_parallel}, S_{ax\_coaxial}, S_{ax\_interference} \}$
It is necessary to determine the magnitude relationship between descriptor item values when
calculating the similarity for each descriptor item. When the feature base face $F^i$ and the
target face $F^j$ are compared, their descriptor item values are expressed as
$F_t^i \,|\, C^i : N^i$ and $F_t^j \,|\, C^j : N^j$, respectively. The magnitude relationship of
descriptor item values is then calculated as follows. First, when the face type and convexity are the same, the
magnitude relationship between two descriptor item values is determined by the number of
faces included in each descriptor item value. In other words, when the face type ($F_t$) and
convexity ($C$) of two descriptor item values are the same, the numbers of faces $N^i$ and
$N^j$ are compared, as shown in Eq. (5). Second, when the face type ($F_t^i$) of the descriptor
item value of the base face is ANY, the number of faces $N^j$ of the target face is taken as the
total number of faces ($N_k^j$) over all descriptor item values of the target face having the
same convexity ($C_k^j$) as the convexity ($C^i$) of the corresponding descriptor item value
of the base face, as shown in Eq. (6).
$F_t^i \,|\, C^i : N^i \ \text{is less than} \ F_t^j \,|\, C^j : N^j \quad \text{if } N^i < N^j$
$F_t^i \,|\, C^i : N^i \ \text{is greater than} \ F_t^j \,|\, C^j : N^j \quad \text{if } N^i > N^j \qquad (5)$
$F_t^i \,|\, C^i : N^i \ \text{equals} \ F_t^j \,|\, C^j : N^j \quad \text{if } N^i = N^j$

where $F_t^i = F_t^j$ and $C^i = C^j$ for the two descriptor item values $F_t^i \,|\, C^i : N^i$ and $F_t^j \,|\, C^j : N^j$.
$N^j = \sum_{k} N_k^j \quad \text{for all face types } k \qquad (6)$

where $F_t^i = \text{ANY}$ and $C_k^j = C^i$ for the two descriptor item values $F_t^i \,|\, C^i : N^i$ and $F_k^j \,|\, C_k^j : N_k^j$.
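One way to realize the comparison of Eqs. (5) and (6) for a minimum constraint is sketched below; the dictionary encoding of item values and the function name are our own assumptions.

```python
def item_value_leq(base_item: dict, target_item: dict) -> bool:
    """Compare descriptor item values of the form {"Ft|C": N} (Eqs. (5)-(6)).

    Returns True if every base entry Ft|C:N is <= the matching target count.
    A base face type of ANY matches the summed counts of all target entries
    with the same convexity (Eq. (6)).
    """
    for key, n_base in base_item.items():
        face_type, convexity = key.split("|")
        if face_type == "ANY":
            # Eq. (6): sum target counts over all face types with this convexity
            n_target = sum(n for k, n in target_item.items()
                           if k.split("|")[1] == convexity)
        else:
            n_target = target_item.get(key, 0)
        if n_base > n_target:          # Eq. (5): compare counts N^i vs N^j
            return False
    return True

# Counterbore example (Fig. 8): ANY|CONCAVE:1 vs {CYLI|CONCAVE:2, PLAN|CONVEX:1}
print(item_value_leq({"ANY|CONCAVE": 1},
                     {"CYLI|CONCAVE": 2, "PLAN|CONVEX": 1}))  # True
```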
If the final similarity $R$ is greater than or equal to the predefined threshold, the target face
$F^j$ is considered to be the base face of the corresponding machining feature. If a specific
descriptor item has no value in the descriptor of the base face of a machining feature, the item
is excluded from the similarity calculation. In this case, Eq. (4) is adjusted so that the sum of
the weights of the remaining items becomes 1.
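Putting Eqs. (2) to (4) together with the exclusion rule above gives a short routine. It reuses the hypothetical item_similarity sketch and assumes the target supplies a value for every active item.

```python
def descriptor_similarity(base, target):
    """Final similarity R (Eq. (3)) with equal weights renormalized to sum to 1.

    base   : {item_name: RangeConstraint} for the feature's base face
    target : {item_name: value} extracted from the target face
    Items with no constraint at all are excluded, and Eq. (4) is satisfied by
    renormalizing the equal default weights over the remaining items.
    """
    active = [k for k, c in base.items()
              if not (c.minimum is None and c.maximum is None and c.equal is None)]
    if not active:
        return 0.0
    weight = 1.0 / len(active)   # equal default weights over active items
    return sum(weight * item_similarity(base[k], target[k]) for k in active)
```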
Fig. 8 shows the process of comparing the descriptor $D^j$ of a target face with the descriptor
$D^i$ of the base face of the counterbore hole to identify the base face of a counterbore hole
included in a 3D CAD model. The descriptor of the counterbore hole has only 3, 1, and 3
descriptor items for the minimum, maximum, and equal constraints, respectively. Therefore,
only seven descriptor items are used in the similarity calculation, as explained above.
In the descriptor items of the face information, $D_{f\_facetype}^{i,equal}$ of the feature's base
face $F^i$ is PLAN in Fig. 8(a), and $D_{f\_facetype}^{j}$ of the target face $F^j$ is PLAN in
Fig. 8(b). Because the two values are the same, the similarity $A_{f\_facetype}^{equal}$
becomes 1, according to Table 4. No value is given for $D_{f\_facetype}^{i,min}$ and
$D_{f\_facetype}^{i,max}$; therefore, $A_{f\_facetype}^{min}$ and $A_{f\_facetype}^{max}$
also become 1, according to Table 4. Finally, the similarity $S_{f\_facetype}$ of the descriptor
item $D_{f\_facetype}$ becomes 1, according to Eq. (2). Calculating the descriptor item
$D_{f\_curvature}$ in the same manner, the similarity $S_{f\_curvature}$ becomes 1.
In the descriptor items of the loop information, $D_{ol\_convexity}^{i,min}$ of the base face
$F^i$ is ANY | CONCAVE : 1 in Fig. 8(a), and $D_{ol\_convexity}^{j}$ of the target face $F^j$
is CYLI | CONCAVE : 2 and PLAN | CONVEX : 1 in Fig. 8(b). According to the comparison
method for the magnitude relationship of descriptor item values described above,
$D_{ol\_convexity}^{i,min} < D_{ol\_convexity}^{j}$; therefore, $A_{ol\_convexity}^{min}$,
$A_{ol\_convexity}^{max}$, and $A_{ol\_convexity}^{equal}$ are 1, according to Table 4.
Finally, the similarity $S_{ol\_convexity}$ of the descriptor item $D_{ol\_convexity}$ becomes
1, according to Eq. (2). Calculating the descriptor items $D_{il\_convexity}$ and
$D_{il\_perpendicular}$ in the same manner, the similarities $S_{il\_convexity}$ and
$S_{il\_perpendicular}$ become 1.
Finally, in the descriptor items of the auxiliary information, $D_{ax\_coaxial}^{i,equal}$ of the
feature base face $F^i$ is TRUE in Fig. 8(a), and $D_{ax\_coaxial}^{j}$ of the target face $F^j$
is TRUE in Fig. 8(b). Since the two values are the same, the similarity
$A_{ax\_coaxial}^{equal}$ becomes 1, according to Table 4. No value is assigned to
$D_{ax\_coaxial}^{i,min}$ and $D_{ax\_coaxial}^{i,max}$; therefore, $A_{ax\_coaxial}^{min}$
and $A_{ax\_coaxial}^{max}$ are 1, and the similarity $S_{ax\_coaxial}$ becomes 1.
Fig. 8. Similarity comparison between a target face and the descriptor of the
counterbore hole's base face
Fig. 9. Composite feature represented by a combination of other features
5. Experiments

5.1. Feature Recognition Tests for Test Cases 1 and 2

When the recognition of machining features was tested using the implemented prototype
system, the weight values for the similarity comparison were set equally. In addition, the
threshold for the similarity $R$ used to determine machining features was set to 1.
The test cases used in the machining feature recognition test are summarized in Fig. 11. Test
case 1 was created by referring to the 3D CAD model used in the DFMPro solution of HCL
Technologies [37]. Furthermore, test case 2 was created by referring to the 3D CAD models used
in related studies on feature recognition [38-42]. All machining features were successfully
recognized in the two test cases using the proposed descriptor-based recognition method. In
contrast, the previous study failed to recognize specific machining features, i.e., chamfer,
simple hole, closed pocket, and floorless pocket included in models no. 8 and 12 of test
case 1 and models no. 1, 7, 10, and 12 of test case 2.
Fig. 11. Machining feature recognition experiments for two test cases
The proposed method correctly recognized all machining features included in test cases 1 and
2. However, the features included in the two CAD models in Fig. 12 were not correctly
recognized. Fig. 12(a) is a floorless pocket with a steep base face, and Fig. 12(b) is a case where
two base faces were merged into one face due to interference between the two features. We
defined the value of the descriptor item $D_{ax\_interference}$ as 'True' to recognize the
floorless pocket. This means that there should be an intersecting face when a ray is projected
in the direction of the normal vector from the center of the base face. However, in the case of
Fig. 12(a), the base face is steeply inclined, so the projected ray does not intersect any face. As
a result, the feature in Fig. 12(a) was misrecognized as an opened pocket. In Fig.
12(b), the base faces of the two features were merged into one face due to interference between
the simple slot and closed pocket. As a result, the similarity was calculated for the merged face.
In this case, the two features must be recognized separately or as opened pockets, but they were
misrecognized as simple slots.
5.2. Performance Comparison with Hierarchical CADNet

To compare the performance of the method proposed in this study with that of Hierarchical
CADNet, 40 models were selected from the MFCAD++ dataset [33] and defined as test case 3,
as shown in Fig. 13. These models were used to verify the machining feature recognition
performance. The models selected as test case 3 are the first 40 models listed in the test model
list file of the MFCAD++ dataset.
Test case 4 was defined by excluding some models or changing some features of the models in
test cases 1 and 2. Some models in test cases 1 and 2 have a rotational stock. However,
Hierarchical CADNet does not support this type of stock. Therefore, models of this type were
excluded from test cases 1 and 2. Also, test cases 1 and 2 have models that include machining
features that Hierarchical CADNet does not support, such as opened island and counterbore
hole. Therefore, if a 3D model had counterbore holes, they were replaced with simple holes.
In addition, if a 3D model had an opened island, it was excluded from the test case. Test case
4 [43], prepared as explained above, is shown in Fig. 14.
Fig. 14. Machining features in test case 4
The feature classifications of this study and MFCAD++ are different. Accordingly, there are
cases in which the two classifications must be determined differently, even for the same base
face, such as slots and pockets. Therefore, new criteria for recognition success of some
machining features are necessary in two cases: recognizing test case 3 with the method
proposed in this study and recognizing test case 4 with Hierarchical CADNet. Considering
these cases, the machining features of test cases 3 and 4 were reclassified into pocket type, slot
type, O-ring type, hole type, chamfer type, and fillet type. The reclassification results are shown
in Fig. 15.
Fig. 15. Label classification of test cases 3 and 4
After reclassifying the machining features, the criteria for recognition success of the slot/pocket,
fillet, and O-ring types were defined as follows for recognizing test cases 3 and 4 with the
proposed method and Hierarchical CADNet. First, the base faces A and B in Fig. 16(a) are
classified into pocket and slot according to the feature classification method of this study
(whether the width of the feature matches the width of the tool). However, due to the different
classification criteria, both base faces are recognized as slot type by Hierarchical CADNet.
Therefore, it was counted as a success if base face A in Fig. 16(a) was recognized as
'Rectangular through slot,' a slot-type label, when applying test case 4 to Hierarchical CADNet.
Next, the criteria for recognition success of the fillet and chamfer types were defined as follows.
A face classified as fillet type, like base face C in Fig. 16(a), can be included in other features.
Therefore, in the method proposed in this study, the corresponding face is recognized
redundantly as a fillet type and as part of other machining features. However, Hierarchical
CADNet recognizes the face as only one machining feature. Therefore, when applying test
cases 3 and 4 to Hierarchical CADNet, it was counted as a success if the corresponding face
was recognized either as fillet type or as a feature type with a fillet as an adjacent face. Likewise,
when applying test cases 3 and 4 to the proposed method, it was counted as a success if the
corresponding face was recognized simultaneously as fillet type and as a feature type with a
fillet as an adjacent face. The same method is applied when the target machining feature to be
recognized is a chamfer type.
Finally, the criterion for recognition success of the O-ring type was defined as follows. The
machining feature D in Fig. 16(b) is classified as an O-ring in MFCAD++, whereas the
proposed method classifies it as a composite feature of a simple hole and a closed island.
Therefore, when applying test case 3 to the method proposed in this study, it was counted as a
success if the corresponding feature was recognized as a composite feature consisting of a
simple hole and a closed island.
Fig. 16. Faces that are differently classified between the proposed method and
Hierarchical CADNet
Through a performance comparison experiment between the proposed method and Hierarchical
CADNet using test cases 3 and 4, it was determined whether the classification results for the
faces that make up each feature were correct. First, a multi-class confusion matrix was created
by comparing the ground-truth classes with the classes predicted in the recognition process.
Then, Precision, Recall, Accuracy, and F1 Score were calculated based on this confusion
matrix. As shown in Table 5, the proposed method outperforms Hierarchical CADNet in all
evaluation metrics.
Table 5. Recognition performance of the proposed method and Hierarchical CADNet for test cases 3 and 4

Method | Data | Precision | Recall | Accuracy | F1 Score
Our method | Test case 3 | 0.9023 | 0.9676 | 0.9403 | 0.9338
Our method | Test case 4 | 1.0000 | 1.0000 | 1.0000 | 1.0000
Hierarchical CADNet | Test case 3 | 0.8751 | 0.8415 | 0.9301 | 0.8579
Hierarchical CADNet | Test case 4 | 0.6040 | 0.4099 | 0.4321 | 0.4884
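For reference, the four metrics can be computed from the multi-class confusion matrix as follows. Whether the paper macro- or micro-averages over classes is not stated; the sketch below macro-averages precision and recall and derives the F1 score from them.

```python
import numpy as np

def classification_metrics(cm):
    """Precision, Recall, Accuracy, and F1 from a confusion matrix.

    cm[i, j] = number of faces of true class i predicted as class j.
    """
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)                                           # true positives
    precision = np.mean(tp / np.maximum(cm.sum(axis=0), 1.0))  # per predicted class
    recall = np.mean(tp / np.maximum(cm.sum(axis=1), 1.0))     # per true class
    accuracy = tp.sum() / cm.sum()
    f1 = 0.0 if precision + recall == 0 else \
        2.0 * precision * recall / (precision + recall)
    return precision, recall, accuracy, f1
```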
Fig. 17 shows cases of recognition failure when the proposed method was applied to test case
3. In the first case, the width of the chamfer is larger than the width of the pocket, as shown in
Fig. 17(a). We used a descriptor item chamfer-machining to classify the slot, pocket, and
chamfer, meaning that the widths of slots or pockets should be larger than the widths of
chamfers. However, the pocket in Fig. 17(a) could not be recognized correctly because the
width of the chamfer was larger than the width of the pocket. In the next case, as shown in Fig.
17(b), the width of the pocket is smaller than the width of the slot. We used a descriptor item
face-machining to classify the slot and pocket, meaning that the width of the pocket should be
larger than the width of the slot. However, the pocket in Fig. 17(b) could not be recognized
correctly because the width of the pocket was smaller than the width of the slot.
Fig. 17. Failure cases when applying test case 3 to our method
Fig. 18 shows the cases of recognition failure when Hierarchical CADNet was applied to test
case 4. In the first case, a simple hole with two half-cylindrical faces was misrecognized, as
shown in Fig. 18(a). Hierarchical CADNet is unable to recognize holes with two half-cylindrical
faces because all holes in the MFCAD++ dataset consist of only one cylindrical face.
In the next case, one of the fillets, shown in Fig. 18(b), was misrecognized.
Fig. 18. Failure cases when applying test case 4 to Hierarchical CADNet
5.3. Comparison of Responsiveness to Stocks and Composite Features with Hierarchical
CADNet
Additional experiments were performed to compare the responsiveness of the proposed method
and Hierarchical CADNet to changes in the stocks or composite features. These experiments
were conducted as follows.
First, recognition experiments were performed on rotational stocks, as shown in Fig. 19(a). The
proposed method in this study recognized features in the rotational stock as well as in the
cuboid stock. However, Hierarchical CADNet could recognize neither the rotational stock nor
the machining features in it. Through these results, we confirmed that the deep learning-based
method must undergo additional training when the stock shape changes.
Fig. 19. Additional test models used to compare the proposed method and Hierarchical
CADNet
6. Conclusions
In this study, we proposed a machining feature recognition method that compares the similarity
of feature descriptors using the concept of range constraints, including minimum, maximum,
and equal constraints. The minimum and maximum constraints refer to the lower and upper
bounds of the values that each descriptor item can have, respectively. Furthermore, the equal
constraint refers to the value that each descriptor item must have. After implementing the
prototype system, feature recognition tests were conducted for two test cases. The improved
descriptor supports the recognition of 16 types of machining features. The results showed that
the recognition performance improved remarkably; all machining features included in the two
test cases were successfully recognized. In addition, by comparing the two methods, we
confirmed that the recognition performance of the proposed method is higher than that of the
latest artificial neural network. We also confirmed in the experiments that the proposed method
has good responsiveness to changes in the stocks or composite features of 3D models.
In the future, we plan to expand the types of features covered to recognize 3D shapes of
functional parts expressed by the combination of multiple features and general machining
features. Furthermore, the descriptors proposed in this study will be revised and improved to
solve the misrecognition problem described in Fig. 12. In addition, we will conduct studies to
recognize machining features by applying the proposed descriptors to artificial deep neural
networks such as convolutional neural networks and recurrent neural networks.
Ethical Approval
This manuscript has not been published or presented elsewhere in part or in entirety and is not
under consideration by another journal.
Consent to Participate
Not applicable
Consent to Publish
Not applicable
Authors Contributions
Seungeun Lim: Methodology, Data Curation, Software, Writing - Original Draft. Changmo
Yeo: Data Curation, Software, Writing - Original Draft. Fazhi He: Resources, Writing –
Review & Editing. Jinwon Lee: Methodology, Validation, Investigation, Writing - Review &
Editing. Duhwan Mun: Supervision, Conceptualization, Methodology, Writing - Review &
Editing, Funding.
Acknowledgements
This research was supported by the Basic Science Research Program [No. NRF-
2022R1A2C2005879] through the National Research Foundation of Korea (NRF) funded by
the Korean government (MSIT), by the Carbon Reduction Model Linked Digital Engineering
Design Technology Development Program [No. RS-2022-00143813] funded by the Korean
government (MOTIE), and by Institute of Information & communications Technology
Planning & evaluation (IITP) grant funded by the Korea government (MSIT) [No.2022-0-
00969].
Competing Interests
The authors declare no potential conflicts of interest with respect to the research, authorship,
and publication of this article.
References
[1] C. Yeo, S. Cheon, D. Mun, Manufacturability evaluation of parts using descriptor-based
machining feature recognition, International Journal of Computer Integrated Manufacturing,
34 (2021) 1196-1222. https://doi.org/10.1080/0951192X.2021.1963483.
[2] S. Joshi, T.-C. Chang, Graph-based heuristics for recognition of machined features from a
3D solid model, Computer-aided design, 20 (1988) 58-66. https://doi.org/10.1016/0010-
4485(88)90050-4.
[3] S. Chuang, M.R. Henderson, Three-dimensional shape pattern recognition using vertex
classification and vertex-edge graphs, Computer-Aided Design, 22 (1990) 377-387.
https://doi.org/10.1016/0010-4485(90)90088-T.
[4] P. Gavankar, M.R. Henderson, Graph-based extraction of protrusions and depressions from
boundary representations, Computer-Aided Design, 22 (1990) 442-450.
https://doi.org/10.1016/0010-4485(90)90109-P.
[5] B.C. Kim, D. Mun, Stepwise volume decomposition for the modification of B-rep models,
The International Journal of Advanced Manufacturing Technology, 75 (2014) 1393-1403.
https://doi.org/10.1007/s00170-014-6210-z.
[6] K. Tang, T. Woo, Algorithmic aspects of alternating sum of volumes. Part 1: Data structure
and difference operation, Computer-Aided Design, 23 (1991) 357-366.
https://doi.org/10.1016/0010-4485(91)90029-V.
[7] Y.S. Kim, Recognition of form features using convex decomposition, Computer-Aided
Design, 24 (1992) 461-476. https://doi.org/10.1016/0010-4485(92)90027-8.
[8] B.C. Kim, D. Mun, Enhanced volume decomposition minimizing overlapping volumes for
the recognition of design features, Journal of Mechanical Science and Technology, 29 (2015)
5289-5298. https://doi.org/10.1007/s12206-015-1131-9.
[9] H. Sakurai, P. Dave, Volume decomposition and feature recognition, Part II: curved objects,
Computer-Aided Design, 28 (1996) 519-537. https://doi.org/10.1016/0010-4485(95)00067-4.
[10] Y. Woo, Fast cell-based decomposition and applications to solid modeling, Computer-
Aided Design, 35 (2003) 969-977. https://doi.org/10.1016/S0010-4485(02)00144-6.
[11] J.H. Vandenbrande, A.A. Requicha, Spatial reasoning for the automatic recognition of
machinable features in solid models, IEEE Transactions on Pattern Analysis and Machine
Intelligence, 15 (1993) 1269-1285. https://doi.org/10.1109/34.250845.
[12] W.C. Regli III, Geometric algorithms for recognition of features from solid models, PhD
Thesis, University of Maryland, College Park, 1995.
[13] J. Han, A.A. Requicha, Feature recognition from CAD models, IEEE Computer Graphics
and Applications, 18 (1998) 80-94. https://doi.org/10.1109/38.656791.
[14] H. Li, Y. Huang, Y. Sun, L. Chen, Hint-based generic shape feature recognition from three-
dimensional B-rep models, Adv Mech Eng, 7 (2015)
https://doi.org/10.1177/1687814015582082.
[15] A. Verma, S. Rajotia, A hint-based machining feature recognition system for 2.5 D parts,
International journal of production research, 46 (2008) 1515-1537.
https://doi.org/10.1080/00207540600919373.
[16] R. Ranjan, N. Kumar, R.K. Pandey, M. Tiwari, Automatic recognition of machining
features from a solid model using the 2D feature pattern, The International Journal of Advanced
Manufacturing Technology, 26 (2005) 861-869. https://doi.org/10.1007/s00170-003-2059-2.
[17] T. Hong, K. Lee, S. Kim, Similarity comparison of mechanical parts to reuse existing
designs, Computer-Aided Design, 38 (2006) 973-984.
https://doi.org/10.1016/j.cad.2006.05.004.
[18] R. Ohbuchi, T. Furuya, Scale-weighted dense bag of visual features for 3D model retrieval
from a partial view 3D model, in: 2009 IEEE 12th International Conference on Computer
Vision Workshops, ICCV Workshops, IEEE, 2009, pp. 63-70.
https://doi.org/10.1109/ICCVW.2009.5457716.
[19] Y.-J. Liu, X. Luo, A. Joneja, C.-X. Ma, X.-L. Fu, D. Song, User-adaptive sketch-based 3-
D CAD model retrieval, IEEE Transactions on Automation Science and Engineering, 10 (2013)
783-795. https://doi.org/10.1109/TASE.2012.2228481.
[20] H. Sánchez-Cruz, E. Bribiesca, A method of optimum transformation of 3D objects used
as a measure of shape dissimilarity, Image and Vision Computing, 21 (2003) 1027-1036.
https://doi.org/10.1016/S0262-8856(03)00119-7.
[21] L. Zehtaban, O. Elazhary, D. Roller, A framework for similarity recognition of CAD
models, J Comput Des Eng, 3 (2016) 274-285. https://doi.org/10.1016/j.jcde.2016.04.002.
[22] V. Sunil, R. Agarwal, S. Pande, An approach to recognize interacting features from B-Rep
CAD models of prismatic machined parts using a hybrid (graph and rule based) technique,
Comput Ind, 61 (2010) 686-701. https://doi.org/10.1016/j.compind.2010.03.011.
[23] L. Guo, M. Zhou, Y. Lu, T. Yang, F. Yang, A hybrid 3D feature recognition method based
on rule and graph, Int J Comput Integ M, 34 (2021) 257-281.
https://doi.org/10.1080/0951192X.2020.1858507.
[24] C. Jian, M. Li, K. Qiu, M. Zhang, An improved NBA-based STEP design intention feature
recognition, Future Generation Computer Systems, 88 (2018) 357-362.
https://doi.org/10.1016/j.future.2018.05.033.
[25] D. Zhang, F. He, Z. Tu, L. Zou, Y. Chen, Pointwise geometric and semantic learning
network on 3D point clouds, Integrated Computer-Aided Engineering, 27 (2020) 57-75.
https://doi.org/10.3233/ICA-190608.
[26] H. Lee, J. Lee, H. Kim, D. Mun, Dataset and method for deep learning-based
reconstruction of 3D CAD models containing machining features for mechanical parts, Journal
of Computational Design and Engineering, 9 (2022) 114-127.
https://doi.org/10.1093/jcde/qwab072.
[27] Z. Zhang, P. Jaiswal, R. Rai, Featurenet: Machining feature recognition based on 3d
convolution neural network, Computer-Aided Design, 101 (2018) 12-22.
https://doi.org/10.1016/j.cad.2018.03.006.
[28] P. Shi, Q. Qi, Y. Qin, P.J. Scott, X. Jiang, A novel learning-based feature recognition
method using multiple sectional view representation, Journal of Intelligent Manufacturing, 31
(2020) 1291-1309. https://doi.org/10.1007/s10845-020-01533-w.
[29] D. Peddireddy, X. Fu, H. Wang, B.G. Joung, V. Aggarwal, J.W. Sutherland, M.B.-G. Jun,
Deep learning based approach for identifying conventional machining processes from CAD
data, Procedia Manufacturing, 48 (2020) 915-925.
https://doi.org/10.1016/j.promfg.2020.05.130.
[30] C. Yeo, B.C. Kim, S. Cheon, J. Lee, D. Mun, Machining feature recognition based on deep
neural networks to support tight integration with 3D CAD systems, Scientific reports, 11 (2021)
1-20. https://doi.org/10.1038/s41598-021-01313-3.
[31] H. Kim, C. Yeo, I.D. Lee, D. Mun, Deep-learning-based retrieval of piping component
catalogs for plant 3D CAD model reconstruction, Comput Ind, 123 (2020) 103320.
https://doi.org/10.1016/j.compind.2020.103320.
[32] A.R. Colligan, T.T. Robinson, D.C. Nolan, Y. Hua, W. Cao, Hierarchical CADNet:
Learning from B-Reps for Machining Feature Recognition, Comput Aided Design, 147 (2022)
103226. https://doi.org/10.1016/j.cad.2022.103226.
[dataset] [33] A.R. Colligan, T.T. Robinson, D.C. Nolan, Y. Hua, W. Cao, MFCAD++ Dataset,
2022. https://doi.org/10.17034/d1fec5a0-8c10-4630-b02e-b92dc81df823.
[34] P. Shi, Q. Qi, Y. Qin, P.J. Scott, X. Jiang, Intersecting machining feature localization and
recognition via single shot multibox detector, IEEE Transactions on Industrial Informatics, 17
(2020) 3292-3302. https://doi.org/10.1109/TII.2020.3030620.
[35] S. Gerbino, Tools for the interoperability among CAD systems, in: Proc. XIII ADM-XV
INGEGRAF Int. Conf. Tools and Methods Evolution in Engineering Design, 2003.
[36] OpenCASCADE, Open Cascade Technology. http://www.opencascade.com (accessed 1
May 2022).
[37] DFMPro. HCL Technologies Ltd. https://dfmpro.com (accessed 1 May 2022).
[38] M.K. Gupta, A.K. Swain, P.K. Jain, A novel approach to recognize interacting features for
manufacturability evaluation of prismatic parts with orthogonal features, The International
Journal of Advanced Manufacturing Technology, 105 (2019) 343-373.
https://doi.org/10.1007/s00170-019-04073-7
[39] J. Han, M. Pratt, W.C. Regli, Manufacturing feature recognition from solid models: a status
report, IEEE transactions on robotics and automation, 16 (2000) 782-796.
https://doi.org/10.1109/70.897789
[40] F. Ning, Y. Shi, M. Cai, W. Xu, X. Zhang, Manufacturing cost estimation based on the
machining process and deep-learning method, Journal of Manufacturing Systems, 56 (2020)
11-22. https://doi.org/10.1016/j.jmsy.2020.04.011
[41] Q. Wang, X. Yu, Ontology based automatic feature recognition framework, Computers in
Industry, 65 (2014) 1041-1052. https://doi.org/10.1016/j.compind.2014.04.004
[42] Y. Zhang, X. Luo, B. Zhang, S. Zhang, Semantic approach to the automatic recognition of
machining features, The International Journal of Advanced Manufacturing Technology, 89
(2017) 417-437. https://doi.org/10.1007/s00170-016-9056-8
[dataset] [43] S. Lim, C. Yeo, D. Mun, Dataset of 3D CAD models used for descriptor-based
machining feature recognition, 2022. https://www.dhmun.net/home/Research_Data.