INCAST 2008-126
MODELING OF UNSEGMENTED CLOUD POINT DATA FOR RAPID PROTOTYPING
Sanjay Shukla
ABSTRACT
A novel algorithm, the Intermediate Skeletal Model (ISM), is proposed in the present work for directly slicing cloud point data. To establish connectivity among feature points, the ISM algorithm grows a sphere recursively over the data set. For unsegmented cloud point data, using a uniform sphere radius results in loss of feature information. An adaptive approach to segmentation can capture the feature information and gives more flexibility, since the evaluated quadric can be disc-shaped near edge regions, ellipsoidal at convex/concave features, or a sphere in planar regions, i.e. regions with fewer curvature features. The modified algorithm estimates the radius of the sphere to be grown on a node in real time: at regions of higher curvature the principal axes are small, while at relatively planar regions larger principal axes are used.
1. INTRODUCTION
Reverse Engineering (RE) is a process that generates a computerized representation of a physical object from point data collected on the object's surface. The process is useful (1) when an artist designs an object and the design has to be captured as a CAD model for further analysis; (2) when a product design requires frequent changes in the development cycle and the initial design data become obsolete; and (3) when spare parts are needed for a product whose engineering design is lost [4].
In the process of generating a surface model from point cloud data, segmentation partitions the cloud point data by extracting the edges. The process plays an important role in fitting surface patches and in applying the scan data to the rapid manufacturing process. Woo et al. [4] proposed a segmentation algorithm that organizes the cloud point data into an octree data structure. A bounding box enclosing the data points is divided into an initial grid according to a user-defined parameter. The standard deviation of the normals at the data points in each initial grid block is used as the decision criterion for further dividing the block into eight boxes by halving the three axes. The partitioning edges are removed by removing boxes of a user-defined size, and the separation between regions is then performed automatically by detecting discontinuities across the gaps created by the removed cells. Yang and Lee [5] proposed a parametric quadric surface approximation method to estimate local surface curvature properties. The surface curvatures and principal directions are computed from the locally approximated surfaces. Edge points are identified as the curvature extremes and zero crossings found from the estimated surface curvatures. After the edge points are identified, an edge-neighborhood chain-coding algorithm is used to form boundary curves.
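For concreteness, the octree-style splitting criterion can be sketched as below. This is an illustrative reading of the Woo et al. [4] idea, not their implementation: the function and parameter names (sigma_max, min_points) and the reduction of the normal deviation to a single scalar are assumptions.

```python
import numpy as np

def split_by_normal_deviation(points, normals, box_min, box_max,
                              sigma_max=0.2, min_points=10):
    # Recursively split an axis-aligned box into eight octants while the
    # spread of the point normals inside it exceeds sigma_max.
    inside = np.all((points >= box_min) & (points < box_max), axis=1)
    if inside.sum() < min_points:
        return []                      # sparse cell: candidate for removal
    # use the largest per-component standard deviation of the normals
    # as the decision criterion (one possible reading of the paper)
    if normals[inside].std(axis=0).max() <= sigma_max:
        return [(box_min, box_max)]    # normals agree: keep as one region
    mid = (box_min + box_max) / 2.0
    boxes = []
    for octant in range(8):            # halve each of the three axes
        pick = np.array([(octant >> k) & 1 for k in range(3)], dtype=bool)
        lo = np.where(pick, mid, box_min)
        hi = np.where(pick, box_max, mid)
        boxes += split_by_normal_deviation(points, normals, lo, hi,
                                           sigma_max, min_points)
    return boxes
```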
The next step in the RE pipeline is modeling. A structure on the set of points is needed in order to make explicit the proximity relationships between points on the surface of the object. Modeling techniques are a means of defining the underlying structure contained in the cloud point data. The modeling techniques used in layered manufacturing for defining the underlying surface fall into three broad categories: a) surface interpolation; b) surface reconstruction; and c) direct slicing.
In recent years, some approaches have been suggested that fit cloud point data with a skeleton rather than a full-fledged surface model. Shape errors in the final model can creep in during modeling, tessellation or slicing. In direct slicing, a layer-based model is generated directly from the cloud data and is very close to the final RP model; there is therefore essentially only one source of shape error. If this error can be controlled effectively, this approach has an advantage over the other two modeling approaches in terms of shape-error control on the RP model [6].
Wu et al. [6] have reported an adaptive approach for finding the contour/slice information. The algorithm builds the model along a user-specified slicing direction. The initial layer is obtained from one end with a sufficiently small thickness. The mid-plane of the initial layer is used as the projection plane and the points within the layer are projected onto this plane. The 2D points on the plane are used to construct a closed polygonal curve, and the distances between the points and the polygon are then calculated as the actual shape error ($\delta_{actual}$). If $\delta_{actual}$ is smaller than the given shape tolerance, the layer thickness is adaptively increased. The first layer is then constructed by extruding the polygon along the slicing direction with the determined maximum allowable thickness, and subsequent layers are constructed in the same manner. Liu et al. [2], [3] have reported an error-based segmentation approach for arbitrarily scattered cloud data. The 3D data cloud is adaptively subdivided by a cluster of parallel planes, defined according to a user-specified plane and its normal direction. Feature points in the subdivided regions are extracted and connected to the nearest four feature points within and across regions; interpolation between the feature points generates a skeletal representation of the prototype's surface.
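A hedged sketch of the adaptive layer-thickness idea attributed to Wu et al. [6] follows: grow the layer while the shape error of the mid-plane projection stays within tolerance. The convex hull used here for the closed polygon, and all names, are illustrative stand-ins for the paper's closed polygonal curve.

```python
import numpy as np
from scipy.spatial import ConvexHull

def shape_error(pts2d):
    # Stand-in shape error: maximum distance from the projected 2D points
    # to the closed polygon given by their convex hull (needs >= 3
    # non-collinear points; the paper fits a general polygonal curve).
    poly = pts2d[ConvexHull(pts2d).vertices]
    d = np.full(len(pts2d), np.inf)
    for a, b in zip(poly, np.roll(poly, -1, axis=0)):
        ab = b - a
        t = np.clip((pts2d - a) @ ab / (ab @ ab), 0.0, 1.0)
        d = np.minimum(d, np.linalg.norm(pts2d - (a + t[:, None] * ab), axis=1))
    return d.max()

def adaptive_layer_thickness(points, z0, dz0, dz_step, tol):
    # Thicken the layer [z0, z0 + dz] while the projected points stay
    # within the shape tolerance tol (delta_actual <= tol).
    dz = dz0
    while z0 + dz + dz_step <= points[:, 2].max():
        layer = points[(points[:, 2] >= z0) & (points[:, 2] < z0 + dz + dz_step)]
        if len(layer) < 3 or shape_error(layer[:, :2]) > tol:
            break                      # tolerance violated: keep last thickness
        dz += dz_step
    return dz
```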
Direct slicing methods stay very close to the final RP model and control the error effectively, but their mathematical treatment via projection of points brings in inherent assumptions, and hence a possibility of error seeping into the generated model. A skeletal model lacks topological information, so it becomes imperative to have a data structure that can be queried at a later stage to extract the feature information.
The rest of the paper is organized as follows. Section 2 discusses the novel Intermediate Skeletal Model (ISM) algorithm developed in the present work for generating the skeletal model. Section 3 gives the modified ISM algorithm for modeling unsegmented cloud point data. Section 4 presents case studies that show the efficacy of the approach. Section 5 summarizes the present work and its contributions, and gives some suggestions for future research and development.
2. MODELING
A combined segmentation-cum-modeling technique is suggested in the present work. The modeling of the cloud is based on an analogy with wave propagation according to the Huygens-Fresnel principle. An advancing front on a seed point is approximated as spherical in shape; terming the points falling within the sphere the active points, the local skeletal model can be seen as the connections between the seed point and the active points. The active points are centres of fresh disturbances and form an advancing front, which grows to form a full-fledged skeletal model on the cloud point data. The medium changes that a wave encounters correspond to the curvature changes in the cloud point data. By dynamically varying the radius of the sphere grown on each active point, the approach can capture the curvature information in the data points. The suggested approach performs the modeling and segmentation simultaneously and thus does not require separate instantiation of the two procedures.
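A minimal sketch of this front propagation with a constant sphere radius is given below (the adaptive variant of Section 3 replaces the fixed radius with a per-node estimate); the function name and the breadth-first bookkeeping are illustrative assumptions.

```python
import numpy as np
from collections import deque
from scipy.spatial import cKDTree

def grow_skeleton(points, seed_index, radius):
    # Breadth-first propagation of the spherical front: each active point
    # becomes a centre of fresh disturbance and is connected to the point
    # whose sphere captured it.
    tree = cKDTree(points)
    visited = {seed_index}
    front = deque([seed_index])
    edges = []                               # skeletal connections (i, j)
    while front:
        i = front.popleft()
        for j in tree.query_ball_point(points[i], radius):
            if j not in visited:
                visited.add(j)               # j is now an active point
                edges.append((i, j))
                front.append(j)
    return edges
```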
Figure 3.1. (a) Filleted edge with a constant radius of sphere; (b) extraction of curvature information by the ellipsoid.
The following properties are desired for the quadric to enable it to capture intricate feature information. The quadric should be banded, i.e. it should capture the points in a thin band. It should capture a sufficient number of points. Finally, the quadric should be balanced: this property ensures that the quadric captures points from all directions in 3D and results in a balanced, well-connected skeletal model that does not sacrifice feature information. The objective function is designed so that these desired properties are well represented:
a) For banding the data points in a thin region, the average projection error [2] in the three coordinate planes is taken as a function to be minimized. Throughout the execution of the ISM algorithm the average projection error is kept within the designer-specified bound. Figure 3.3 shows a case of the average projection error band. The ellipsoidal band is used for representational ease; the error band can be of any generic shape.
The point set within the ellipsoid's influence is projected onto the three reference planes defined by the vectors $\bar{x}$, $\bar{y}$ and $\bar{z}$. The centre of the projected data set in each plane is calculated as $P_c = \frac{1}{n}\sum_{i=1}^{n} P_i$, where $P_c$ is the centre of the data set, $P_i$ is the $i$th data point and $n$ is the number of points in the data set $X$. Assume that the point $O \in X$ is closest to $P_c$, and denote by $T(O, \bar{n})$ the plane associated with the centre point $O$ and the unit normal vector $\bar{n}$ perpendicular to the reference plane; $T(O, \bar{n})$ is taken as the projecting plane of the data set $X$. The projecting error of a point $P_i \in X$ to $T(O, \bar{n})$ is then calculated as $\mathrm{dist}(p, T) = \bar{n}^{T}(p - O)$, and the average projection error of the data points in the region to the projection plane is $\frac{1}{n}\sum_{p \in P} \mathrm{dist}(p, T)$.
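The banding term follows directly from these definitions. The sketch below evaluates the average projection error for one reference plane; the function name is assumed, and the absolute value of the signed distances is an interpretation of how the average is taken.

```python
import numpy as np

def average_projection_error(P, normal):
    # Plane T(O, n): O is the point of P closest to the centroid Pc,
    # n the unit normal of the reference plane.
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    Pc = P.mean(axis=0)                      # Pc = (1/n) * sum(Pi)
    O = P[np.argmin(np.linalg.norm(P - Pc, axis=1))]
    d = (P - O) @ n                          # dist(p, T) = n^T (p - O)
    return np.abs(d).mean()                  # average projection error
```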
b) For balancing the quadric, the total solid angle subtended by the surface fitted onto the data points within the quadric's influence region (the active points) is taken as an objective function to be maximized. For a single point within the quadric the solid angle is taken as zero, while for two points it is taken as the angle subtended by the interpolating line between the data points. If three or more points are registered in the quadric, the points are triangulated and the solid angle is the sum of the angles subtended by each of the triangles. An efficient algorithm for calculating the solid angle subtended by a triangle with vertices A, B and C, as seen from the origin, can be found in the work of Oosterom and Strackee [1].
The solid angle subtended by the triangle is given as [1]:

$$\Omega = 2 \tan^{-1} \frac{[\bar{a}\,\bar{b}\,\bar{c}]}{abc + (\bar{a}\cdot\bar{b})\,c + (\bar{a}\cdot\bar{c})\,b + (\bar{b}\cdot\bar{c})\,a}$$

where $[\bar{a}\,\bar{b}\,\bar{c}]$ denotes the determinant of the matrix formed by writing the vectors together in rows (e.g. $M_{i1} = a_i$ and so on), which is equivalent to the scalar triple product of the three vectors; $\bar{a}$ is the vector representation of point A, while $a$ is the modulus of that vector (the origin-point distance); $\bar{a}\cdot\bar{b}$ denotes the scalar product. A balanced quadric can be evaluated by maximizing the function

$$\max \sum_{i=1}^{n-2} \Omega(\mathrm{delaunay}\,P_i),$$

where $\Omega(\mathrm{delaunay}\,P_i)$ represents the solid angle subtended by the $i$th Delaunay triangle of the data set $P$.
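The cited formula translates directly into code. The sketch below returns the signed solid angle, whose sign depends on the vertex order; the function name is an assumption.

```python
import numpy as np

def triangle_solid_angle(A, B, C):
    # Van Oosterom-Strackee formula [1] for the solid angle subtended at
    # the origin by the triangle with vertices A, B, C.
    a, b, c = (np.asarray(v, dtype=float) for v in (A, B, C))
    na, nb, nc = np.linalg.norm(a), np.linalg.norm(b), np.linalg.norm(c)
    triple = np.dot(a, np.cross(b, c))       # scalar triple product [a b c]
    denom = (na * nb * nc + np.dot(a, b) * nc
             + np.dot(a, c) * nb + np.dot(b, c) * na)
    return 2.0 * np.arctan2(triple, denom)   # Omega = 2 * atan(...)
```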
c) Non-degeneracy: the quadric grown on a node is desired to enclose a sufficient number of points within its region of influence. The non-degeneracy condition prevents a quadric devoid of any points: the number of points $s$ inside the quadric of influence is maximized in the objective function. The resulting mathematical formulation of the objective function is:
$$\min \left[ \sum_{i=x,y,z} \frac{1}{n} \sum_{p \in P} \big(\mathrm{dist}(p, T_i)\big)^2 \right]^{1/2} + \max \sum_{i=1}^{n-2} \Omega(\mathrm{delaunay}\,P_i) + \max(s)$$
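How the three terms are combined numerically is not spelled out above, so the sketch below folds them into a single scalar with assumed unit weights, reusing average_projection_error and triangle_solid_angle from the earlier sketches; the convex hull stands in for the Delaunay triangulation of the captured points, and the mean projection error approximates the rms banding term.

```python
import numpy as np
from scipy.spatial import ConvexHull

def ism_objective(P, node):
    # Single-scalar stand-in for the multi-term objective: the banding
    # term is minimized, so the maximized balance and count terms enter
    # with a negative sign. Assumes P holds >= 4 non-coplanar points.
    axes = np.eye(3)
    band = np.sqrt(sum(average_projection_error(P, n) ** 2 for n in axes))
    V = P - node                             # angles seen from the node
    balance = sum(abs(triangle_solid_angle(*V[tri]))
                  for tri in ConvexHull(V).simplices)
    s = len(P)                               # non-degeneracy term
    return band - balance - s
```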
The quadric of influence is forced toward a spherical shape, so the objective function is penalized by the amount by which the moment of inertia of the quadric differs from the moment of inertia of the sphere whose radius equals the average of the quadric's principal axes. The penalty value is normalized to the range 0 to 1:

$$\left(\frac{I_{xx}^{q}}{I_{xx}^{s}} - 1\right)^2 + \left(\frac{I_{yy}^{q}}{I_{yy}^{s}} - 1\right)^2 + \left(\frac{I_{zz}^{q}}{I_{zz}^{s}} - 1\right)^2,$$

where the subscript denotes the axis along which the moment of inertia is calculated and
the superscripts $s$ and $q$ denote the sphere and the quadric respectively. The modeling starts initially with a constant radius
of the sphere. The standard deviation of the directed vectors from a seed point (node) to the active points falling within the influence of the quadric is estimated. If this standard deviation exceeds a preset value, the optimization is carried out to estimate the principal axes of the quadric. The optimization is also invoked when the deviation among the quadric's principal axes falls below a defined threshold; this second call helps spring the ellipsoid back to a sphere when it comes out of a concave or convex feature.
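Two small helpers sketch the mechanisms just described: the sphere-deviation penalty (using uniform-density ellipsoid moments of inertia, an assumption, and omitting the normalization to [0, 1]) and the standard-deviation trigger for the ellipsoid optimization; the names and the scalar reduction of the deviation are illustrative.

```python
import numpy as np

def sphericity_penalty(a, b, c):
    # Compare the moments of inertia of a solid ellipsoid with principal
    # semi-axes (a, b, c) against those of the sphere whose radius is
    # their mean; the value is zero when a = b = c.
    r = (a + b + c) / 3.0
    Iq = np.array([b*b + c*c, a*a + c*c, a*a + b*b]) / 5.0  # per unit mass
    Is = 2.0 * r * r / 5.0                                  # sphere moment
    return float(np.sum((Iq / Is - 1.0) ** 2))

def needs_optimization(node, active_points, threshold):
    # Trigger for the ellipsoid optimization: the spread of the directed
    # unit vectors from the node to its active points.
    v = active_points - node
    v = v / np.linalg.norm(v, axis=1, keepdims=True)
    return np.linalg.norm(v.std(axis=0)) > threshold
```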
4. RESULTS
The modified Intermediate Skeletal Model algorithm has been implemented in MATLAB. Two case studies are presented to illustrate the efficacy of the algorithm for constructing the skeletal model. Both studies are based on simulated data sets in which the original cloud data are generated from a freeform surface. The data points for the first freeform surface are structured, while the data points for the second model are unstructured and are obtained by triangulating the generated solid model and extracting the coordinates of the nodes of the triangles.
It can be seen that the suggested approach captures the curvature information aptly. Points that are separated far apart, however, are not captured well: the overall skeletal model loses some connectivity information where the data points are separated by a large Euclidean distance.