3DF Wiki Guide
User Guide
Release 0.51
A tool for processing of point clouds acquired by terrestrial laser scanning in forests
The Silva Tarouca Research Institute, Pub. Res. Inst.
Department of Forest Ecology.
Authors: Michal Petrov, Martin Krůček, Jan Trochta, and Kamil Král
©2020
Table of contents
3D Forest User Guide
Home
1. About Application
2. Detectable Attributes
3. Installation
5. Basic Workflow
6. Setting a Project
7. Terrain analysis
8. Vegetation
9. Trees
10. Crowns
11. QSM Models and Tree assortments
12. Data Export
14. References
Home
Welcome to the 3D Forest wiki/user guide. All information about the installation, usage, and
methods implemented within the 3D Forest application can be found here. For more info see
the sidebar/table of contents.
1. About Application
The 3D Forest is an application created to process terrestrial laser scanning (TLS) data, point
clouds, and to gain detailed information about forest stands and individual trees.
The application is released under the terms of the GNU General Public License v3 as
published by the Free Software Foundation. For more info about the license, one can view the
LICENSE.txt file located in the program folder or on the web page:
http://www.gnu.org/licenses/.
The application is written in C++ and depends on the libraries: VTK, PCL, Eigen, Boost,
Flann, LibLAS, and Qt. For a successful installation, users have to build these libraries before
compiling the 3D Forest. The source code is provided for downloading and compiling the
application on any computer. The Windows installer, as well as trial data, are available on the
site: www.3dforest.eu.
2. Detectable Attributes
In the current version of 3D Forest (0.51), the following characteristics can be calculated:
Tree Attributes
Position: gives the X, Y, Z coordinates of the tree base in the Cartesian coordinate
system. More information in section Trees.
DBH: this attribute determines the diameter at breast height, i.e. the tree stem diameter
calculated from a subset of points from 1.25 to 1.35 m above the tree base. See also:
least squares regression and randomized Hough transformation.
Height: vertical distance between the tree base and the highest point of the tree, (i.e.
the max. difference in Z coordinate in meters).
Cloud length: gives the longest distance between two points in the cloud. It is suitable
for the length calculation of highly inclined or lying trees.
Stem Curve: determines the stem centers and diameters calculated at various heights
above the tree base (0.65 m, 1.3 m, 2 m, 3 m, etc.).
Convex planar projection of the tree: returns polygon with the shortest boundary
containing all points of the tree cloud orthogonally projected on the horizontal plane.
Concave planar projection of the tree: returns polygon with the smallest area
containing all points of tree cloud orthogonally projected on the horizontal plane.
The number of tree points: yields the number of points representing a single tree.
Crown Attributes
Crown centroid: gives the coordinates of the tree crown center position (X, Y, Z) in the
Cartesian coordinate system as computed from the crown's external points. More
information in section Crowns.
Crown position deviation: returns the crown position deviation from the tree base
position. It is defined by distance (m) and direction (°).
Crown bottom height: provides the vertical distance (i.e. the difference in Z
coordinates in meters) between the tree base position and the height of the place where the
lowest living branch attaches to the main stem.
Crown height: computes the vertical distance (i.e. the difference in Z coordinates in
meters) between the crown bottom height and its highest point.
Crown total height: gives vertical distance (i.e. the difference in Z coordinates in
meters) between the lowest and the highest point of the tree crown.
Crown volume by voxels: returns crown volume computed from voxels of a given
size.
Crown volume and surface area by concave polyhedron: computes crown volume and
surface area utilizing cross-sections of a given height and concave hull threshold
distance.
Crown volume and surface area by 3D convex hull: returns the volume and surface area of
the crown's 3D convex hull.
Crown intersections: gives volume and surface area of intersecting space between
convex hulls of two crowns.
3. Installation
WIN
The 3D Forest installer for Windows is located on https://github.com/VUKOZ-
OEL/3DForest/releases/download/v0.5/3DForest_05.exe. Once downloaded, the installation
wizard will guide you through. Furthermore, sample data to test the 3D Forest are placed on
https://github.com/VUKOZ-OEL/3dforest-data. For uninstalling run the uninstall.exe file
located in the installation folder.
For installation from source, it is important to install the dependencies first, and also
to have a compiler, an IDE, and Git installed.
FLANN
Even if you installed the Flann library in a previous step, you will need to use a different
version in the vcpkg directory. List all available versions with vcpkg x-history flann
and check out version 1.9.1 with git checkout
c626675abb963d15f5d290a56005556d95b160bd -- ports/flann. After this, remove
Flann (vcpkg remove flann) and install it back (vcpkg install flann).
VTK
Since vcpkg has some issues with VTK versions, VTK should be compiled separately from
source.
macOS, Linux
For the macOS and Linux users, there is only an option to download and compile the source
code.
First, dependencies are installed using brew install boost vtk qt flann Eigen
LibLAS Cmake. The second step is to compile PCL from source, as 3D Forest uses
changes not yet merged into the library. It can be downloaded from
https://github.com/janekT/pcl/tree/pointpicking using Git or as a zip file. CMake creates a
project for Xcode; the PCL library then has to be compiled and installed.
3D Forest can be downloaded from the repository https://github.com/VUKOZ-
OEL/3DForest/tree/master. CMake finds all dependencies for the configuration of the
project. 3D Forest can then be run via Xcode.
Figure 1: The main window of 3D Forest: the main menus, project icons, show/hide
icons (these serve for refreshing the visualized information, e.g. after recomputation),
the tree and crown toolbar (if not colored, the values must be computed first using the
main menu), the tree widget, and the visualisation toolbar.
Keyboard
p, P: switch to a point-based representation
w, W: switch to a wireframe-based representation (where available)
s, S: switch to a surface-based representation (where available)
+/ -: increase/decrease overall point size
g, G: display scale grid (on/off)
u, U: display lookup table (on/off)
r, R: reset the camera
f, F: focus on the point
x, X: selection (in selection mode)
o, O: perspective (on/off)
Alt +: show and allow quick access keys in the menu (underlined character)
Tips: Switching between the wireframe and point-based representations does not affect the
point clouds. The correct values of the scale grid are shown only when the perspective is
off. The lookup table displays the value extent and color range if Color by field is used.
Cloud type description
All data imported into 3D Forest are stored in the PCD file format (binary compressed
PointCloudData). A PCD file contains the coordinates of all points together with an
intensity value; these four variables of the point representation are used for the moment.
Additional information, such as color or the path to the transformation matrix file, can be
found in the MyProject.3df file.
There are different point cloud representations during 3D Forest data processing:
Base cloud:
Represents raw data imported into the 3D Forest. The data should be preprocessed in some
other software (usually provided with the scanner), where cloud fitting and registering are
made. The base cloud is not differentiated. It embodies points of all objects such as terrain,
vegetation, buildings, etc.
Figure 2: Example of the base cloud before any segmentation – all points contain just X, Y,
Z, and intensity values; no objects or surface types are differentiated.
Terrain cloud:
Using Terrain analysis the base cloud can be divided into vegetation and terrain – two main
parts of the forest ecosystem. The terrain cloud represents ground surface and can be used for
better calculation of tree position or exported into GIS software for detailed
(micro)topography analysis.
Figure 3: Visualization of Terrain cloud (brown) and Vegetation cloud (green) after
ground/vegetation separation.
Vegetation cloud:
The part of the base cloud which is not terrain is labeled as vegetation. From this point
cloud, the individual trees can be segmented automatically or selected manually to create
another cloud type – the tree cloud.
Tree cloud:
This cloud represents a single tree after segmentation. Only for tree clouds is it
possible to compute basic tree variables like DBH, height, position, etc.
Figure 4: Visualization of Tree clouds after tree segmentation – individual trees are displayed
in different colors.
Other:
Clouds that represent unclassifiable points or points which do not belong to any other cloud
type. These clouds can be displayed or used as vegetation for tree selection/segmentation.
5. Basic Workflow
Once the application is installed, the workflow is to set up a project, import data, and
segment the data into terrain, vegetation, and trees. Only then can the tree and crown
features be evaluated.
Visual Check
It is important to check the cloud quality after the import, mainly if there were any
doubts while setting up the project. If the cloud looks like parallel lines, the
transformation matrix is wrong and the project has to be recreated with a different
(appropriate) transformation matrix.
Project Opening
A project can be opened via Project→ Open Project, or click on the Open Project icon in
the project context menu and select your project file (file with extension .3df).
Project Import
Since projects save their path, simply copying a project into another location (disk or
directory) would break it. If such a change is needed, it has to be done as an import of the
old project into a new location. Go to Project → Import Project. Choose the path to the old project,
set the name of the new project and location where the new project will be created. The new
folder with the project name will be created and all data files will be moved there. It is also
possible to select the option for removing the old project from disk after the import.
Data Import
3D Forest enables importing data into a project via Project → Import. It is possible to
import data in the following formats: .txt, .xyz, .las, .pts, .ptx. If the user's data are
already in the 3D Forest native format (PCD), it is important to choose an appropriate
cloud type (see Cloud type description) and to set the transformation matrix. Once the
data are imported, they are listed in the main window and in the list of project clouds.
7. Terrain analysis
There are two automatic methods for terrain processing. None of these methods are perfect, so
it is advised to use both of them to combine the advantages of their outputs. There is also a
possibility of manual adjustment of the results.
Manual adjustment
The above-described terrain extraction can also be adjusted by manual editing. Go to Terrain
→ Manual adjustment and select the terrain cloud for editing. Set a name for a new cloud
where points removed from the terrain cloud will be saved. Pressing the "x" button activates
the selection box, and the left mouse button makes the selection. The designated points will
be removed from the terrain file and stored in the separate file defined above. For a step
back, press the undo icon in the top panel. There is also a split function that creates user-
defined strips of terrain for easier point selection; arrows switch among the strips, and the
strip selection is finished by clicking on the split function again. The Stop EDIT icon saves
all changes into the original (input) terrain cloud.
IDW interpolation
Areas with missing terrain points (typically shaded by thick trees or located right under the
scanner during scanning) may be filled in by Inverse Distance Weighted (IDW) interpolation
from the surrounding terrain points. Go to Terrain → IDW to select an input terrain cloud for
interpolation. Set the resolution of the interpolation in cm and the number of closest points
(n) to be included in the interpolation. Name the new interpolated terrain cloud that will be
free of empty areas. The interpolation uses the n closest points of the original terrain to
estimate the Z value of each new terrain point.
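The estimate described above can be sketched as follows. This is an illustrative Python snippet, not 3D Forest's C++ implementation; the function name and the inverse-distance power (2, a common choice) are assumptions.

```python
import math

def idw_z(x, y, terrain_points, n=8, power=2.0):
    """Estimate the Z value at (x, y) by Inverse Distance Weighted
    interpolation from the n closest terrain points.
    terrain_points is a list of (x, y, z) tuples."""
    # Take the n terrain points closest to the query location (in 2-D).
    neighbors = sorted(
        terrain_points,
        key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2,
    )[:n]
    num, den = 0.0, 0.0
    for px, py, pz in neighbors:
        d = math.hypot(px - x, py - y)
        if d == 0.0:
            return pz  # the query coincides with a measured point
        w = 1.0 / d ** power
        num += w * pz
        den += w
    return num / den
```

A query point equidistant from all its neighbors simply receives their mean Z value.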
Slope
The slope analysis is a method that evaluates the terrain slope. This function is reachable via
Terrain → Slope. The terrain slope, in general, is defined as the first derivative of the terrain.
The slope is thus calculated from the height difference based on a user-defined sector. Users
can choose between the sector defined by the nearest neighbors or the radius. Each point of
the selected terrain cloud is then compared to the others within the defined sector and the
mean slope value is computed. The result is stored as a field of intensity in degrees or in
percent, where 45 deg represents 100%.
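The degree/percent relation and the neighbor-based slope can be sketched in Python. This is an illustrative approximation; 3D Forest's exact sector handling is not reproduced here, and the function names are ours.

```python
import math

def slope_percent(slope_deg):
    # Slope in percent is the tangent of the angle times 100,
    # so 45 degrees corresponds to 100 %.
    return math.tan(math.radians(slope_deg)) * 100.0

def mean_slope(point, neighbors):
    """Mean slope (degrees) of `point` relative to its neighbors,
    from the height difference over the horizontal distance."""
    x, y, z = point
    slopes = []
    for nx, ny, nz in neighbors:
        horiz = math.hypot(nx - x, ny - y)
        if horiz > 0.0:
            slopes.append(math.degrees(math.atan2(abs(nz - z), horiz)))
    return sum(slopes) / len(slopes) if slopes else 0.0
```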
Aspect
The aspect is a method for evaluating the slope orientation of a given terrain layer. For
each point of the terrain layer, a surrounding is defined in which the average orientation is
calculated. The result is given in degrees, i.e. -180° to -135° = West, -135° to -45° =
South, -45° to 45° = East, 45° to 135° = North, 135° to 180° = West. The result is stored
as a field of intensity.
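The mapping of aspect degrees to compass directions used above can be expressed as follows (illustrative Python; the function name is ours):

```python
def aspect_direction(aspect_deg):
    """Map an aspect angle in degrees (-180..180) to the compass
    direction ranges used in the guide."""
    a = aspect_deg
    if -135 <= a < -45:
        return "South"
    if -45 <= a < 45:
        return "East"
    if 45 <= a < 135:
        return "North"
    return "West"  # covers -180..-135 and 135..180
```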
Hillshade
This function returns shaded relief of a given layer of terrain. The values of the resulting
cloud range from 0-255 according to the azimuth and orientation.
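The guide does not give the hillshade formula; the following Python sketch uses the standard Lambertian hillshade, with an assumed sun azimuth of 315° and altitude of 45° (common GIS defaults, not necessarily 3D Forest's values):

```python
import math

def hillshade(slope_deg, aspect_deg, sun_azimuth=315.0, sun_altitude=45.0):
    """Shaded-relief value in 0-255 from slope and aspect (degrees),
    using the standard Lambertian hillshade formula."""
    zenith = math.radians(90.0 - sun_altitude)
    slope = math.radians(slope_deg)
    azimuth = math.radians(sun_azimuth)
    aspect = math.radians(aspect_deg)
    # Cosine of the angle between the surface normal and the sun.
    shade = (math.cos(zenith) * math.cos(slope)
             + math.sin(zenith) * math.sin(slope) * math.cos(azimuth - aspect))
    return max(0, round(255 * max(shade, 0.0)))
```

Flat terrain receives a uniform mid-grey; slopes facing away from the sun approach zero.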
Terrain Features
This method looks up identical places in the terrain based on their parameters. Identical
places are defined by their value in the field of intensity, the number of points, the size of
the major axes, the axes ratio, the size of the maximal area (concave), the convex area, and
the convex/concave ratio. The result is saved neither to disk nor to the project; the Export
Features function serves for that.
Feature table
The feature table displays a table of attributes for each terrain feature.
Export Features
This function exports the terrain feature information. The user selects the directory to save
the file, the field delimiters, and the files to export. There is an option to choose between
an attribute file and a convex/concave polygon file. The attribute file contains the position
of the centroid of the selected voxels and its attributes; convex/concave polygons are
polygons delimiting the voxels of the given features. These are saved in the VTK format.
8. Vegetation
After the terrain extraction, there are two types of clouds in the project: the terrain and the
vegetation clouds. However, the vegetation cloud needs to be further segmented into single
trees to get the desired information about tree attributes. Right now there are two methods
implemented in the 3D Forest, automatic and manual segmentation.
Automatic segmentation - version 0.5 and higher
The automated segmentation is called via Vegetation → Automatic tree segmentation. There
are ten inputs needed to start. The vegetation and terrain clouds of interest are obvious. The
voxel size, descriptor type, descriptor threshold value (%), number of iterations, number of
voxels in an element, and the distance from terrain are explained in detail in the next
paragraphs, as well as the method itself. The cloud prefix and the name for the
non-segmented points output define the output.
The algorithm used in version 0.5 and higher is based on searching neighboring voxels
according to the chosen descriptor. At first, voxels of a given size (voxel size in cm) are made
through the whole vegetation cloud. Each voxel is then evaluated by user-defined descriptor:
Principal Component Analysis (PCA), slope, intensity, and PCA-slope multiplication.
PCA computes Principal Component Analysis of x,y,z coordinates of all points inside each
voxel. The ratio of different PCA axes is then used as the descriptor. Based on our
experiences, the PCA descriptor usually gives the best segmentation results for TLS data.
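A per-voxel PCA descriptor can be sketched as follows (illustrative Python with NumPy). 3D Forest's exact axis-ratio formula is not specified here, so the share of variance along the largest axis is used as one possible choice; the function name is ours.

```python
import numpy as np

def pca_descriptor(points):
    """Descriptor of one voxel: PCA of the x, y, z coordinates of the
    points inside it, summarized as the share of variance along the
    principal axis. Values near 1 indicate elongated, surface-like
    structures such as stems or big branches."""
    pts = np.asarray(points, dtype=float)
    cov = np.cov(pts.T)                               # 3x3 covariance matrix
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]  # descending order
    # One of several possible axis-ratio descriptors (an assumption).
    return eigvals[0] / eigvals.sum()
```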
The threshold value is defined as a percentile of the descriptor value. The range is 0-100%
and represents the lower limit of the voxels used for segmentation. For example, if the input
threshold value is 70, then only the voxels whose descriptor value is at 70% of the actual
value range or higher are used for the tree extraction. The higher the threshold, the more
similar and compact the represented surfaces are. The ideal values of the input parameters
are highly dependent on the point density and the structure of the forest stand (see Tips
below). The resulting representations are the surfaces of a stem or big branches.
All voxels above the descriptor threshold value and within the voxel size are then grouped.
This means that these groups of voxels contain points of similar topological characteristics
and the voxels are neighbors (connected without interruption). Such groups of voxels are
called elements if they satisfy the condition of a minimal number of voxels in an element.
Within these elements, the tree bases are classified by complying with the given distance
above terrain points.
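Grouping neighboring voxels into elements amounts to finding connected components on the voxel grid. A minimal Python sketch, assuming 26-connectivity on integer voxel indices (the guide does not specify the connectivity; function and parameter names are ours):

```python
from collections import deque

def voxel_elements(voxels, min_voxels=5):
    """Group neighboring voxels (26-connectivity on integer voxel
    indices) into elements; keep only groups with at least
    `min_voxels` members."""
    remaining = set(voxels)
    elements = []
    while remaining:
        seed = remaining.pop()
        group, queue = {seed}, deque([seed])
        while queue:  # breadth-first flood fill over neighboring voxels
            x, y, z = queue.popleft()
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for dz in (-1, 0, 1):
                        nb = (x + dx, y + dy, z + dz)
                        if nb in remaining:
                            remaining.remove(nb)
                            group.add(nb)
                            queue.append(nb)
        if len(group) >= min_voxels:
            elements.append(group)
    return elements
```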
As branches or stems may be interrupted by occlusions bigger than the voxel size, and/or
small sprigs do not always meet the descriptor threshold value, more voxels need to be
added to make the tree complete. The established tree bases are thus gradually connected
with voxels forming the rest of the tree parts during an iterative process. This allows
finding also free voxels that lie farther away than the voxel size.
The iterative method is automatic and has two parts. It starts by omitting the descriptor
condition: all free voxels within one voxel size of the tree are considered as belonging to
it, and the surrounding of every added voxel is immediately searched for further neighbors.
Once all neighbors are added, the search distance is increased by one voxel size and the
surrounding of the established tree bases is searched again; if the free voxels or elements
within this space have a descriptor value above the threshold, they are added to the tree.
Afterward, all free voxels lying within the doubled voxel size are added to the enlarged
tree, as well as their neighbors. These two steps are repeated according to the number of
iterations. The appropriate number of iterations thus depends on your data quality and
forest characteristics. Data with minor occlusions can be segmented well with few
iterations; on the other hand, too few iterations can lead to omission errors (missing
treetops), and too many iterations to commission errors, especially in very dense forests.
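A strongly simplified sketch of the distance-growing idea follows. It omits the descriptor re-check of added elements, works on integer voxel indices with a Chebyshev metric, and grows the search radius by one voxel per iteration; all names are ours.

```python
def grow_tree(tree_voxels, free_voxels, iterations):
    """In each iteration the search distance around the current tree
    grows by one voxel, and free voxels within that distance
    (Chebyshev metric on voxel indices) are attached to the tree."""
    tree = set(tree_voxels)
    free = set(free_voxels)
    for radius in range(1, iterations + 1):
        added = {
            v for v in free
            if any(max(abs(a - b) for a, b in zip(v, t)) <= radius for t in tree)
        }
        if not added:
            break  # nothing reachable anymore, stop early
        tree |= added
        free -= added
    return tree, free
```

Voxels left in `free` after the loop correspond to the "rest cloud" of non-segmented points.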
When the iteration is finished, segmented trees are saved into the project folder as separate
clouds with a given prefix and number of the tree. All points that are not selected as a tree are
saved in vegetation as a “rest cloud”.
Tips:
Voxels should have a size to incorporate at least 3 points from the point cloud. If the
point cloud is dense enough, a smaller voxel size can be used and vice versa.
For dense forest stands smaller voxel size is advised to maximize the number of
segmented trees. On the other hand, with small voxel size trees might not be complete
after segmentation.
For sparse/even forest stands larger voxels can be used.
If neighboring trees are merged after segmentation, try to increase the threshold of the
descriptor value and/or use smaller voxels.
If you want to avoid segmentation of small trees increase the distance from terrain and
voxel size.
It is better to use fewer iterations for deciduous trees scanned in the leaf-off state
(lower occlusion is anticipated). On the contrary, coniferous (evergreen) trees usually
require more iterations due to occlusions in crowns.
In some cases, two rounds of segmentation can be useful: first run the segmentation
with finer parameters, then segment the rest-of-vegetation point cloud with coarser
parameters.
Figure 5: Differences between the DBH computed according to the tree base position
estimated from the tree’s lowest points (left) and the DBH recomputed according to the tree
base position adjusted to the terrain cloud (right). The tree base position is represented by the
red dot.
DBH by Randomized Hough Transformation
Computing DBH using the randomized Hough transformation is done via Trees → DBH RHT.
The dialog window gives the option to select trees of interest and set the number of iterations.
As always, the number of iterations is a trade-off between computation time and accuracy. It
is recommended to use at least 200 iterations for a fast computation; a higher number of
iterations costs more computational time but increases accuracy, and 2000 iterations already
provide fairly consistent results. The resulting DBH is displayed as a 10 cm high cylinder
with the diameter of the best-fitted circle.
The method itself is based on a parametric description of objects within the polar coordinate
system. The DBH subset of the tree cloud (i.e. points from 1.25 to 1.35 m above the tree
base) is projected onto a horizontal plane and the Z coordinates are set to 1.3 m. Then, the
scheme searches every possible circle center for each point of the subset, and the most
frequent circle center is selected as the result (Xu and Oja, 1993).
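The center-voting scheme can be sketched in Python: each randomly sampled point triplet defines one candidate circle, and the most frequently voted (binned) center wins. The bin size and sampling details are assumptions for illustration, not 3D Forest's exact parameters.

```python
import random
from collections import Counter

def circle_from_3_points(p1, p2, p3):
    """Center and radius of the circle through three 2-D points
    (None if the points are collinear)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if abs(d) < 1e-12:
        return None
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    r = ((x1 - ux) ** 2 + (y1 - uy) ** 2) ** 0.5
    return ux, uy, r

def rht_circle(points, iterations=200, bin_size=0.01):
    """Randomized Hough transform: sample point triplets, vote for the
    circle centers they define (binned to `bin_size`), and return the
    circle belonging to the most frequent center."""
    votes = Counter()
    candidates = {}
    for _ in range(iterations):
        c = circle_from_3_points(*random.sample(points, 3))
        if c is None:
            continue
        key = (round(c[0] / bin_size), round(c[1] / bin_size))
        votes[key] += 1
        candidates[key] = c
    return candidates[votes.most_common(1)[0][0]]
```

With a noise-free stem slice every triplet votes for the same center, which is why even a few hundred iterations suffice in practice.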
Figure 6: Differences between the RHT (left) and the LSR (middle and right) method; the
wrong value given by LSR is caused by an overhanging branch, which was included in an
automatically defined DBH subset of the tree cloud.
Tree Height
To obtain the tree height go to Trees → Height. There is an option to select trees of interest in
the dialog window. The result is displayed in meters on the top of the tree. This tool also
displays a vertical line from the tree base position to the highest point.
Tree Length
The tree or cloud length calculation is done via Trees → Length. The dialog window gives an
option to select tree clouds of interest. The tool calculates Euclidean distance between the
most distant points in the tree cloud and displays their connection. The cloud length is
displayed in meters at the bottom of the tree. This tool is suitable for the calculation of the real
length of inclined or lying trees.
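The cloud length is simply the largest pairwise Euclidean distance. A brute-force Python sketch (O(n²), fine for illustration, too slow for large clouds; the function name is ours):

```python
import itertools
import math

def cloud_length(points):
    """Longest Euclidean distance between any two points of the cloud,
    returned together with the point pair that realizes it."""
    best = (0.0, None, None)
    for p, q in itertools.combinations(points, 2):
        d = math.dist(p, q)
        if d > best[0]:
            best = (d, p, q)
    return best
```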
Figure 7: Planar projection of the same tree, convex (red), concave (blue).
Stem Curve
The stem diameters within 1 m intervals along the tree length can be obtained via Trees →
Stem Curvature. The stem diameters are computed as circles by the randomized Hough
transformation (see DBH by Randomized Hough Transformation) from 7 cm high slices of the
tree cloud, and they are displayed as 7 cm high cylinders defined by the RHT-fitted circles.
The number of RHT iterations is set by the user. The algorithm starts with the stem diameter
at 0.65 m above the ground, then at 1.3 m and 2 m above the ground. Then it continues
computing diameters with 1 m spacing until the new diameter is two times wider than the
previous two diameters, i.e. the calculation is terminated when branches forming the crown
get involved in the calculation. Because numerous circles are fitted to each tree, a higher
number of RHT iterations significantly increases the computational time; still, more
iterations are recommended for higher accuracy. The output of the stem curve calculation is
the x, y, z coordinates of the centers of the fitted circles and the diameters themselves.
See also Export Stem Curve in the chapter Data Export.
10. Crowns
The crown attributes (Fig. 8) can be evaluated if the tree position and tree height are
computed. Crown height, crown base height, crown width, crown centroid, and crown
position deviation are then computed automatically. These variables are recomputed
automatically if the tree position is changed.
Figure 8: A - The description of the crown attributes: crown height (CH), crown base height
(CBH), crown center (CC), crown total height (CTH), crown length (CL), crown width (CW).
B – The example of calculated parameters and the deviation of the projection of the crown
center (green) from the base of the trunk (blue). C – The concave planar projections and
volume/surface of the crown as calculated according to the concave polyhedron. D – The
convex planar projections and crown volume/surface calculated using a convex envelope. E –
The example of the visual output of the 3D Forest when calculating the shared space of crowns
(yellow) using 3D convex covers.
Crown intersections
The crown intersections evaluation is available via Crowns → Intersections. All existing
crowns are tested for intersections. The intersection of two crowns is computed using VTK
as a Boolean AND in 3D space. Only convex shapes can be evaluated; therefore, to create
the 3D objects, the volume and surface area by 3D convex hull have to be computed first.
The following variables of intersecting crowns are computed: horizontal angle (azimuth),
vertical angle (from the horizontal plane), and distance from the crown position to the
intersection center of gravity, plus the intersection volume and surface area. These
attributes are listed in the intersections table, called by its icon. Intersecting parts
can be shown/hidden by another icon; the lines connecting the crown positions with the
intersection center of gravity, as well as the angles, are also visible.
11. QSM Models and Tree assortments
The Quantitative Structure Models (QSM) menu offers methods to obtain information about
branches and stem (volume, length, branching order) and methods for tree analysis (other
quantitative parameters of trees). The QSM models may be employed if a tree position and
height are known.
Tree Reconstruction
Tree reconstruction is the first step of the QSM analysis.
The input parameters are the tree of interest, voxel size, and multiplicator that defines the
close neighbor, i.e. maximal distance of used voxels. The reconstruction starts by dividing the
whole tree into voxels. The points that are inside each voxel, as well as the points and
voxels that are close neighbors (voxel size * multiplicator), are recognized for each voxel.
The stem reconstruction starts with a neighbor search at the lowest voxel. All neighbors
connected in one piece into a homogeneous part are considered a segment. If the neighbors
create two or more groups of non-connected voxels, they are classified as possible branches;
the segment is closed at that place and the analysis continues up to the tree length.
Finally, the information about voxels and ordering, i.e. the parent and child succession of
segments, is known for each segment.
Once the whole tree is divided into the segments, the tree reconstruction itself follows. Firstly
all segments without children are said to be the ending segments. Parents of each of the
ending segments are followed to the lowest voxel and their lengths are computed. The stem is
the longest sequence of segments. These segments are merged into a single segment and all its
children are assigned. The reconstruction of branches of the first order follows with similar
semantics until it reaches the stem segment. After all first-order branches are reconstructed,
branches of second, third, etc. order follow until all segments are assigned to the tree.
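The choice of the stem as the longest sequence of segments can be sketched as follows (illustrative Python over a parent map; 3D Forest works on voxel segments, so the data layout and names here are assumptions):

```python
def stem_segments(parent, lengths):
    """Pick the stem as the longest sequence of segments: follow every
    ending segment (one with no children) down to the root and keep the
    chain with the greatest summed length.
    `parent` maps segment id -> parent id (None for the root);
    `lengths` maps segment id -> segment length."""
    # Ending segments never appear as anyone's parent.
    parents = set(parent.values())
    ending = [s for s in parent if s not in parents]
    best_chain, best_len = [], -1.0
    for leaf in ending:
        chain, total, s = [], 0.0, leaf
        while s is not None:       # walk down to the lowest segment
            chain.append(s)
            total += lengths[s]
            s = parent[s]
        if total > best_len:
            best_chain, best_len = chain[::-1], total
    return best_chain
```

The remaining segments would then be assigned branch orders by repeating the same walk from the leftover ending segments toward the stem.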
Figure 9: A – unprocessed tree point cloud, B – results of tree reconstruction: stem (red) and
branches (green to blue according to branching order), C – cylindrical model of a tree, D –
wood assortments.
QSM model
After the tree reconstruction, a parametric model of the tree can be evaluated. First of all,
the tree of interest has to be specified. Since the cylinder estimation is done by the
randomized Hough transformation (as for DBH), the number of iterations has to be set as
well. The size of the estimated cylinders defines the height of the cylinders used for the
reconstruction. The reconstruction can be further restricted to the branches up to a given
order, a given length, or a given diameter. The last option is the stem profile evaluation
checkbox; this result can differ from those given by the algorithms offered in the Trees
menu. The final results of the QSM reconstruction appear as connected cylinders for each
branch, with an attribute table containing information about the volume, length, and order
of each fitted cylinder.
Tree assortments
The final step is to estimate possible wood assortments (as the basis for timber value
estimates) according to Czech industry standards for wood assortments. The only input is
the trees of interest. The evaluated attributes of the tree are convergence, skewness,
diameter, length, and number of connected branches. The algorithm classifies the whole
tree represented by cylinders into quality classes; higher quality classes are prioritized.
Class  Skewness     Convergence  Min. diameter  Min. length  Branches up to
2.     up to 1 cm   2 cm/m       28 cm          3 m          3 cm
3.     –            3 cm/m       20 cm          2.5 m        4 cm
4.     –            6 cm/m       7 cm           2 m          –
5.     –            10 cm/m      7 cm           1 m          –
6.     –            –            –              –            –
12. Data Export
All data exported from the 3D Forest are re-transformed into their original coordinates
system.
Clouds Export
The cloud data are exported via Project → Export. There is an option to choose an output
format (.txt, .ply or .pcd). Then the cloud for export, the name, and the location of the new file
are selected.
Cloud Subtraction
The cloud subtraction is done via Other features → Cloud subtraction. The dialog window
offers the option to select the point clouds to be subtracted and set the name of the output
cloud. The tool subtracts the smaller cloud from the bigger cloud. Identical points of both
clouds are removed from the bigger cloud and the result is saved as a new cloud.
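A minimal sketch of the subtraction, assuming "identical" means equal coordinates after rounding (the guide does not state the matching tolerance, so the `decimals` parameter is an assumption):

```python
def subtract_clouds(cloud_a, cloud_b, decimals=3):
    """Remove from the bigger cloud every point that also occurs in the
    smaller cloud. Coordinates are rounded to `decimals` places so that
    'identical' tolerates sub-millimetre floating-point noise."""
    bigger, smaller = (
        (cloud_a, cloud_b) if len(cloud_a) >= len(cloud_b) else (cloud_b, cloud_a)
    )
    key = lambda p: tuple(round(c, decimals) for c in p)
    smaller_keys = {key(p) for p in smaller}
    return [p for p in bigger if key(p) not in smaller_keys]
```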
Voxelize Cloud
The reduction of the point cloud density is possible via Other features → Voxelize cloud. The
dialog window gives the option to select the cloud of interest, set the name of the voxelized
point cloud, and the size of the voxels in cm. The tool generates a voxelized point cloud from
the input by centroids of voxels that included at least one point of the original point cloud.
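The centroid-based voxel downsampling described above can be sketched as follows (illustrative Python; the function name is ours, and the voxel size is taken in the same units as the coordinates):

```python
from collections import defaultdict

def voxelize(points, voxel_size):
    """Downsample a cloud: every occupied voxel is replaced by the
    centroid of the points that fell into it."""
    bins = defaultdict(list)
    for p in points:
        # Integer voxel index along each axis.
        idx = tuple(int(c // voxel_size) for c in p)
        bins[idx].append(p)
    return [
        tuple(sum(c) / len(pts) for c in zip(*pts))
        for pts in bins.values()
    ]
```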
Change Background-color
The background color of the viewer can be changed via Other features → Change
background color. Then select a new color.
14. References
Chernov, N. and C. Lesort (2003). Least squares fitting of circles and lines. arXiv preprint cs/0301001.
McDonald, J. (2014). The Hough Transform-Explained and Extended. [online] cited 17.7.2014. Available
at: www.cis.rit.edu/class/simg782.old/talkHough/HoughLecCircles.html.
Rosén, E., E. Jansson and M. Brundin (2014). Implementation of a fast and efficient concave hull
algorithm. Project Report, Uppsala University. Available at:
http://www.it.uu.se/edu/course/homepage/projektTDB/ht13/project10/Project-10-report.pdf.
Rusu, R. B. and S. Cousins (2011). 3D is here: Point Cloud Library (PCL). 2011 IEEE International
Conference on Robotics and Automation (ICRA).
Xu, L. and E. Oja (1993). Randomized Hough transform (RHT): basic mechanisms, algorithms, and
computational complexities. CVGIP: Image understanding 57(2): 131-154.
Statistical outlier removal filter:
https://pcl.readthedocs.io/projects/tutorials/en/latest/statistical_outlier.html#statistical-outlier-
removal
Radius outlier removal filter:
https://pcl.readthedocs.io/projects/tutorials/en/latest/remove_outliers.html#remove-outliers