
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLVIII-2/W2-2022

Optical 3D Metrology (O3DM), 15–16 December 2022, Würzburg, Germany

3D EDGE DETECTION AND COMPARISON USING FOUR-CHANNEL IMAGES

Thodoris Betsas 1 *, Andreas Georgopoulos 1


1 Laboratory of Photogrammetry, School of Rural, Surveying and Geoinformatics Engineering, NTUA, Athens, Greece;
[email protected], [email protected]

Technical Commission II

KEY WORDS: 3D Edge Detection, Point Cloud Segmentation, 3D Edge Comparison, SfM, MVS

ABSTRACT:

Point cloud segmentation is a widespread field of research, useful in several research topics and applications such as 3D point cloud analysis, scene understanding and semantic segmentation. Architectural vector drawings constitute a valuable source for scientists and craftsmen, yet the production of such drawings is time-consuming because many of the creation steps are performed manually. Detecting 3D edges in point clouds could provide useful information for automating the creation of 3D architectural vector drawings. Hence, a 3D edge detection method is proposed and evaluated with a proof-of-concept experiment and a second experiment using professional software. The scope of this effort is twofold: firstly, the production of semantically enriched 3D dense point clouds exploiting four-channel images in order to detect 3D edges, and secondly, the comparison of the detected 3D edges with their corresponding edges in a textured 3D model. Comparing 3D edges at the early stage of 3D dense point cloud production and at the final stage of the 3D textured mesh provides useful conclusions about the data used for the automatic creation of 3D drawings. Both experiments, i.e., the proof-of-concept and the one using the professional SfM-MVS software, were conducted using real-world data of cultural heritage objects.

1. INTRODUCTION

Architectural vector drawings are the most commonly used products in several fieldwork cases, such as construction, conservation, restoration and documentation, as well as in many scientific fields, such as archaeology, architecture and surveying. Although creating architectural vector drawings, especially in 3D space, constitutes a labor-intensive process, the scientific community has not yet provided an automated approach for the production of such drawings. In fact, the automation of the creation of drawings is a complex task which would encapsulate a plethora of steps, such as edge detection at multiple scales, edge vectorization and topology checks of the vectorized edges, among others.

Nowadays, 2D-3D architectural drawings based on photogrammetry are created manually, especially in the cultural heritage domain, in which the objects of interest are commonly characterized by complex surfaces. Useful photogrammetric products for the production of architectural drawings are the orthophotographs, usually produced by orthoprojecting the textured 3D model created with the conventional photogrammetric pipeline. Afterwards, the orthophotomap is manually vectorized in a CAD environment to produce 2D vector drawings. The production of 3D vector drawings is conducted manually by vectorizing the 3D model or the 3D dense point cloud. In fact, the state-of-the-art process for creating both 2D and 3D vector drawings from photogrammetric products is time-consuming and laborious, and it also requires specialists from several scientific fields, e.g., surveyors and architects, throughout the entire process.

In fact, orthophotos and orthophotomaps are raster data which combine the image's visual information with the ability to perform measurements on them. Additionally, many of the steps required for the creation of orthophotos are performed automatically. Thus, 2D architectural vector drawings could easily be replaced by orthophotos and orthophotomaps, because the latter contain both the visual and the measurement information. However, most users of 2D architectural vector drawings still prefer traditional vector drawings over orthophotomaps, and thus an automation of the procedure producing them would be beneficial for the community.

In this effort, a 3D edge detection method is proposed, exploiting manually generated 2D edge semantic information. The main idea of this effort (Figure 1), which is the production of semantically enriched dense point clouds using four-channel images, was evaluated by implementing a simple experiment as proof of concept. More precisely, two scripts were developed for the experiment: the first checks the radiometry of the created four-channel images with respect to the original RGB images and the original edge semantic information, and the second uses the principles of epipolar geometry to produce a non-refined 3D semantically enriched point cloud. Moreover, another experiment, using professional SfM-MVS software in combination with four-channel images, was conducted to create an improved semantically enriched dense point cloud and to compensate for the weaknesses of the first experiment. Finally, the 3D edge points were detected by classifying the 3D points into edge and non-edge points with respect to their label value. Apart from the 3D edge detection, a 3D edge comparison between the detected 3D edge points and the corresponding 3D edges of a georeferenced textured 3D model was made, yielding several conclusions regarding the 3D position and the length of each edge.

The rest of the paper is structured as follows: Section 2 presents a brief review of the literature. Section 3 describes the proposed approach and Section 4 presents the conducted experiments. Section 5 comments on the results of the method. Finally, Section 6 draws some conclusions for the proposed approach and presents some ideas for future work.


Figure 1. The general idea

2. RELATED WORK

2.1 2D Edge Detection

Edge detection is a fundamental computer vision problem, especially in 2D space, and various traditional edge detection operators, such as Sobel (Sobel et al., 1968), Prewitt (Prewitt et al., 1970), Scharr (Kroon, 2009), Kirsch (Kirsch, 1971), Marr-Hildreth (Marr and Hildreth, 1980) and Canny (Canny, 1983, 1986), have been proposed in the literature. Moreover, a few methods propose deep learning architectures to detect 2D edges with better results than traditional approaches (Xie and Tu, 2015; Poma et al., 2020; Su et al., 2021). Segmentation techniques aim to cluster pixels or points into groups with similar characteristics (geometric, spectral) without taking semantic meaning into account (Xie et al., 2020). Image edge detection techniques inspire the development of edge-based 3D point cloud segmentation approaches (Xie et al., 2020).

Several edge-based segmentation methods are presented in the literature (Wani and Arabnia, 2003; Senthilkumaran and Rajesh, 2009; Xie et al., 2020). Although there is a plethora of automatic image edge detection methods available, in this effort the edge semantic information is created manually, for the accuracy reasons analyzed in Sections 3 and 5. Besides, the scope of this effort is to develop a 3D edge detection method exploiting the 2D edge semantic information, not to identify the best automatic 2D edge detection method.
2.2 3D Edge Detection

Methods presented in the literature detect 3D edges using several techniques, e.g., model fitting, normal vectors, analytical geometry etc. To be more specific, Nguatem et al. (2014) use predefined templates of windows and doors in order to detect their 3D boundaries exploiting plane intersection. Mitropoulou and Georgopoulos (2019) firstly segment 3D point clouds into planes and then detect the 3D edge points by applying plane intersection. Moreover, Bazazian et al. (2015) firstly find the k-nearest neighbors of each 3D point. For each group of 3D points the covariance matrix is calculated. Finally, the eigenvalues and eigenvectors of each matrix are examined to detect the sharp 3D edges by deciding whether a point lies on a plane or on an edge. That approach was evaluated on both synthetic and real-world data. Lu et al. (2019) also exploit the eigenvalues and eigenvectors of the covariance matrix of each point's neighborhood, but the neighborhood is defined using an iterative region growing and merging approach instead of the kNN algorithm. To be more specific, the 3D point cloud is segmented into planes using the region growing and merging method. Afterwards, the 3D points of each fitted plane are projected onto it to create images. Finally, a 2D contour detection algorithm is applied and the detected 2D contours are projected back into 3D space. Additionally, Dolapsaki and Georgopoulos (2021) proposed a 3D edge detection method which exploits the relationships of analytical geometry and the properties of planes in combination with digital images. More concretely, the desired edge is firstly detected in a digital image of known exterior orientation; then the plane on which both the 2D edge and the perspective center of the image lie is defined. Finally, the desired 3D edge points inevitably lie on the same plane and are thus detected.
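As an illustration of the covariance-based tests used by Bazazian et al. (2015) and Lu et al. (2019), the following minimal Python sketch flags points whose local neighborhood is far from planar; the neighborhood size k and the threshold value are illustrative assumptions, not settings from the cited papers.

```python
import numpy as np
from scipy.spatial import cKDTree

def edge_point_mask(points, k=30, sigma_thresh=0.05):
    """Flag points whose k-neighborhood deviates from a plane."""
    tree = cKDTree(points)                      # points: N x 3 array
    _, idx = tree.query(points, k=k)            # k nearest neighbors per point
    mask = np.zeros(len(points), dtype=bool)
    for i, neigh in enumerate(idx):
        cov = np.cov(points[neigh].T)           # 3x3 neighborhood covariance
        lam = np.linalg.eigvalsh(cov)           # eigenvalues, ascending
        # Surface variation: ~0 on a plane, larger on sharp edges/corners.
        mask[i] = lam[0] / lam.sum() > sigma_thresh
    return mask
```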


Detecting edges in 3D space can also be performed using 2.5D data, e.g., range images and depth images. Bao et al. (2015) proposed an approach which firstly creates range images from a given point cloud, then applies the Canny operator on them and finally projects the 2D edges into 3D space. Alshawabkeh (2020) created structured depth images from a LiDAR point cloud and combined them with RGB images to construct RGBD images. The goal of that approach was to detect 3D cracks on the Treasury Monument of ancient Petra in Jordan. This was achieved by detecting 2D linear features on the RGBD images and projecting them into 3D space.

3. METHODOLOGY

Digital images are commonly used for the documentation of monuments and, in general, of cultural heritage objects. The state-of-the-art photogrammetric pipeline includes image alignment, depth map generation, dense point cloud production, 3D model generation and texturing. Additionally, several steps which clean the point clouds from noise and correct the surface of the 3D model are implemented during a post-processing procedure. The final product, a textured 3D model, is useful in many applications such as digital museums, 3D documentation etc.

In this effort, the available RGB images are enriched with a fourth channel, additional to the RGB ones, in which edge semantic information is stored. The edge semantic information is produced manually by annotating the given images in a drawing environment. The four-channel images are passed into a 3D point cloud production algorithm to create a semantically enriched point cloud. Finally, the 3D edges are detected by identifying the points with a specific label value. The previously described steps are displayed in Figure 2. The scope of this effort is to contribute to the automation of the production of 2D-3D architectural vector drawings, not simply to detect 3D edges. Thus, a 3D edge comparison between edges in point clouds and 3D meshes is conducted. Besides, the automatic creation of architectural vector drawings is still an open issue. Each step of the proposed approach is thoroughly reviewed in the following subsections.

Figure 2. The proposed method

3.1 Labels Creation and Image Enrichment

First and foremost, the edge semantic information must be generated to enrich the source data, i.e., the digital images, with an extra channel. In this paper, the edges are defined manually by annotating them on the digital images using red color. Figure 3a depicts six out of the seventy-six images used during this investigation. The labels must be one-channel images to enrich the RGB images. Thus, the manually annotated RGB images were transformed into binary edge maps by identifying the red pixels on them. A part of the created edge maps is displayed in Figure 3b. Afterwards, each RGB image was enriched with the corresponding binary edge map using the Python version of the OpenCV (OpenCV, 2022) library, specifically the "merge" method, and was finally stored as a pseudo ".tiff" image.

Figure 3. Manually annotated RGB images (a) and reversed binary edge maps (b)
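The enrichment step described above can be sketched as follows with OpenCV. This is a minimal sketch, not the authors' released code: the red-pixel rule and the file names are illustrative assumptions, since the paper only specifies that the red annotations become a binary map which is merged as a fourth channel and stored as a pseudo ".tiff" image.

```python
import cv2
import numpy as np

# Identify the manually drawn red annotations (assumed rule: strong red,
# weak green/blue) and turn them into a one-channel binary edge map.
annotated = cv2.imread("annotated.jpg")            # OpenCV loads as B, G, R
b, g, r = cv2.split(annotated)
edge_map = np.where((r > 200) & (g < 60) & (b < 60), 255, 0).astype(np.uint8)
# Figure 3b shows the maps reversed (edges dark), presumably for display only.

# Merge the label channel with the untouched RGB image and store the
# four-channel RGBL result as a pseudo ".tiff" image (Section 3.1).
original = cv2.imread("original.jpg")
rgbl = cv2.merge((*cv2.split(original), edge_map))
cv2.imwrite("original_rgbl.tiff", rgbl)
```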

3.2 3D Point Cloud Creation

Creating semantically enriched point clouds is useful for many applications, for instance to improve the SfM-MVS procedure (Stathopoulou et al., 2021). In this paper the semantically enriched point cloud is produced by exploiting the four-channel images built from the RGB images and their edge maps, as described in Section 3.1. Two methods were developed for the production of the semantically enriched point cloud: "Triangulation" and "Agisoft-Metashape" (Agisoft-Metashape, 2016).

The "Triangulation" algorithm uses the Python version of the OpenCV library for scene reconstruction. To be more specific, it simplifies the complex steps of an SfM-MVS algorithm into the standard two-image epipolar geometry problem. The major steps of the algorithm, also sketched below, are: (i) extract image features using one of the available algorithms, i.e., AKAZE (Alcantarilla and Solutions, 2011), SIFT (Lowe, 2004), SURF (Bay et al., 2006) or ORB (Rublee et al., 2011); (ii) choose the image pairs and match each of them using a FLANN-based matcher (Muja and Lowe, 2009) — image pair selection depends on the number of given images, for instance five images (1, ..., 5) produce ten image pairs, i.e., (1-2, 1-3, 1-4, 1-5, 2-3, 2-4, 2-5, 3-4, 3-5 and 4-5); (iii) calculate the essential matrix from the detected points and the camera matrix, or the fundamental matrix if the camera matrix is not available — the camera matrix is calculated using the EXIF data of each image, from which the focal length and the principal point are retrieved, and if the principal point is not available it is defined as the center of the image; (iv) calculate the rotation and translation of one image of the pair with respect to the other; and (v) calculate the projection matrices and generate a semantically enriched point cloud for each image pair, i.e., ten different point clouds in the five-image example.
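A condensed sketch of steps (i)-(v) for a single image pair is given below, assuming SIFT features and a camera matrix built from the EXIF focal length; the function names, the assumed sensor-width parameter and the Lowe ratio value are illustrative, not the authors' exact implementation.

```python
import cv2
import numpy as np

def camera_matrix(focal_mm, sensor_mm, width, height):
    # Step (iii) helper: focal length in pixels from the EXIF focal length
    # and an assumed sensor width; principal point defaults to the centre.
    f = focal_mm * width / sensor_mm
    return np.array([[f, 0, width / 2.0],
                     [0, f, height / 2.0],
                     [0, 0, 1.0]])

def reconstruct_pair(img1, img2, K):
    sift = cv2.SIFT_create()                                   # step (i)
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matches = [m for m, n in cv2.FlannBasedMatcher().knnMatch(des1, des2, k=2)
               if m.distance < 0.7 * n.distance]               # step (ii)
    p1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    p2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, _ = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC)  # step (iii)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K)                 # step (iv)
    P1 = K @ np.hstack((np.eye(3), np.zeros((3, 1))))          # step (v)
    P2 = K @ np.hstack((R, t))
    X = cv2.triangulatePoints(P1, P2, p1.T, p2.T)              # 4xN homogeneous
    # The label channel of the RGBL images would be sampled at p1 to attach
    # the edge semantic value to each 3D point (omitted here for brevity).
    return (X[:3] / X[3]).T                                    # Nx3 point cloud
```

For N images, the ten pairs of the five-image example would be generated with itertools.combinations(range(N), 2) and the function applied to each pair.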
Producing point clouds using a simple triangulation process leads to several problems. A thorough review of the quality of the point clouds produced with the "Triangulation" algorithm is presented in Section 5. The most important problem is the production of a non-refined point cloud, since the bundle adjustment step is omitted. Additionally, the "Triangulation" algorithm is computationally inefficient. Thus, professional SfM-MVS software, e.g., Agisoft-Metashape, was combined with the four-channel images for the production of a semantically enriched dense point cloud. Agisoft-Metashape can create semantically enriched point clouds by default, since it can process multispectral images. Additionally, Agisoft-Metashape is one of the leading photogrammetric software packages, so it guarantees that the semantically enriched point cloud is produced using state-of-the-art techniques. The semantically enriched dense point clouds produced using the "Triangulation" algorithm as well as the Agisoft-Metashape software are depicted in Section 4, in which both experiments are presented.


3.3 3D Edge Points Classification

During this effort, 3D edge detection is conducted using a binary classification procedure. Producing semantically enriched dense point clouds means that, apart from the geometric and color information, there is also semantic information in the saved file. In this effort, the edge semantic information is written into the final file, which is stored in various formats such as ".ply", ".txt" and ".pts", after the color values. Each 3D point is associated with a label value ranging from 0 to 255, where 0 and 255 represent non-edge and edge points, respectively. Finally, 3D edge points are detected by separating the 3D points with a label value greater than an empirical threshold. In fact, the edge maps used as the fourth channel of the RGBL images contain only the values 0 and 255, but the produced ASCII files of the semantically enriched point clouds contain label values ranging from 0 to 255. On the one hand, for the "Triangulation" algorithm the reason for this range is probably the annotation process, during which the radiometry of the points close to the annotated edges may be affected. On the other hand, Agisoft-Metashape may produce such labels, i.e., in the range between 0 and 255, because the images' dimensions change during the process in order to robustly handle a large number of images. Also, the manual image annotation process affects the implementation using Agisoft-Metashape just as it does the "Triangulation" algorithm. Thus, we define an empirical threshold value and classify the points above the threshold as edge points and all the rest as non-edge points.
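A minimal sketch of this classification is shown below, assuming the point cloud was exported as an ASCII file with seven values per row (X, Y, Z, R, G, B, label) as described in Section 4.1; the threshold of 100 anticipates the empirical value selected there, and the file names are illustrative.

```python
import numpy as np

cloud = np.loadtxt("enriched_cloud.txt")      # N x 7 array: X Y Z R G B label
labels = cloud[:, 6]                          # fourth-channel label, 0..255
THRESHOLD = 100                               # empirical value, see Section 4.1

edge_points = cloud[labels > THRESHOLD]       # classified as edge points
non_edge_points = cloud[labels <= THRESHOLD]  # all the rest
np.savetxt("edge_points.txt", edge_points)
```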
3.4 3D Edge Comparison

Several products are available in 3D space in which 3D edges can be detected, such as sparse point clouds, dense point clouds, 3D models and textured 3D models, all of which can be georeferenced. In this effort, a comparison between 3D edges detected in a georeferenced dense point cloud and on a georeferenced textured 3D model is conducted. The sparse point cloud was excluded from the comparison, since a dense representation of the edges is desired, i.e., the edges should contain as many 3D points as possible. Apart from that, the textureless 3D model was also omitted, since texture is a fundamental characteristic for visually detecting edges on 3D models. Both the 3D dense point cloud and the textured 3D model were georeferenced using the same control and test points, in order to make the validation process accurate. The textured 3D model was generated using a FARO 330X terrestrial laser scanner for the creation of the object geometry, in combination with digital images, aligned using the Agisoft-Metashape Pro software v1.4, for the creation of the object texture. The flowchart of the method presented in this paper is depicted in Figure 4.

Figure 4. Flowchart of the proposed approach
4. RESULTS

One set of data used during this effort consists of the RGB images and the 3D model produced during two postgraduate theses (Giannakoula, 2018; Stefanou, 2018) documenting the Temple of Demeter in Naxos. The documentation was conducted using professional digital cameras and a FARO laser scanner, from which the textured 3D model was created. The detection of 3D edges with the proposed approach exploits the available RGB images, while the comparison between the detected 3D edges exploits the generated textured 3D model. The second set of data, two aerial RGB images, was captured in 2019 during a summer field course at ancient Kymissala in Rhodes, organized by the Laboratory of Photogrammetry of SRSGE NTUA. The two RGB images were used in order to check the performance of the "Triangulation" algorithm using aerial instead of terrestrial images, because the performance of the algorithm using the latter was not satisfying. A sample of images from both datasets is depicted in Figure 5.

Figure 5. RGB images from the Temple of Demeter (first row) and the summer field course (second row).

4.1 Semantically Enriched Point Cloud Creation

In this section, the results from both the "Triangulation" and the "Agisoft-Metashape" methods are presented. Figure 6 presents the semantically enriched point clouds produced by the "Triangulation" algorithm. The first row depicts two 3D point clouds created using the two aerial images; the difference between them is that the first one was created using the AKAZE and the second one using the SIFT feature extraction method. Additionally, the second row presents the semantically enriched 3D point cloud created using two terrestrial images of the Temple of Demeter dataset.

The semantic information is passed into the 3D point cloud as an extra value in addition to the position and color values. Thus, the ASCII file contains seven values instead of six (Figure 7).


Figure 6. Semantically enriched 3D point clouds created using the Triangulation algorithm. The first row presents the point clouds created exploiting the aerial images, while the second row presents those exploiting the terrestrial images.

Figure 7. ASCII file produced using the Triangulation algorithm

It is obvious that the 3D semantically enriched point clouds produced by the "Triangulation" algorithm were not of sufficient quality, especially those using the terrestrial images. Besides, the scope of the "Triangulation" algorithm was to evaluate the general idea under the simplest conditions. Thus, the Agisoft-Metashape photogrammetric software was exploited in combination with the constructed four-channel images. In this experiment, seventy-six RGBL images were used. Figure 8 depicts the semantically enriched 3D dense point cloud generated using the Agisoft-Metashape software. The first row depicts the same view with different colors, i.e., RGB and scalar; the 3D edges can be observed using the scalar visualization. The second row depicts the same as the first one, but from a closer distance.

Figure 8. Semantically enriched dense 3D point cloud created using the Agisoft-Metashape software.

The edge semantic information is passed into the 3D point cloud in a way similar to the "Triangulation" algorithm. Figure 9 depicts the ASCII file produced using the Agisoft-Metashape software.

Figure 9. ASCII file produced using the Agisoft-Metashape software

Finally, 3D edge detection is performed by classifying the 3D points into edge and non-edge according to their label value. Observing the ASCII files of the semantically enriched 3D point clouds, the label value ranges from 0 to 255, for the reasons described in Section 3.3. Figure 10 presents an experiment conducted to specify the empirical threshold value described in Section 3.3. The classification is performed using six different threshold values, i.e., greater than 0, 50, 80, 100, 230 and equal to 255, to examine the detected 3D edge points under each condition. As the threshold value increases, the detected 3D edge points lie closer to the desired 3D edges. A threshold value around 100 visually seems to be the best choice, because above that value many visually correct 3D points are excluded, resulting in shorter edges (Figure 10).

Figure 10. Label values threshold experiment
4.2 3D Edge Comparison

The comparison of the detected 3D edges was performed regarding (i) the 3D edge start point and end point and (ii) the 3D distance. To be more specific, there are four 3D edges (two horizontal and two vertical): two of them are detected from the 3D model (one horizontal and one vertical) and the other two using the proposed approach (one horizontal and one vertical). The start and end points were clearly defined in all point clouds. The horizontal and vertical edges which were collected from the 3D model are the same, in 3D space, as those which were collected by the proposed approach. Thus, the comparison between them can be conducted. Additionally, the accuracy of the 3D model and the 3D dense point cloud is estimated at approx. 0.008 m.

4.2.1 3D start point and 3D end point: A series of measurements were performed using the CloudCompare (Cloud-Compare, 2003) software.


To be more precise, the textured 3D model was inserted into CloudCompare and then the 3D start point and end point of each edge were picked multiple times using the "point picking" tool. In fact, we obtained several "observations" of the start and end point of each edge from the 3D model. The same steps were applied to the 3D edges detected by the proposed approach. Then, the average value of each set of observations was calculated, for the 3D model and for the 3D edge detection approach, for each edge, and the averages were subtracted from each other along the X, Y and Z dimensions (Table 1); the sketch below illustrates the computation.
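The computation can be sketched as follows; the observation arrays stand in for the coordinates picked repeatedly in CloudCompare and the function names are illustrative. The same helper also covers the length comparison of Section 4.2.2 below.

```python
import numpy as np

def mean_point(observations):
    """Average several picked observations (rows of X, Y, Z) of one point."""
    return np.asarray(observations, dtype=float).mean(axis=0)

def compare_edge(model_start_obs, model_end_obs, det_start_obs, det_end_obs):
    s_m, e_m = mean_point(model_start_obs), mean_point(model_end_obs)
    s_d, e_d = mean_point(det_start_obs), mean_point(det_end_obs)
    diff_start = np.abs(s_m - s_d)     # per-axis absolute differences (Table 1)
    diff_end = np.abs(e_m - e_d)
    # Average 3D Euclidean lengths and their absolute difference (Table 2).
    length_diff = abs(np.linalg.norm(e_m - s_m) - np.linalg.norm(e_d - s_d))
    return diff_start, diff_end, length_diff
```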
Horizontal Edge

3D Model               X (m)       Y (m)       Z (m)
Start Point            1012.755    1011.590    106.326
End Point              1010.670    1012.950    106.337

3D Edge Detection      X (m)       Y (m)       Z (m)
Start Point            1012.746    1011.574    106.33
End Point              1010.662    1012.933    106.332

Absolute Differences (m)
Start Point            0.008       0.016       0.004
End Point              0.008       0.017       0.006

Vertical Edge

3D Model               X (m)       Y (m)       Z (m)
Start Point            1011.492    1016.868    102.937
End Point              1011.490    1016.860    106.063

3D Edge Detection      X (m)       Y (m)       Z (m)
Start Point            1011.499    1016.849    102.914
End Point              1011.488    1016.851    106.054

Absolute Differences (m)
Start Point            0.007       0.019       0.023
End Point              0.002       0.010       0.009

Table 1. Start point and end point comparison

4.2.2 Length: Apart from the 3D coordinates of the start and end points, multiple observations of the length of each edge were collected. Then, the average length, and the difference between the average length obtained from the 3D model and from the 3D edge detection approach, were calculated. Each average length and the absolute difference between them are presented in Table 2.

                 3D Model    3D Edge Detection
Horizontal         3D Euclidean Distance (m)
Average Value    2.500       2.484
Difference             0.016

                 3D Model    3D Edge Detection
Vertical           3D Euclidean Distance (m)
Average Value    3.147       3.154
Difference             0.007

Table 2. Length of each edge as the average 3D Euclidean distance between a set of start and end points.

5. DISCUSSION OF RESULTS

The presented results are visually evaluated with respect to each point cloud creation method. More concretely, the "Triangulation" algorithm produces a non-refined 3D point cloud for each image pair, which obviously is computationally expensive. Additionally, it struggles to produce point clouds using terrestrial images, as depicted in Figure 6. However, the scope was to evaluate the general idea and to investigate it using a simple experiment, which was achieved. On the other hand, the proposed method, combined with a professional photogrammetric pipeline, i.e., Agisoft-Metashape, can thus produce a semantically enriched 3D dense point cloud of state-of-the-art quality. In this effort the 3D point clouds were not post-processed to achieve the best results. Additionally, the creation procedure of the four-channel images associates the pixels between the RGB channels and the label channel correctly, since the detected 3D edge points are in the correct 3D position.

The 3D edge comparison approach was performed taking into account a hypothetical 3D vectorization procedure. To be more specific, we assume that we want to vectorize the desired edge in 3D using either the detected 3D edge or the 3D model. Hence, multiple observations using both data sources were conducted for each edge. The absolute differences for each case are presented in Table 1. No safe conclusion can be derived about the accuracy of the proposed approach due to the limited experiments. However, the absolute differences are within the generally desired range, i.e., no extreme differences are observed and they are close to the accuracy of the source data, i.e., 0.008 m.

6. CONCLUSIONS

In this paper, a first implementation of a 3D edge detection method which exploits four-channel images is presented. The proposed approach was evaluated on real-world data of cultural heritage assets. It has several drawbacks which could be a starting point for future work and research. To be more specific, automated 2D edge detection algorithms, both traditional and learning-based, could be included in the proposed approach. Additionally, open-source SfM-MVS photogrammetric pipelines like VisualSfM and CMVS-PMVS (Visual-SfM, 2022), Meshroom (Meshroom-AliceVision, 2022) and OpenSfM (Mapillary-OpenSfM, 2022) could be tested with four-channel images for the production of semantically enriched point clouds, in order for the proposed approach to be available to anyone. Of course, a 3D vectorization procedure should be integrated into the proposed method to automate the production of 3D architectural vector drawings. Additionally, a comparison using a large number of 3D edges should be conducted to evaluate the proposed approach properly. However, the proposed approach is a simple yet efficient pipeline, because it exploits the SfM-MVS advantages, e.g., bundle adjustment, to detect 3D edges with the best possible accuracy, depending on the 2D edge semantic information. Moreover, the proposed method does not introduce additional errors, as it does not involve mathematical calculations beyond the standard SfM-MVS ones.
References

Agisoft-Metashape, 2016. Discover intelligent photogrammetry with Metashape. http://www.agisoft.com/. Accessed: 2022-10-26.


Alcantarilla, P. F., Solutions, T., 2011. Fast explicit diffusion for accelerated features in nonlinear scale spaces. IEEE Trans. Patt. Anal. Mach. Intell., 34(7), 1281–1298.

Alshawabkeh, Y., 2020. Linear feature extraction from point cloud using color information. Heritage Science, 8(1), 28. https://heritagesciencejournal.springeropen.com/articles/10.1186/s40494-020-00371-6.

Bao, T., Zhao, J., Xu, M., 2015. Step edge detection method for 3D point clouds based on 2D range images. Optik, 126(20), 2706–2710. https://linkinghub.elsevier.com/retrieve/pii/S0030402615005586.

Bay, H., Tuytelaars, T., Van Gool, L., 2006. SURF: Speeded Up Robust Features. European Conference on Computer Vision, Springer, 404–417.

Bazazian, D., Casas, J. R., Ruiz-Hidalgo, J., 2015. Fast and robust edge extraction in unorganized point clouds. 2015 International Conference on Digital Image Computing: Techniques and Applications (DICTA), IEEE, 1–8.

Canny, J. F., 1983. Finding edges and lines in images. Technical report, Massachusetts Institute of Technology, Artificial Intelligence Laboratory.

Canny, J. F., 1986. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 679–698.

Cloud-Compare, 2003. 3D point cloud and mesh processing software. https://www.danielgm.net/cc/. Accessed: 2022-10-26.

Dolapsaki, M. M., Georgopoulos, A., 2021. Edge Detection in 3D Point Clouds Using Digital Images. ISPRS International Journal of Geo-Information, 10(4), 229. Publisher: MDPI.

Giannakoula, X., 2018. Geometrical documentation of monuments using modern technologies, an application on the Temple of Demeter in Naxos. (In Greek).

Kirsch, R. A., 1971. Computer determination of the constituent structure of biological images. Computers and Biomedical Research, 4(3), 315–328.

Kroon, D., 2009. Numerical optimization of kernel based image derivatives. Short Paper, University of Twente, 3.

Lowe, D. G., 2004. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2), 91–110.

Lu, X., Liu, Y., Li, K., 2019. Fast 3D line segment detection from unorganized point cloud.

Mapillary-OpenSfM, 2022. An open-source Structure from Motion library that lets you build 3D models from images. https://opensfm.org/. Accessed: 2022-10-04.

Marr, D., Hildreth, E., 1980. Theory of edge detection. Proceedings of the Royal Society of London. Series B. Biological Sciences, 207(1167), 187–217.

Meshroom-AliceVision, 2022. Photogrammetric Computer Vision Framework. https://alicevision.org/. Accessed: 2022-10-26.

Mitropoulou, A., Georgopoulos, A., 2019. An automated process to detect edges in unorganized point clouds. 4.

Muja, M., Lowe, D., 2009. FLANN - Fast Library for Approximate Nearest Neighbors user manual. Computer Science Department, University of British Columbia, Vancouver, BC, Canada, 5.

Nguatem, W., Drauschke, M., Mayer, H., 2014. Localization of windows and doors in 3D point clouds of facades. II-3, 87–94. https://www.isprs-ann-photogramm-remote-sens-spatial-inf-sci.net/II-3/87/2014/.

OpenCV, 2022. Open Source Computer Vision Library. https://opencv.org/. Accessed: 2022-10-04.

Poma, X. S., Riba, E., Sappa, A., 2020. Dense extreme inception network: Towards a robust CNN model for edge detection. Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 1923–1932.

Prewitt, J. M. et al., 1970. Object enhancement and extraction. Picture Processing and Psychopictorics, 10(1), 15–19.

Rublee, E., Rabaud, V., Konolige, K., Bradski, G., 2011. ORB: An efficient alternative to SIFT or SURF. 2011 International Conference on Computer Vision, IEEE, 2564–2571.

Senthilkumaran, N., Rajesh, R., 2009. Image segmentation - a survey of soft computing approaches. 2009 International Conference on Advances in Recent Technologies in Communication and Computing, IEEE, 844–846.

Sobel, I., Feldman, G. et al., 1968. A 3x3 isotropic gradient operator for image processing. A talk at the Stanford Artificial Project, 271–272.

Stathopoulou, E. K., Battisti, R., Cernea, D., Remondino, F., Georgopoulos, A., 2021. Semantically derived geometric constraints for MVS reconstruction of textureless areas. Remote Sensing, 13(6), 1053. Publisher: MDPI.

Stefanou, A. B., 2018. The contribution of new technologies to archeology: the case of the classic church of Demeter in Sagri Naxos. (In Greek).

Su, Z., Liu, W., Yu, Z., Hu, D., Liao, Q., Tian, Q., Pietikäinen, M., Liu, L., 2021. Pixel difference networks for efficient edge detection. Proceedings of the IEEE/CVF International Conference on Computer Vision, 5117–5127.

Visual-SfM, 2022. VisualSFM: A Visual Structure from Motion System. http://ccwu.me/vsfm/index.html/. Accessed: 2022-10-26.

Wani, M. A., Arabnia, H. R., 2003. Parallel edge-region-based segmentation algorithm targeted at reconfigurable multiring network. The Journal of Supercomputing, 25(1), 43–62.

Xie, S., Tu, Z., 2015. Holistically-nested edge detection. 9.

Xie, Y., Tian, J., Zhu, X. X., 2020. Linking points with labels in 3D: A review of point cloud semantic segmentation. IEEE Geoscience and Remote Sensing Magazine, 8(4), 38–59.

