Semantic Export Module for Close Range Photogrammetry

Mohamed Ben Ellefi and Pierre Drap
Aix Marseille University, CNRS, Université De Toulon, LIS UMR 7020, 13397
Marseille, France
{mohamed.ben-ellefi, pierre.drap}@univ-amu.fr
1 Introduction
With the progress of 3D technologies, photogrammetry has become the adopted solution for representing science-driven data, turning photographs of everything from small finds to entire landscapes into accurate 3D models.
This paper proposes a module that explicitly couples the photogrammetry process to a semantic knowledge base modeled by our photogrammetry-oriented ontology, Arpenteur1. The coupling takes the form of an export module for Agisoft2 that transforms the spatial 3D data into a knowledge base modeled by the Arpenteur ontology. This export is particularly useful in the pipeline of our photogrammetry-driven toolbox Arpenteur3 for semantic data-lifting: from image gathering to 3D/VR modeling coupled with knowledge representation in the Arpenteur ontology.
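To make the idea concrete, the following is a minimal sketch of what such an export could look like, assuming the Agisoft Metashape Python API and the rdflib library; apart from hasTranslation and hasRotationMatrix (which appear later in Listing 1.1), the Arpenteur class and property names used here are illustrative placeholders, and the actual module may structure the graph differently (for instance through dedicated transformation resources).

import Metashape                                    # Agisoft Python API (run inside Metashape)
from rdflib import Graph, Literal, Namespace, RDF

ARP = Namespace("http://www.arpenteur.org/ontology/Arpenteur.owl#")
g = Graph()
g.bind("arp", ARP)

chunk = Metashape.app.document.chunk                # active chunk of the open project
for camera in chunk.cameras:                        # an Agisoft Camera corresponds to a Photograph
    if camera.transform is None:                    # skip images that were not aligned
        continue
    photo = ARP[camera.label.replace(" ", "_")]
    g.add((photo, RDF.type, ARP.Photograph))
    # exterior orientation of the shot (position and rotation)
    g.add((photo, ARP.hasTranslation, Literal(str(camera.center))))
    g.add((photo, ARP.hasRotationMatrix, Literal(str(camera.transform.rotation()))))
    # interior orientation comes from the Agisoft Sensor (Arpenteur's Camera)
    g.add((photo, ARP.hasFocalLength, Literal(camera.sensor.calibration.f)))  # illustrative property

g.serialize(destination="arpenteur_export.owl", format="xml")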
The module is based on Semantic Web technologies, where ontologies provide the theoretical and axiomatic basis of the underlying knowledge bases. In this context, different approaches have been proposed to enable the semantic representation and modeling of synthetic 3D content; a state-of-the-art review is given in [7].
The paper is organized as follows: Section 2 presents our solution for mapping the Agisoft Python API to the Arpenteur ontology concepts, detailing the adopted mapping.
1 http://arpenteur.org/ontology/Arpenteur.owl
2 https://www.agisoft.com/
3 http://www.arpenteur.org/
The mapping between the two software packages is limited to the generic concept of the photogrammetric model as defined by Kraus [8]: photographs, camera, internal and external orientation, 3D points and their observations on the photographs. For example, feature descriptors and the dense cloud are not covered by this mapping.
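As a schematic illustration only (the type and field names below are ours, not taken from the Arpenteur ontology or the Agisoft API), the mapped entities can be summarized as follows:

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class InteriorOrientation:          # camera constant, principal point, distortion
    focal_length: float
    principal_point: Tuple[float, float]
    distortion: Tuple[float, ...]

@dataclass
class ExteriorOrientation:          # position and rotation of one photograph
    translation: Tuple[float, float, float]
    rotation: Tuple[Tuple[float, float, float], ...]

@dataclass
class Observation:                  # 2D measurement of a 3D point on a photograph
    photo_id: str
    xy: Tuple[float, float]

@dataclass
class Point3D:                      # reconstructed 3D point and its observations
    xyz: Tuple[float, float, float]
    observations: List[Observation]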
The two photogrammetry tools manipulate similar concepts, but the translation of digital data from one to the other naturally requires some adjustments. For example, the concept of Photograph in Arpenteur is similar to Camera in Agisoft.
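In the export script this correspondence could be written down explicitly, for example as a small lookup table; the Arpenteur terms on the right reflect the vocabulary described in this paragraph and the next, while the dictionary itself is only an illustrative aid.

# Correspondence between Agisoft API concepts and Arpenteur concepts
# (illustrative only; the Photograph/Camera/Sensor part is the one described in the text)
AGISOFT_TO_ARPENTEUR = {
    "Camera": "arp:Photograph",   # one aligned shot in Agisoft is a Photograph in Arpenteur
    "Sensor": "arp:Camera",       # the device that produced the Photograph
}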
In Arpenteur, a Photograph is the image produced by a camera (film-based or digital), and the Camera is the object that produces the Photograph. This Camera is translated in Agisoft by the concept of Sensor. It should be noted that the Sensor concept in Agisoft is more complex and is not fully used here; for example, it supports the notion of Plane, which refers to a multi-sensor camera rig approach, a feature not used in Arpenteur. The Arpenteur Camera
Listing 1.1. An example of a SPARQL query to retrieve the position (x,y,z) and the orientation matrix of the "John Stills CC-309" photograph, to be performed on the Xlendi 2018-09-21 dataset.
PREFIX arp: <http://www.arpenteur.org/ontology/Arpenteur.owl#>
?transformation arp:hasTranslation ?translation ;
                arp:hasRotationMatrix ?matrix .
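Assuming the exported dataset is served through the Apache Jena Fuseki endpoint mentioned in the footnotes below, such a query could be issued from Python with SPARQLWrapper; the endpoint URL here is hypothetical, and the triple pattern that binds ?transformation to the specific photograph is omitted because it is not shown in the excerpt above.

from SPARQLWrapper import SPARQLWrapper, JSON

# hypothetical Fuseki endpoint serving the Xlendi 2018-09-21 dataset
endpoint = SPARQLWrapper("http://localhost:3030/xlendi/sparql")
endpoint.setQuery("""
    PREFIX arp: <http://www.arpenteur.org/ontology/Arpenteur.owl#>
    SELECT ?translation ?matrix WHERE {
        ?transformation arp:hasTranslation ?translation ;
                        arp:hasRotationMatrix ?matrix .
    }
""")
endpoint.setReturnFormat(JSON)
for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["translation"]["value"], row["matrix"]["value"])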
5 https://github.com/benellefi/ExportAgisoftOWL
6 https://jena.apache.org/documentation/tdb/index.html
7 https://jena.apache.org/documentation/fuseki2/
8 http://www.lsis.org/groplan/survey/20180921/20180921_John_Stills_CC-309.jpg
9 http://www.arpenteur.org/ontology/temporal/20180921.html
4 Conclusions
In this paper, we introduced a module (a Python script) for the automatic export of the photogrammetry description into a knowledge base modeled by the Arpenteur ontology. The module extends the Agisoft software used in our photogrammetry process. The automatic export is handled by mapping the Arpenteur ontology to the tool's API. A real-data scenario, the Xlendi wreck, was presented, in which the photogrammetry process was automatically exported by this new module.
In parallel with the photogrammetry description, we are currently working on an ontology-based virtual reality representation that will provide a panoramic view (2D/3D/VR) of the data coupled with the semantic knowledge.
References
1. Alvarez, L., Gómez, L., Sendra, J.R.: An algebraic approach to lens distortion by line rectification. Journal of Mathematical Imaging and Vision 35(1), 36-50 (2009).
2. Ben Ellefi, M., Drap, P., Papini, O., Merad, D., Royer, J.-P., Nawaf, M.-M., Nocerino, E., Hyttinen, K., Sourisseau, J.-C., Gambin, T., Castro, F.: Ontology-based web tools for retrieving photogrammetric cultural heritage models. In: Underwater 3D Recording & Modeling. ISPRS, Limassol, Cyprus (2019).
3. Ben Ellefi, M., Nawaf, M., Sourisseau, J.-C., Gambin, T., Castro, F., Drap, P.: Clustering over the Cultural Heritage Linked Open Dataset: Xlendi Shipwreck. In: Proceedings of the Third International Workshop on Semantic Web for Cultural Heritage co-located with the 15th Extended Semantic Web Conference, SW4CH@ESWC 2018, Vol. 8, pp. 1-10. Heraklion, Crete, Greece (2018).
4. Duane, C.B.: Close-range camera calibration. Photogrammetric Engineering 37(8), 855-866 (1971).
5. Drap, P., Lefevre, J.: An exact formula for calculating inverse radial lens distortions. Sensors 16(6) (2016).
6. Drap, P., Merad, D., Hijazi, B., Gaoua, L., Nawaf, M.-M., Saccone, M., Chemisky, B., Seinturier, J., Sourisseau, J.-C., Gambin, T., Castro, F.: Underwater photogrammetry and object modeling: A case study of Xlendi wreck in Malta. Sensors 15(12), 30351-30384 (2015).
7. Flotyński, J., Walczak, K.: Ontology-based representation and modelling of synthetic 3D content: A state-of-the-art review. Computer Graphics Forum 36, 329-353 (2017).
8. Kraus, K., Jansa, J., Kager, H.: Photogrammetry, vol. 1 & 2. Ferd. Dümmlers Verlag, Bonn (1997).
10 https://www.lsis.org/groplan/svg/xlendi/xlendi.html