As taught at UNIMAS, July 2019. Based on a three-day summer school by Knud Hinnerk Moeller and Victor de Boer. Includes hands-on exercises using SWI-Prolog ClioPatria.
This document provides a summary of linked data principles and examples. It discusses how linked data can help computers understand web data by structuring it using common standards like URIs, HTTP, RDF, and SPARQL. The key principles of linked data are explained: using URIs to identify things, providing useful information at those URIs, and linking to other URIs so that more things can be discovered. Examples of linked data applications in domains like academia, libraries, government, and media are also provided. The document concludes by discussing how linked data works technically using structured data, graphs, and W3C web standards.
Slides for my keynote presentation "Linked Data for Digital History", presented at Semantic Web for Scientific History (SW4SH), co-located with ESWC 2015.
This document discusses how user communities can help create open access to information through collaborative projects like Wikipedia. It provides examples of existing projects that allow users to collaboratively index books (Open Library Project), create metadata for people in Wikipedia (Wikipedia-Persondata-Tool), and transcribe source materials (Wikisource). The document advocates for open licenses and keeping public domain content freely accessible online.
What is #LODLAM?! Understanding linked open data in libraries, archives [and ... – Alison Hitchens
This document provides an overview of linked open data (LOD) and the Resource Description Framework (RDF) and their applications in libraries, archives, and museums (LODLAM). It begins by defining linked data and how it extends standard web technologies to share structured data between computers. The document then discusses using structured, machine-readable data to describe resources like people, and how to structure this data using RDF. It provides examples of libraries and archives sharing controlled vocabularies, unique resources and holdings data as linked open data. The document concludes by reviewing current LODLAM projects and the potential for libraries and archives to both contribute and consume linked open data.
Providing open data is of interest for its societal and commercial value, for transparency, and because more people can do fun things with data. There is a growing number of initiatives to provide open data, from, for example, the UK government and the World Bank. However, much of this data is provided in formats such as Excel files, or even PDF files. This raises questions such as:
- How best to provide access to data so it can be most easily reused?
- How to enable the discovery of relevant data within the multitude of available data sets?
- How to enable applications to integrate data from large numbers of formerly unknown data sources?
One way to address these issues is to use the design principles of linked data (http://www.w3.org/DesignIssues/LinkedData.html), which suggest best practices for how to publish and connect structured data on the Web. This presentation gives an overview of linked data technologies (such as RDF and SPARQL), examples of how they can be used, as well as some starting points for people who want to provide and use linked data.
The presentation was given on August 8 at the Hacknight event (http://hacknight.se/) of Forskningsavdelningen (http://forskningsavd.se/) (Swedish: “Research Department”), a hackerspace in Malmö.
Victor de Boer discusses how linked data can be used for digital humanities research. He explains that linked data allows researchers to integrate heterogeneous datasets while retaining their original data models, enabling new types of analysis. Examples are given of projects that have applied linked data principles to cultural heritage data from museums, historical texts, biographical data, and maritime records. Linked data facilitates exploring connections between these datasets and reusing background knowledge from other sources.
Vocabularies as Linked Data - OUDCE March 2014 – Keith.May
Presentation given as part of an OUDCE course in Oxford, 04-03-2014, on "Digital Data and Archaeology: Management, Preservation and Publishing".
Acknowledgements to Ceri Binding @Ceribin for many of the slides.
The document discusses linked open data and its possibilities for libraries. It provides an overview of linked data, explaining how it uses standard web technologies to share structured data between applications. Examples are given of library data like catalog records and authority files being exposed as linked data. Current projects involving libraries consuming and sharing linked data are also summarized, though it is noted the field is still developing.
The document discusses using linked open data and linked data principles for libraries. It covers key concepts like URIs, RDF triples, ontologies and vocabularies. It then outlines options for libraries to both consume and publish linked data, such as enriching existing catalog data by linking to external sources, creating new information aggregates, and publishing library holdings and metadata as linked open data. Challenges include a lack of common identifiers, FRBRization of existing data, and the need for content curation and new technical systems to fully realize the benefits of linked open data for libraries.
Vocabularies as Linked Data: SENESCHAL & HeritageData.org – Keith.May
This document discusses the SENESCHAL project which converted archaeological controlled vocabularies into Linked Open Data using the SKOS standard. It enabled vocabulary providers like Historic England and RCAHMS to make their thesauri available as Linked Data and facilitated concept searching and browsing. The document outlines how the vocabularies were developed and aligned with legacy data to create unique, persistent concept identifiers and relationships between concepts, datasets, and countries to improve semantic search and data sharing.
Brief overview of linked data and RDF followed by use in libraries and archives. Originally delivered at OLITA Digital Odyssey 2014. Revised for the OLA Superconference 2015
This document summarizes recent approaches to web data management including Fusion Tables, XML, and Linked Open Data (LOD). It discusses properties of web data like lack of schema, volatility, and scale. LOD uses RDF, global identifiers (URIs), and data links to query and integrate data from multiple sources while maintaining source autonomy. The LOD cloud has grown rapidly, currently consisting of over 3000 datasets with more than 84 billion triples.
Semantic Web and Linked Data for cultural heritage materials - Approaches in ... – Antoine Isaac
The document discusses using semantic web technologies like linked data and the Europeana Data Model (EDM) to improve access to cultural heritage materials by enabling semantic search and exploiting relationships between concepts, objects, and vocabularies. EDM aims to preserve original metadata while allowing for interoperability by using standards like Dublin Core, SKOS, and OAI ORE. Linked data approaches can ease getting and publishing data across cultural heritage datasets by direct access to RDF descriptions via URIs.
Are New Digital Literacies Skills Needed – rscd2018 – SusanMRob
Remarrying research and collection services around access to corpora and text mining: are new technical literacy skills needed? Presented by Ingrid Mason (Deployment Strategist, AARNet) at the Research Support Community Day 2018.
Chaos&Order: Using visualization as a means to explore large heritage collec... – TimelessFuture
*note: download original powerpoint to view animations*. Presentation at 4th Int. Alexandria Workshop (19./20. October 2017) - Foundations for Temporal Retrieval, Exploration and Analytics in Web Archives.
Authority Files and Web 2.0.
Presentation during the EDL Workshop "Extending the multilingual capacity of The European Library in the EDL project" in Stockholm 23.11.07
Widening the limits of cognitive reception with online digital library graph ... – Marton Nemeth
This document discusses using semantic web technologies like linked data and RDF to improve information retrieval from digital library collections. It provides examples of semantic implementations at libraries like Europeana, the French National Library, and the German National Library. Key points covered include linking diverse data sources to facilitate discovery, creating semantic search interfaces, and addressing challenges of referencing vocabularies and evaluating semantic datasets and user experiences. The research plan proposes comparing new semantic OPACs to traditional interfaces and developing a methodology for evaluating the user experience of semantic library systems.
This document contains slides from a presentation by Pedro Szekely on RDF and related Semantic Web topics. The slides cover Unicode, URLs, URIs, namespaces, XML, XML Schema, RDF graphs, RDF syntaxes including XML and Turtle formats, and comparisons between XML and RDF. Key topics include using URIs to identify resources on the web, representing information as subject-predicate-object triples in RDF graphs, combining vocabularies using namespaces, and leveraging XML tools while making RDF more human-readable.
Connections that work: Linked Open Data demystified – Jakob .
Keynote given 2014-10-22 at the National Library of Finland at Kirjastoverkkopäivät 2014 (https://www.kiwi.fi/pages/viewpage.action?pageId=16767828) #kivepa2014
This tutorial explains the Data Web vision, some preliminary standards and technologies, as well as some tools and technological building blocks developed by the AKSW research group at Universität Leipzig.
This presentation was provided by Ashley Clark, Northeastern University, during a NISO Virtual Conference on the topic of data curation, held on Wednesday, August 31, 2016
This document discusses the potential benefits of using linked data in libraries. It explains that linked data connects related data on the web using URIs and RDF triples. This allows data to be integrated, extended and reused. The document provides examples of how linked data could unlock library data, connect different library systems, and allow complex relationships to be modeled. Overall, it argues that linked data can help libraries share and integrate their data in new ways.
DBpedia Archive using Memento, Triple Pattern Fragments, and HDT – Herbert Van de Sompel
DBpedia is the Linked Data version of Wikipedia. Starting in 2007, several DBpedia dumps have been made available for download. In 2010, the Research Library at the Los Alamos National Laboratory used these dumps to deploy a Memento-compliant DBpedia Archive, in order to demonstrate the applicability and appeal of accessing temporal versions of Linked Data sets using the Memento “Time Travel for the Web” protocol. The archive supported datetime negotiation to access various temporal versions of RDF descriptions of DBpedia subject URIs.
In a recent collaboration with the iMinds Group of Ghent University, the DBpedia Archive received a major overhaul. The initial MongoDB storage approach, which was unable to handle increasingly large DBpedia dumps, was replaced by HDT, the Binary RDF Representation for Publication and Exchange. And, in addition to the existing subject URI access point, Triple Pattern Fragments access, as proposed by the Linked Data Fragments project, was added. This allows datetime negotiation for URIs that identify RDF triples that match subject/predicate/object patterns. To add this powerful capability, native Memento support was added to the Linked Data Fragments Server of Ghent University.
In this talk, we will include a brief refresher of Memento, and will cover Linked Data Fragments, Triple Pattern Fragments, and HDT in more detail. We will share lessons learned from this effort and demo the new DBpedia Archive, which, at this point, holds over 5 billion RDF triples.
Linked Data (1st Linked Data Meetup Malmö) – Anja Jentzsch
This document discusses Linked Data and outlines its key principles and benefits. It describes how Linked Data extends the traditional web by creating a single global data space using RDF to publish structured data on the web and by setting links between data items from different sources. The document outlines the growth of Linked Data on the web, with over 31 billion triples from 295 datasets as of 2011. It provides examples of large Linked Data sources like DBpedia and discusses best practices for publishing, consuming, and working with Linked Data.
This document provides an overview of linked data and the semantic web. It discusses moving from a web of documents to a web of data by making data on the web more structured and interconnected. The key aspects covered include using URIs to identify things, providing structured data about those things via standards like RDF, and including links to other related data to improve discovery. The document also explains some of the core technologies involved like RDF, RDF syntaxes, vocabularies for describing data, and publishing and accessing linked data on the web.
The document discusses the Semantic Web and Resource Description Framework (RDF). It defines the Semantic Web as making web data machine-understandable by describing web resources with metadata. RDF uses triples to describe resources, properties, and relationships. RDF data can be visualized as a graph and serialized in formats like RDF/XML. RDF Schema (RDFS) provides a basic vocabulary for defining classes, properties, and hierarchies to enable reasoning about RDF data.
The document discusses several options for publishing data on the Semantic Web. It describes Linked Data as the preferred approach, which involves using URIs to identify things and including links between related data to improve discovery. It also outlines publishing metadata in HTML documents using standards like RDFa and Microdata, as well as exposing SPARQL endpoints and data feeds.
This document provides an overview of the Resource Description Framework (RDF). It begins with background information on RDF including URIs, URLs, IRIs and QNames. It then describes the RDF data model, noting that RDF is a schema-less data model featuring unambiguous identifiers and named relations between pairs of resources. It also explains that RDF graphs are sets of triples consisting of a subject, predicate and object. The document also covers RDF syntax using Turtle and literals, as well as modeling with RDF. It concludes with a brief overview of common RDF tools including Jena.
The document provides an introduction to Prof. Dr. Sören Auer and his background in knowledge graphs. It discusses his current role as a professor and director focusing on organizing research data using knowledge graphs. It also briefly outlines some of his past roles and major scientific contributions in the areas of technology platforms, funding acquisition, and strategic projects related to knowledge graphs.
This document provides an introduction to the RDF data model. It describes RDF as a data model that represents data as subject-predicate-object triples that can be used to describe resources. These triples form a directed graph. The document provides examples of RDF triples and graphs, and compares the RDF data model to relational and XML data models. It also describes common RDF formats like RDF/XML, Turtle, N-Triples, and how RDF graphs from different sources can be merged.
As part of a five-session discussion series, this informal learning group discussion focused on an overview of the Semantic Web and an introduction to Linked Data principles. Additionally, participants received an overview of the foundations of triple statements. The instructor then led a hands-on triple statement activity.
This document discusses the Semantic Web and Linked Data. It provides an overview of key Semantic Web technologies like RDF, URIs, and SPARQL. It also describes several popular Linked Data datasets including DBpedia, Freebase, Geonames, and government open data. Finally, it discusses the Yahoo BOSS search API and WebScope data for building search applications.
1. The document discusses the Semantic Web and how publishing structured data using technologies like RDF and SPARQL allows machines to understand information and make connections between different data sources.
2. It describes the Archipel research project which uses Semantic Web technologies like RDF and SPARQL Views to interconnect distributed cultural heritage data and provide new ways to access and combine the data.
3. Participating in the Semantic Web can open up new business opportunities by enabling novel ways of combining and sharing data between organizations.
This document provides an overview of a tutorial on Linked Data for the Humanities. The tutorial covers Linked Data basics such as its history and building blocks, including URIs, HTTP, RDF, and SPARQL. It also discusses producing and consuming Linked Data, as well as hybrid methods. The tutorial aims to help participants understand URI resolution, experience graph traversal, and grasp content negotiation through hands-on exercises using tools like cURL.
This document provides an overview and discussion of graph databases, property graphs, semantic graphs using RDF, and the relationships between them. It discusses different file formats, query languages, APIs, and database models that can be used with each. While property graphs and semantic graphs have similarities in representing nodes, edges, and properties, the main differences are that property graphs do not natively support metadata on relationships or semantics, whereas semantic graphs in RDF do. The document considers when each may be suitable and how they are used in practice.
RDF is a general method to decompose knowledge into small pieces, with some rules about the semantics or meaning of those pieces. The point is to have a method so simple that it can express any fact, and yet so structured that computer applications can do useful things with knowledge expressed in RDF.
This document provides an overview of the Semantic Web, RDF, SPARQL, and triplestores. It discusses how RDF structures and links data using subject-predicate-object triples. SPARQL is introduced as a standard query language for retrieving and manipulating data stored in RDF format. Popular triplestore implementations like Apache Jena and applications of linked data like DBPedia are also summarized.
The document introduces the Semantic Web and the key technologies that enable it, including RDF, RDF Schema, OWL, and SPARQL. RDF allows for describing resources and relationships between them using triples. RDF Schema extends RDF with a vocabulary for describing properties and classes of resources. OWL builds on RDF and RDF Schema to provide additional expressive power for defining complex ontologies. SPARQL is a query language for retrieving and manipulating data stored in RDF format. These technologies work together to transform the existing web of documents into a web of linked data that can be processed automatically by machines.
First Steps in Semantic Data Modelling and Search & Analytics in the Cloud – Ontotext
This webinar will break the roadblocks that prevent many from reaping the benefits of heavyweight Semantic Technology in small scale projects. We will show you how to build Semantic Search & Analytics proof of concepts by using managed services in the Cloud.
The document discusses knowledge representation on the Semantic Web. It introduces the need to formally represent information on the web using languages that allow computers to process and reason with the information. It describes the approach of using ontology languages like RDF and OWL to develop domain models and conceptualizations that provide shared interpretations of information across sources. It explains some of the basic constructs in ontology-based knowledge representation using these languages, including classes, properties, subclasses and restrictions.
APIs and the Semantic Web: publishing information instead of data – Dimitri van Hees
Learn the difference between data and information and get Linked Data to work by adding a sixth star – using APIs – to Sir Tim Berners-Lee’s 5-Star model for publishing information on the Semantic Web. This session includes an introduction to Linked Open Data, Linked Closed Data, JSON-LD and the 5-Star model and provides a step-by-step walk-through of a successful technical implementation with your API.
The document discusses linking open city data on the semantic web by first standardizing the data formats, then defining shared semantics between cities, and finally expressing the data as Resource Description Framework (RDF) linked according to a basic ontology to enable querying with SPARQL. The goal is to make city data on the web more discoverable and linked to related open data from other cities and domains through consistent unique identifiers, data types, and semantic relationships defined in the ontologies.
The Benefits of Linking Metadata for Internal and External users of an Audiov... – Victor de Boer
Slides for the MTSR2018 presentation for the paper The Benefits of Linking Metadata for Internal and
External users of an Audiovisual Archive by Victor de Boer, Tim de Bruyn, John Brooks and Jesse de Vos
Like other heritage institutions, audiovisual archives adopt structured vocabularies for their metadata management. With Semantic Web and Linked Data now becoming more and more stable and commonplace technologies, organizations are looking now at linking these vocabularies to external sources, for example those of Wikidata, DBPedia or GeoNames. However, the benefits of such endeavors to the organizations are generally underexplored. In this paper, we present an in-depth case study into the benefits of linking the “Common Thesaurus for Audiovisual Archives” (or GTAA) and the general-purpose dataset Wikidata. We do this by identifying various use cases for user groups that are both internal as well as external to the organization. We describe the use cases and various proofs-of-concept prototypes that address these use cases.
UX Challenges of Information Organisation: Assessment of Language Impairment ... – Victor de Boer
Presentation at #ICTOPEN2018 for the ABC-KB project "UX Challenges of Information Organisation: Assessment of Language Impairment in Bilingual Children" by Dana Hakman, Cerise Muller, Victor de Boer, Petra Bos
Interactive Dance Choreography Assistance presentation for ACE entertainment ... – Victor de Boer
The document describes research into developing an automated tool to assist choreographers in their creative process. A survey found that some choreographers are interested in such a tool. A proof-of-concept mobile app was created with different dance styles and rule-based strategies for generating variations. An evaluation found that variations based on an ontology were better than random variations. Presentation through 3D animation was preferred over text, 2D animation or audio. Future work includes developing better dance movement representations and reasoning capabilities.
Fahad Ali's slides for Machine-to-machine communication in rural conditions ... – Victor de Boer
This document describes KasadakaNet, a machine-to-machine communication system for sharing data in rural areas using a "pass-by" approach. It uses a combination of sneakernet transport of devices called Kasadakas along with local WiFi networks created by a device called the Wifi-donkey. The system was evaluated through experiments measuring factors like success rates and request times under different conditions like travel speed and query sizes. The results showed the system can enable knowledge sharing in rural scenarios with low-cost hardware and open source components, providing a viable approach for machine-to-machine capabilities in information and communication technologies for development.
Linking African Traditional Medicine Knowledge - by Gossa Lo – Victor de Boer
Slides for Gossa Lo's presentation on Linking African Traditional Medicine Knowledge (Lo, de Boer, Schlobach) at the SWAT4LS conference.
Abstract: African Traditional Medicine (ATM) is widely used in Africa as the first line of treatment thanks to its accessibility and affordability. However, the lack of formalization of this knowledge can lead to safety issues and malpractice. This paper investigates a possible contribution of the Semantic Web in realizing the formalization and integration of ATM with data on conventional medicine. As a proof of concept we convert various ATM datasets and link them to conventional medical data. This results in a Linked ATM knowledge graph. We finally give some examples of interesting SPARQL queries and insightful results.
Enriching Media Collections for Event-based Exploration – Victor de Boer
Slides for the MTSR2017 presentation on event enrichment in DIVE+ in the context of CLARIAH.
By: Victor de Boer, Liliana Melgar, Oana Inel, Carlos Martinez Ortiz, Lora Aroyo, and Johan Oomen
Abstract: Scholars currently have access to large heterogeneous media collections on the Web, which they use as sources for their research. Exploration of such collections is an important part in their research, where
scholars make sense of these heterogeneous datasets. Knowledge graphs which relate media objects, people and places with historical events can provide a valuable structure for more meaningful and serendipitous browsing. Based on extensive requirements analysis done with historians and media scholars, we present a methodology to publish, represent, enrich, and link heritage collections so that they can be explored by domain expert users. We present four methods to derive events from media object descriptions. We also present a case study where four datasets with mixed media types are made accessible to scholars and describe the building blocks for event-based proto-narratives in the knowledge graph.
The document discusses two techniques explored by the Netherlands Institute for Sound and Vision to enrich archival audiovisual material: 1) Developing a text-to-speech engine using the voice recordings of a famous Dutch news anchor to generate audio for text, and 2) Using deep learning to colorize old black-and-white video footage. It provides details on developing a limited-domain text-to-speech system and experimenting with colorizing video frames from newsreels. The techniques showed potential for engaging new audiences with the archival collections, though challenges remained in speech recognition, video quality, and colorization accuracy.
User-centered Data Science for Digital Humanities – Victor de Boer
User-centered Data Science for Digital Humanities: DIVE, Dutch Ships and Sailors and ArchimediaL as presented during the "Network Institute meets CLUE+" event.
Continuous enrichment and linking of heterogeneous collections brings new possibilities for access and analysis, using automatic methods, always with human(s) in the loop.
Linked Data for Audiovisual Archives (Guest lecture at NISV) – Victor de Boer
Guest lecture for the UvA Master programme "Preservation and Presentation of the Moving Image", about "Linked Data for Audiovisual Archives". The guest lecture was part of educational activities at the Netherlands Institute for Sound and Vision.
Semantic Technology for Development: Semantic Web without the Web? – Victor de Boer
Slides for my keynote address for the joint session of the SALAD workshop and DBPedia day at SEMANTiCS2017. The talk addresses the need for research into the opportunities and challenges for Linked Data in the context of ICT for Development. It shows current work on Kasadaka, Semantic Web in an SMS and sneakernets http://salad2017.linked.services/ http://semantics.cc
1) DIVE+ is a project that aims to provide interactive exploration and discovery of integrated online multimedia collections using linked open data to connect metadata from various cultural heritage collections.
2) It extracts events, actors, places and other entities from collection metadata using both original thesauri and automated techniques like named entity recognition. These are linked to media objects to support event-centric browsing.
3) Over 350,000 media objects from four collections have been enriched with over 200,000 events and other entities through these techniques. The data is available through a SPARQL endpoint for deep exploration of interconnected entities in the collections.
A few slides to introduce the cultuurlink tool developed by Spinque for Netherlands Institute for Sound and Vision. These were presented at the second CLARIAH LOD workshop.
Intro to Linked Data, Dutch Ships and Sailors and SPARQL hands-on – Victor de Boer
The document discusses Linked Data and SPARQL concepts including linking heterogeneous data sources without forcing a single data model. It describes using HTTP URIs and RDF to identify and describe resources on the web according to the four rules of Linked Data. The document provides an example of linking Dutch ship and sailor data from different sources and querying it using SPARQL. It emphasizes that Linked Data allows for flexible integration and reuse of existing data sources.
VU ICT4D symposium 2017: Francis Dittoh, Mr. Meteo – Victor de Boer
Mr. Meteo is a proposed system to provide weather information to rural farmers in a developing country via multiple channels. It would adapt to the local context based on input from farmers, local institutions, and NGOs. The system would source weather data from national meteorological services and local weather stations, and deliver personalized forecasts to farmers through voice calls, SMS, radio, and the internet using appropriate technologies. The goal is to develop a sustainable and user-centered solution to help farmers deal with issues like variable weather conditions.
The document discusses killer applications, which are apps that are so useful or desirable that they drive adoption of the larger platform. Examples given include Google Maps, news/weather SMS services, email/messaging apps, and banking apps. The document also discusses developing mobile apps for farmers and NGOs in Africa to improve their work and lives, with lessons learned around understanding local culture, organizations, technology adoption in rural areas, available devices, and infrastructure constraints.
VU ICT4D symposium 2017: Gayo Diallo, Towards a Digital African Traditional Hea... – Victor de Boer
This document discusses using semantic web technologies to help improve access to and use of African traditional medicine (ATM). It proposes developing an integrated ICT approach that would allow various stakeholders to safely and effectively access ATM knowledge and practices. This would involve formalizing ATM knowledge into a graph stored as linked open data and delivered through voice-based queries. Services would address end users' needs while addressing issues like ethics, intellectual property, and sustainability. The goal is to leverage ATM through technology solutions that facilitate inclusion in local languages.
VU ICT4D symposium 2017: Wendelien Tuyp, Boosting African agriculture – Victor de Boer
The document discusses two perspectives on boosting African agriculture: the industrial agribusiness model promoted by G8 countries and the New Alliance for Food Security and Nutrition initiative, and the smallholder farming model. The industrial model focuses on large-scale monocultures, high yields, and cash crops for global markets using mechanization and external inputs. However, this approach raises questions about who benefits and can displace farmers. In contrast, smallholder farms are more resilient, use crop diversity for local markets, and are key to global food security despite being more labor intensive and lower yielding. Experts argue for supporting the smallholder model through advisory services and helping farmers innovate sustainably.
Rudy Marsman's thesis presentation slides: Speech synthesis based on a limite... – Victor de Boer
This document discusses using a limited speech corpus of recordings from Dutch news anchor Philip Bloemendal to develop a text-to-speech (TTS) engine. It evaluates how much of the Dutch language can be synthesized using the corpus and methods to improve it, like finding synonyms and decompounding compounds. It also explores using neural networks to colorize old black-and-white video footage from the archive to make it more engaging for viewers. While the TTS engine works well for common words, full sentences have lower coverage, and colorization introduces artifacts but can increase attention to the archive's collection.
One day workshop Linked Data and Semantic Web
1. Linked Data and Semantic Web Workshop
UNIMAS, Sarawak, Malaysia
1-7-2019
Victor de Boer
With slides from Knud Hinnerk Moeller
2. Today’s program
Principles of Linked Data
Building Blocks of Linked Data
Hands-on: graph thinking
Writing triples in Turtle
Hands-on: Turtle
Triple stores
Hands-on: exploring triples
Querying Linked Data
Hands-on: SPARQL
I am super-flexible, stop me at any time!
7. The Internet (of machines)
Source: Tim Berners-Lee, “Levels of Abstraction”,
http://www.w3.org/DesignIssues/Abstractions.html
8. The World Wide Web (of Documents)
Source: Tim Berners-Lee, “Levels of Abstraction”,
http://www.w3.org/DesignIssues/Abstractions.html
URIs, HTTP, and HTML
9. The World Wide Web (of Documents)
Source: Tim Berners-Lee, “Levels of Abstraction”,
http://www.w3.org/DesignIssues/Abstractions.html
10. The World Wide Web (of Data)
Source: Tim Berners-Lee, “Levels of Abstraction”,
http://www.w3.org/DesignIssues/Abstractions.html
the Semantic Web
Linked Data
the Web of Data
12. The World Wide Web (of Data)
Source: Tim Berners-Lee, “Levels of Abstraction”,
http://www.w3.org/DesignIssues/Abstractions.html
• the Semantic Web
• Linked Data
• the Web of Data
13. Examples of Linked Data
• Academia, Research
• Community
• Libraries, Museums, Cultural Heritage
• Government and public institutions
(Open Data)
• Media
• Business
17. Linked Data
Machine-readable format
Standardized
Flexibility to connect heterogeneous data
Link what can be linked
re-use and re-usability
(Diagram: linking OBJECT, EVENT, PLACE, TIME, PERSON, CONCEPT, and PROVENANCE.)
Open Data is about licenses, to allow reuse.
Linked Data is about technology for interoperability.
Linked Open Data?
www.w3.org/designissues/linkeddata.html
21. How does all this work?
• Structured data not documents
• Graph (networked) data!
• W3C Web standards stack
– URIs, HTTP, RDF, RDFa, RDFS, OWL, SKOS, SPARQL,
etc.
22. Resource Description Framework
W3C standard
RDF extends the linking structure of the Web to use URIs to name
the relationship between things as well as the two ends of the link
(this is usually referred to as a “triple”). Using this simple model, it
allows structured and semi-structured data to be mixed, exposed,
and shared across different applications.
https://www.w3.org/RDF/
23. Rules of Linked Data
1. Use HTTP IRIs (Internationalized Resource Identifiers)
as names for things
2. When someone looks up a URI, provide useful
information, using the standards (RDF)
3. Include links to other URIs, so that they can discover
more things.
From http://www.w3.org/DesignIssues/LinkedData.html
24. Use HTTP IRIs for Things
Internationalised Resource Identifier (IRI)
is a string of characters used to identify a resource
http://rijksmuseum.nl/data/painting001
I can go there (dereference) and then I
get information about it
• HTML page for humans
• RDF data for machines
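Dereferencing can be tried out with a plain HTTP client: request the same URI twice, once asking for RDF and once for HTML, and the server answers with the appropriate representation. A minimal sketch in Python, assuming the requests package is installed and using DBpedia's URI for Amsterdam as an example of a resolvable Linked Data URI:

import requests

uri = "http://dbpedia.org/resource/Amsterdam"   # example Linked Data URI

# Ask for RDF (Turtle): the representation meant for machines
rdf = requests.get(uri, headers={"Accept": "text/turtle"})
print(rdf.headers.get("Content-Type"))
print(rdf.text[:300])          # the first few triples about Amsterdam

# Ask for HTML: the representation meant for humans
html = requests.get(uri, headers={"Accept": "text/html"})
print(html.headers.get("Content-Type"))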
25. Semantic Web standard for writing down data, information
(Subject, Relation, Object)
<Painting001, has_location, Amsterdam>
Resource Description Framework (RDF)
(Diagram: Painting001 --has_location--> Amsterdam)
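The same statement can also be built in code. A small sketch using Python's rdflib (an assumption of this example; the workshop hands-on itself uses SWI-Prolog ClioPatria), with a placeholder namespace standing in for the real Rijksmuseum URIs:

from rdflib import Graph, Namespace, URIRef

EX = Namespace("http://example.org/rijks/")      # placeholder namespace

g = Graph()
# One triple: subject, predicate (relation), object
g.add((EX.Painting001, EX.has_location, URIRef("http://example.org/places/Amsterdam")))

for s, p, o in g:
    print(s, p, o)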
26. Resource Description Framework (RDF)
Triples form Graphs
(Diagram: a graph linking rijks:Painting001 to geo:Haarlem, rijks:Frans_Hals, the identifier 147590, the coordinates 52.38084, 4.63683, geo:Noord-Holland, geo:Netherlands, and rijks:Painting002.)
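Because triples share nodes, adding statements grows one connected graph rather than separate records. A hedged sketch of the idea in rdflib; the property names (creator, location, partOf, coordinates) are invented for illustration and are not the actual Rijksmuseum vocabulary:

from rdflib import Graph, Namespace, Literal

RIJKS = Namespace("http://example.org/rijks/")
GEO   = Namespace("http://example.org/geo/")
VOCAB = Namespace("http://example.org/vocab/")

g = Graph()
g.add((RIJKS.Painting001, VOCAB.creator, RIJKS.Frans_Hals))
g.add((RIJKS.Painting001, VOCAB.location, GEO.Haarlem))
g.add((RIJKS.Painting002, VOCAB.location, GEO.Haarlem))           # shares the Haarlem node
g.add((GEO.Haarlem, VOCAB.partOf, GEO.Noord_Holland))
g.add((GEO.Noord_Holland, VOCAB.partOf, GEO.Netherlands))
g.add((GEO.Haarlem, VOCAB.coordinates, Literal("52.38084, 4.63683")))

print(len(g), "triples forming one connected graph")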
42. Hands-on Session 1
• Introduce yourselves to each other!
• Draw a social graph of your group
• Represent each member of the group
• Give everyone a name
• You know each other now, so you can connect to
each other in the graph
• Maybe add other data about yourselves:
– Hometown
– University
– Things you like (e.g., music, films, …)
44. Building Blocks of Linked Data
RDF, Triples,
N-Triples, Turtle,
Reusing Vocabularies
(Diagram: a graph with nodes SJTU, Shanghai, Beijing, and People’s Republic of China; edges labelled name, located in, population, and capital; literal values "Shanghai Jiao Tong University", "上海", 23,019,148, and 20,693,000.)
SJTU name "Shanghai Jiao Tong University"
SJTU located in Shanghai
Shanghai name "上海"
Shanghai population "23,019,148"
Shanghai located in People’s Republic of China
People’s Republic of China capital Beijing
Beijing located in People’s Republic of China
Beijing population "20,693,000"
• Graph
• Triple
47. SJTU located in Shanghai
Shanghai name "上海"
Shanghai population "23,019,148"
Beijing population "20,693,000"
RDF (Resource Description Framework)
Each line above is a triple: Subject Predicate Object
50. (Diagram: the same graph, now with the resources written as identifiers sjtu, shanghai, beijing and peoples_republic_of_china, keeping the literals "Shanghai Jiao Tong University", "上海", 23,019,148 and 20,693,000)
51. <sjtu> <located_in> <shanghai> .
<shanghai> <name> "\u4E0A\u6D77" .
<shanghai> <population> "23019148" .
<beijing> <population> "20693000" .
Resources – URIs: everything enclosed in < > above is a URI.
52. Use HTTP-URIs!
• sjtu, name, located_in: valid URIs, but no
scheme, no host, just a path
• any URI is valid: ftp://files.nasa.gov, sjtu, urn:isbn:0451450523,
etc.
• but:
– RDF is a data model for the Web
– the Web is based on HTTP
– HTTP-URIs can be resolved, looked up
=> use HTTP-URIs: http://data.example.org/sjtu
55. Named Graphs
• divide RDF graph in a
dataset into several
subgraphs
• each subgraph labelled
with a URI
• useful for keeping track of
provenance, timestamps,
versioning, etc.
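As a sketch in the TriG syntax (mentioned under "Other Syntaxes" below), a dataset could keep the example data in one named graph and its provenance in another. The graph URIs, source and date here are hypothetical placeholders:
@prefix data:    <http://data.example.org/> .
@prefix vocab:   <http://voc.example.org/> .
@prefix dcterms: <http://purl.org/dc/terms/> .

<http://data.example.org/graphs/cities> {
    data:sjtu     vocab:located_in data:shanghai .
    data:shanghai vocab:population "23019148"^^<http://www.w3.org/2001/XMLSchema#int> .
}

<http://data.example.org/graphs/provenance> {
    <http://data.example.org/graphs/cities>
        dcterms:source   <http://www.example.org/some-dataset> ;
        dcterms:modified "2019-07-01"^^<http://www.w3.org/2001/XMLSchema#date> .
}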
56. RDF – Summary
• Graph data model for the Web
• Triples (or “statements”):
– <subject> <predicate> <object>
– (or <thing> <relationship> <thing>)
• Resources
– Things about which we want to make statements
– URIs (ideally HTTP URIs)
• Literals:
– Values like strings, numbers, dates, booleans, …
– Either language tag (zh, en, …) or XML Schema datatype
• Subjects and predicates are always resources
• Objects can be resources or literals
• Named Graphs (not standard): divide graph into subgraphs
57. RDF – Summary
• N-Triples Syntax:
– most basic RDF syntax; very verbose
– one triple per line
– line terminated with .
– resources (URIs) enclosed in < >
– literals enclosed in " "
– qualify literals with language tag: @zh
– or with datatype:
^^<http://www.w3.org/2001/XMLSchema#int>
http://www.w3.org/TR/rdf-testcases/#ntriples
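For instance, the triples from the earlier example look like this in N-Triples (the example.org URIs are, as before, just placeholders). Note that the classic N-Triples syntax linked above is ASCII-only, so non-ASCII characters such as 上海 are written with \u escapes, whereas Turtle (next slide) allows raw Unicode:
<http://data.example.org/sjtu> <http://voc.example.org/located_in> <http://data.example.org/shanghai> .
<http://data.example.org/shanghai> <http://voc.example.org/name> "\u4E0A\u6D77"@zh .
<http://data.example.org/shanghai> <http://voc.example.org/population> "23019148"^^<http://www.w3.org/2001/XMLSchema#int> .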
58. Turtle Syntax
@prefix data: <http://data.example.org/> .
@prefix vocab: <http://voc.example.org/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
data:shanghai
vocab:located_in data:peoples_republic_of_china ;
vocab:name "Shang-hai"@ga, "Shanghai"@en, "上海"@zh ;
vocab:population "23019148"^^xsd:int .
data:sjtu
vocab:located_in data:shanghai ;
vocab:name "Shanghai Jiao Tong University"@en .
(Callouts on the example above: define prefixes; abbreviate URIs as CURIEs; group triples with the same subject; group triples with the same subject and predicate; Unicode)
http://www.w3.org/TR/turtle/
59. CURIEs
• Compact URIs
• replace the URI up to the last element with a prefix
• define the prefix in Turtle:
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
• example: http://www.w3.org/2001/XMLSchema#date becomes xsd:date
(the prefix part is often called the "namespace")
http://www.w3.org/TR/curie/
60. Turtle: Group Triples
• use ; to group triples with same subject
data:shanghai vocab:located_in data:peoples_republic_of_china .
data:shanghai vocab:name "Shang-hai"@ga .
data:shanghai vocab:name "Shanghai"@en .
data:shanghai vocab:name "上海"@zh .
data:shanghai vocab:population "23019148"^^xsd:int .
data:shanghai
vocab:located_in data:peoples_republic_of_china ;
vocab:name "Shang-hai"@ga ;
vocab:name "Shanghai"@en ;
vocab:name "上海"@zh ;
vocab:population "23019148"^^xsd:int .
61. Turtle: Group Triples
• use , to group triples with the same
subject and predicate
data:shanghai vocab:name "Shang-hai"@ga .
data:shanghai vocab:name "Shanghai"@en .
data:shanghai vocab:name "上海"@zh .
data:shanghai vocab:name "Shang-hai"@ga, "Shanghai"@en, "上海"@zh .
62. Turtle - Summary
• human-readable, less verbose syntax
• Turtle is based on N-Triples
(N-Triples ⊆ Turtle)
• Unicode
• shorten URIs with CURIEs
• group triples with common elements
63. Other Syntaxes
• RDF/XML
– XML-based syntax
– still widely used, but less readable than Turtle
• RDFa
– RDF embedded in HTML, using element attributes
• JSON-LD
– JSON serialisation
• Named Graph support: TriG (Turtle), TriX
(XML), N-Quads (N-Triples)
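For comparison, a single triple about Shanghai could look like this in JSON-LD. This is a minimal sketch using the same placeholder URIs as the Turtle examples above:
{
  "@context": { "name": "http://voc.example.org/name" },
  "@id": "http://data.example.org/shanghai",
  "name": "上海"
}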
64. Reuse things: Vocabularies
(Ontologies, Schemata)
• URIs are globally unique, so we can use globally valid
terminology
• necessary for
– data integration
– Inferencing
• Vocabularies define
– properties to use as predicates
– classes to assign types to resources
• just like software libraries, vocabularies are data libraries
65. Vocabularies: Examples
• RDF and RDFS: basic definitions of objects,
properties, class-relations
• OWL: Description logics
• FOAF (Friend of a Friend): People,
Organisations, Social Networks
• schema.org (Google, Yahoo!, Bing, Yandex):
cross-domain, what search engines are
interested in (people, events, products,
locations)
• DBpedia (Wikipedia as LOD): cross-domain
• Dublin Core (Bibliographic): publications,
authors, media, etc.
• GoodRelations: business, products, etc.
https://lov.linkeddata.es/
66. FOAF Examples: Some Data
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix people: <http://data.example.org/people/> .
people:knud
a foaf:Person ;
foaf:name "Knud Möller"@de ;
foaf:knows people:victor .
people:victor
a foaf:Person ;
foaf:name "Victor de Boer"@nl ;
foaf:knows people:knud .
68. FOAF Examples: Properties
foaf:name
a rdf:Property, owl:DatatypeProperty ;
rdfs:label "name" ;
rdfs:comment "A name for some thing." ;
rdfs:domain owl:Thing ;
rdfs:range rdfs:Literal ;
rdfs:subPropertyOf rdfs:label ;
rdfs:isDefinedBy <http://xmlns.com/foaf/0.1/> .
69. Some Terms to Define Terms
• rdf:type (or just a in Turtle)
– special property to say what kind of a thing ("class") a
resource is
• rdfs:label, rdfs:comment
– documentation for humans
• rdfs:Class, owl:Class
– this term is a class
• rdf:Property, owl:DatatypeProperty,
owl:ObjectProperty
– this term is a property, special kind of property
70. Some Terms to Define Terms
• rdfs:subClassOf
– defining class
hierarchies
• rdfs:subPropertyOf
– defining property
hierarchies
• rdfs:isDefinedBy
– where is this term
defined, where can I get
the specification?
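Putting these terms together, a minimal vocabulary definition could look like the sketch below. The vocab: terms are hypothetical examples; only the rdf: and rdfs: namespaces are the standard ones:
@prefix rdf:   <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs:  <http://www.w3.org/2000/01/rdf-schema#> .
@prefix vocab: <http://voc.example.org/> .

vocab:University
    a rdfs:Class ;
    rdfs:subClassOf vocab:Organisation ;
    rdfs:label "University"@en ;
    rdfs:comment "An institution of higher education."@en ;
    rdfs:isDefinedBy <http://voc.example.org/> .

vocab:located_in
    a rdf:Property ;
    rdfs:label "located in"@en ;
    rdfs:isDefinedBy <http://voc.example.org/> .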
71. Reuse things: Datasets
• GeoNames: Geographical data
• DBpedia: RDF version of Wikipedia (also
available in Dutch)
• GTAA (Gemeenschappelijke Thesaurus
Audiovisuele Archieven, the common thesaurus for
Dutch audiovisual archives): persons, topics, AV terms
• VIAF: Persons
Example link: rijks:Painting001 <http://purl.org/dc/terms/spatial> <http://sws.geonames.org/2759794/> .
72. Write down your graph from hands on
session 1 in RDF Turtle
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix example: <http://purl.org/collections/example/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
example:knud
a foaf:Person ;
foaf:name "Knud Möller"^^xsd:string ;
foaf:knows example:victor ;
foaf:topic_interest example:linked_data .
example:victor
a foaf:Person ;
foaf:name "Victor de Boer"^^xsd:string ;
foaf:knows example:knud ;
foaf:topic_interest example:linked_data ;
foaf:knows example:truffel .
75. Rules of Linked Data
1. Use HTTP URIs so that these things can be referred to
and looked up ("dereferenced") by people and user
agents
2. Provide useful information (i.e., a structured
description - metadata) about the thing when its URI
is dereferenced.
3. Include links to other, related URIs in the exposed
data to improve discovery of other related
information on the Web.
www.w3.org/DesignIssues/LinkedData.html
76. So that means that
When I ask for a URI
dbpedia:Kuching
I want some data back, describing that resource
Let’s see: http://dbpedia.org/resource/Kuching
77. Content negotiation
Reply based on the preference expressed in the HTTP
request header (Accept:)
GET /resource/Amsterdam HTTP/1.1
Host: dbpedia.org
Accept: text/html;q=0.5, application/rdf+xml
I’m ok with HTML… …but I really prefer RDF
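The server then answers in the preferred format. DBpedia, for instance, typically replies with a 303 redirect to a document in that format; the exact URLs and headers below are illustrative rather than an exact transcript:
HTTP/1.1 303 See Other
Location: http://dbpedia.org/data/Amsterdam

GET /data/Amsterdam HTTP/1.1
Host: dbpedia.org
Accept: application/rdf+xml

HTTP/1.1 200 OK
Content-Type: application/rdf+xml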
78. text/html
<body onload="init();" about="dbpedia:Amsterdam">
<div id="header">
<div id="hd_l">
<h1 id="title">About: <a href="dbpedia:Amsterdam">Amsterdam</a></h1>
<div id="homelink">
<!--?vsp if (white_page = 0) http (txt); ?-->
</div>
<div class="page-resource-uri">
An Entity of Type : <a href="http://dbpedia.org/ontology/City">city</a>,
from Named Graph : <a href="http://dbpedia.org">http://dbpedia.org</a>,
within Data Space : <a href="http://dbpedia.org">dbpedia.org</a>
</div>
</div> <!-- hd_l -->
<div id="hd_r">
<a href="http://wiki.dbpedia.org/Imprint" title="About DBpedia">
<img src="/statics/dbpedia_logo.png" height="64" alt="About DBpedia"/>
</a>
</div> <!-- hd_r -->
</div> <!-- header -->
<div id="content">
<p>Amsterdam is de hoofdstad en grootste gemeente van Nederland. De stad, in het Amsterdams ook Mokum genoemd, ligt
in de provincie Noord-Holland, aan de monding van de Amstel en aan het IJ. De naam van de stad komt van de ligging bij een
in de 13e eeuw aangelegde dam in de Amstel. De plaats kreeg stadsrechten rond 1300 en groeide tot één van de grootste
handelssteden ter wereld in de Gouden Eeuw.</p>
84. Recipes for publishing Linked Data
1. Serving Linked Data as Static RDF/XML Files
2. Serving Linked Data as RDF Embedded in HTML Files
3. Serving RDF and HTML with Custom Server-Side Scripts
4. Serving Linked Data from Relational Databases
5. Serving Linked Data by Wrapping Existing Application or Web APIs
6. Serving Linked Data from RDF Triple Stores with dereferencing
Tom Heath, Chris Bizer http://linkeddatabook.com/
85. 1. Serving Linked Data as Static RDF/XML Files
• "Just" host a .rdf file on your server, containing
all of your RDF data
– Include the correct MIME type
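For example, on an Apache web server the mapping from file extension to MIME type could be configured roughly like this. This is a sketch; where the directives go (httpd.conf or .htaccess) depends on the server setup:
# map RDF file extensions to their media types (Apache mod_mime)
AddType application/rdf+xml .rdf
AddType text/turtle .ttl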
86. 2. Serving Linked Data as RDF
Embedded in HTML Files (RDFa)
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML+RDFa 1.0//EN"
"http://www.w3.org/MarkUp/DTD/xhtml-rdfa-1.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
xmlns:foaf="http://xmlns.com/foaf/0.1/"
xmlns:dc="http://purl.org/dc/elements/1.1/"
version="XHTML+RDFa 1.0" xml:lang="en">
<head>
<title>John's Home Page</title>
<base href="http://example.org/john-d/" />
<meta property="dc:creator" content="Jonathan Doe" />
<link rel="foaf:primaryTopic" href="http://example.org/john-d/#me" />
</head>
<body about="http://example.org/john-d/#me">
<h1>John's Home Page</h1>
<p>My name is <span property="foaf:nick">John D</span> and I like
<a href="http://www.neubauten.org/" rel="foaf:interest"
xml:lang="de">Einstürzende Neubauten</a>.
</p>
</body>
</html>
87. 3. Serving RDF and HTML with Custom
Server-Side Scripts
(Diagram: client request -> content negotiation script -> scripts serving
RDF or scripts serving HTML web pages, both backed by the same data)
• PHP (ARC)
• Any other server-side scripting
language
88. 4. Serving Linked Data from Relational Databases
Some software mapping
relational database
tables to triples
D2R, Triplify, Virtuoso
Tom Heath, Chris Bizer http://linkeddatabook.com/
94. CPACKs (ClioPatria packages)
• Amalgame for vocabulary alignment
• XMLRDF for converting XML to RDF
• Prepackaged data and metadata sets
– Provenance, SKOS, etc.
• UI packages
– For specific web applications
99. Hands-on: Install ClioPatria
1. Download SWI-Prolog (http://swi-prolog.org) -> development release
2. Install SWI-Prolog
3. Install Git
4. Install ClioPatria using Git -> https://cliopatria.swi-prolog.org/help/Download.html
1. In your target dir (C:/myStuff) do
2. > git clone https://github.com/ClioPatria/ClioPatria.git
5. Create a project
1. Create a project dir (can be wherever you want) (C:/myStuff/myTripleStore)
2. On Windows, in your ClioPatria dir: run setup.pl
3. Make a new project in your new dir (C:/myStuff/myTripleStore)
6. Open your project: C:/myStuff/myTripleStore/run.pl
7. Point your browser to http://localhost:3020
1. First time: set an admin password
100. • In the UI, use Resource -> Load local file, select
your file and choose a Named Graph URI
(default = filename)
• If there are no errors, view basic statistics in Places ->
Graphs
– Add more graphs if needed
• Alternatively, you can use rdf_load('yourfile.rdf'). at
the Prolog prompt
Hands-on: load your files
http://victordeboer.com/foaf.rdf
101. • Load remote vocabularies using LOD dereferencing (!)
– View your predicates in Places -> Graphs -> predicates
– The blue resources are known (they have triples about
them)
– Red resources are unknown resources
– Query the Linked Data Cloud for that resource. Look at the
results in Places -> Graphs
Hands-on: load remote files
104. Four main ways of accessing remote
Linked Data
1. Through HTTP request on the resource URI
2. Through SPARQL queries
3. Through Linked Data Fragments
4. Get a copy of a dataset
105. 1. Through HTTP request on the
resource URI
• HTTP GET on resource, parse, follow links
– Simple HTTP requests and RDF parsing
– Requires dereferencable URIs
– One request per resource: may require many
requests
• Local caching can be done
• Crawling
GET /resource/Amsterdam HTTP/1.1
Host: dbpedia.org
Accept: text/html;q=0.5, application/rdf+xml
I’m ok with HTML… …but I really prefer RDF
107. 2. Get a local copy of a dataset
• through SPARQL CONSTRUCT,
• crawling or
• direct file download
• Save in triple store
– or convert to something else
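As an illustration of the SPARQL CONSTRUCT route, the following sketch of a query (using DBpedia ontology terms; limits and property choices are illustrative) returns a small RDF graph that can be saved directly into a local triple store:
PREFIX dbo: <http://dbpedia.org/ontology/>

CONSTRUCT {
  ?city dbo:country ?country ;
        dbo:populationTotal ?population .
}
WHERE {
  ?city a dbo:City ;
        dbo:country ?country ;
        dbo:populationTotal ?population .
}
LIMIT 1000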
112. SPARQL – Querying the Web of Data
• query language for RDF graphs (i.e., linked
data)
• extract specific information out of a dataset
(or several datasets)
• "The SQL for the Web of Data"