This document summarizes a keynote presentation on linking media and data using Apache Marmotta. It discusses the motivation for semantic media asset management, using the Red Bull Content Pool as an example. It provides background on the Linked Media principles, Media Fragments and the Ontology for Media Resources, and the Linked Media Framework implementation. It then demonstrates use cases for the Red Bull Content Pool and the ConnectME project. It introduces the Linked Data Platform standard and the Apache Marmotta implementation. Finally, it discusses querying multimedia fragments using SPARQL-MM.
The importance of Linked Media to the Future Web (LinkedTV)
If the future Web is to fully leverage the scale and quality of online media, a Web-scale layer of structured, interlinked media annotations is needed, which we will call Linked Media, inspired by the Linked Data movement for making structured, interlinked descriptions of resources better available online. Mobile and tablet devices, as well as connected TVs, introduce novel application domains that will benefit from broad understanding and acceptance of Linked Media standards. In this talk, I will provide an overview of current practices and specification efforts in the domain of video and Web content integration, drawing from the LinkedTV and MediaMixer projects. From this, I will present a vision for a Linked Media layer on the future Web which can empower new media-centric applications in a world of ubiquitous online multimedia.
IEEE ISM 2008: A Distributed Platform for Multimedia Communities (Kalman Graffi)
Online community platforms and multimedia content delivery have been merging in recent years. Current platforms like Facebook and YouTube are client-server based, which results in high administration costs for the provider. In contrast, peer-to-peer systems offer scalability and low costs, but are limited in their functionality. In this paper we present a framework for peer-to-peer based multimedia online communities. We identified the key challenges for this new application of the peer-to-peer paradigm and built a plugin-based, easily extensible and multifunctional framework. Further, we identified distributed linked lists as a valuable data structure to implement user profiles, friend lists, groups, photo albums and more. Our framework aims at providing the functionality of common online community platforms combined with the multimedia delivery capabilities of modern peer-to-peer systems, e.g. direct multimedia delivery and access to a distributed multimedia pool.
The document provides an introduction to Dublin Core metadata, including:
1) Dublin Core is a set of metadata standards including 15 simple elements and over 50 qualified elements for describing resources.
2) Dublin Core metadata can be used to improve resource discovery and is recommended for metadata harvesting and the semantic web.
3) Custom mappings can be made from other metadata standards like LOM to the Dublin Core Abstract Model to make metadata interoperable.
The document discusses the BBC's use of linked data to connect content around topics relevant to audiences. It describes the key components of the BBC's linked data platform, including triplestores, the linked data platform, APIs, and shared libraries. It also touches on data quality, complexity challenges, and principles for being a good linked open data citizen such as clear ownership and focusing on audience needs.
The Eprints Application Profile: a FRBR approach to modelling repository meta... (Julie Allinson)
Julie Allinson, Pete Johnston and Andy Powell, UKOLN, University of Bath, present recent work on developing a Dublin Core Application Profile (DCAP) for describing "scholarly publications" (eprints). They will explain why the Dublin Core Abstract Model is well suited to creating descriptions based on entity-relational models such as the FRBR-based (Functional Requirements for Bibliographic Records) Eprints data model. The ePrints DCAP highlights the relational nature of the model underpinning Dublin Core and illustrates that the Dublin Core Abstract Model can support the representation of complex data describing multiple entities and their relationships.
Open Source project failure often stems from not setting clear objectives or having a shared vision from the start. That said, there are many success stories, including two well-known statistical examples: Demetra and the Eurostat SDMX tools (SDMX-RI). However, in all these examples there was at first a founding organisation/entity that created the right environment for a successful path into a new paradigm. In the context of my presentation, this is the Statistical Information System Collaboration Community (SIS-CC, http://siscc.oecd.org).
Presented at the International Marketing and Output DataBase Conference, Gozd Martuljek, September 18 - 22, 2016.
A quick presentation on the benefits of structured knowledge, focused on Parallax and Freebase, and how their knowledge representation fits into the wider scope of the Semantic Web.
- The document discusses the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) standard which allows interoperability between digital archives and repositories.
- It describes key aspects of the OAI-PMH standard including verbs, identifiers, sets, data and service providers, and harvesting metadata from multiple sources.
- The document also provides an example of implementing OAI-PMH through the CulturaItalia project in Italy which aggregates metadata about artworks in Tuscany from different source repositories.
This document discusses work package 4 (WP4) of a project which involves developing extensions to existing collaboration tools to support diversity. It outlines two tasks: T4.1 which involves developing extensions for tools like WordPress and MediaWiki over three years, and T4.2 which involves producing best practice documents. It then provides more details on the planned extensions for WordPress and MediaWiki, and brainstorms potential scenarios for applying the extensions, such as adding links to related but different opinions in blogs, and checking biases in sources for wiki articles.
The document provides an overview of metadata and how it can be used. It discusses different types of metadata including structural, administrative, and descriptive metadata. It also covers how to create metadata by determining content types and attributes, and identifying functionality. Standards like Dublin Core, RDF/RDFa and Schema.org are examined as sources for metadata fields. The workshop teaches best practices for applying metadata to improve search, browsing and other functions.
The Semantic Web: What IAs Need to Know About Web 3.0 (Chiara Fox Ogan)
The document discusses the Semantic Web and Web 3.0. It defines the Semantic Web as an extension of the current web that makes data on the web more accessible to machines. It explains key concepts needed to realize the Semantic Web like identifying resources with URIs, linking data using RDF triples, using ontologies to define relationships between concepts, and sharing structured data and ontologies. The document provides examples of how semantics are already being used in applications today and how semantics can improve search and allow new types of questions to be asked of linked data.
Building collaborative Machine Learning platform for Dataverse network. Lecture by Slava Tykhonov (DANS-KNAW, the Netherlands), DANS seminar series, 29.03.2022
5 steps to becoming a JISC IE content provider (Andy Powell)
The document outlines steps for content providers to become integrated information environment (IE) providers, including exposing metadata through Z39.50 and OAI-PMH, sharing news/alerts via RSS, becoming an OpenURL source and target, and using persistent identifiers. It discusses technical requirements like supporting machine-to-machine interfaces and authentication, as well as common metadata practices to allow discovery and access across collections. The goal is a more coherent information environment where end-users can discover and access resources across multiple content providers through portals and other services.
Technical integration of data repositories: status and challenges (vty)
This document discusses technical integration of data repositories, including:
- Previous integration initiatives focused on metadata integration using OAI-PMH and ResourceSync protocols, as well as aggregators like OpenAIRE.
- Challenges to integration include different levels of software/service maturity, maintenance of distributed applications, and use of common standards and vocabularies.
- Potential integration efforts could focus on improving FAIRness, metadata/data flexibility, and connections between repositories, software, and computing resources to better enable reuse of EOSC data and services.
This document provides an introduction and overview of MPEG-21. MPEG-21 is an open framework for multimedia delivery and consumption that focuses on content creators and consumers. It aims to define the technology needed to support users in efficiently exchanging, accessing, consuming, trading, and manipulating digital items in an interoperable way. MPEG-21 is structured into multiple parts that cover areas like digital item declaration, identification, intellectual property management and protection, and rights expression.
PwC is a global network of firms providing assurance, tax, and advisory services. This training module covers best practices for designing and developing RDF vocabularies. It discusses modeling data by reusing existing vocabularies when possible, creating sub-classes and properties to specialize existing terms, and defining new terms following common conventions when needed. The module also addresses publishing and promoting vocabularies so they can be reused by others.
interoperability: the value of recombinant potential (lisld)
Interoperability allows for the combining and reuse of resources and data across systems through standards and protocols. It provides economic value by maximizing the use of investments in metadata and content when they can be shared, reused, and recombined. For users, interoperability reduces the technical barriers to accessing and using resources, allowing them to focus on their work.
This module supported the training on Linked Open Data delivered to the EU Institutions on 30 November 2015 in Brussels. https://joinup.ec.europa.eu/community/ods/news/ods-onsite-training-european-commission
Enabling access to Linked Media with SPARQL-MM (Thomas Kurz)
The amount of audio, video and image data on the web is growing immensely, which leads to data management problems rooted in the hidden character of multimedia. Therefore the interlinking of semantic concepts and media data, with the aim of bridging the gap between the document web and the Web of Data, has become common practice and is known as Linked Media. However, the value of connecting media to its semantic metadata is limited by the lack of access methods specialized for media assets and fragments, as well as by the variety of description models in use. With SPARQL-MM we extend SPARQL, the standard query language for the Semantic Web, with media-specific concepts and functions to unify access to Linked Media. In this paper we describe the motivation for SPARQL-MM, present the state of the art of Linked Media description formats and multimedia query languages, and outline the specification and implementation of the SPARQL-MM function set.
Sergio Fernández gave a presentation on Marmotta, an open platform for linked data. He discussed Marmotta's main features like supporting read-write linked data and SPARQL/LDPath querying. He also covered Marmotta's architecture, timeline including joining the Apache incubator in 2012, and how its team of 11 committers from 6 organizations work using the Apache Way process. Fernández encouraged participation to help contribute code and documentation to the project.
Redlink opens the door to the world of semantics by providing simple RESTful APIs, SDKs and plugins for the most common use cases. Existing CMSs can thus seamlessly integrate semantic technologies. The slides also show how multimedia asset management systems can profit from semantic lifting.
Semantic Media Management with Apache Marmotta (Thomas Kurz)
Thomas Kurz gives a presentation on semantic media management using Apache Marmotta. He plans to create a new Marmotta module that supports storing images, annotating image fragments, and retrieving images and fragments based on annotations. This will make use of the Linked Data Platform, Media Fragment URIs, the Open Annotation model, and SPARQL-MM. The goal is to create a Marmotta module and webapp that extends LDP for image fragments and provides a UI for image annotation and retrieval.
Gene Wiki and Wikimedia Foundation SPARQL workshop (Benjamin Good)
This document summarizes a presentation about curating biomedical knowledge on Wikidata and Wikipedia through the Gene Wiki project. The Gene Wiki project develops tools and resources to automatically generate gene pages on Wikipedia using structured data from Wikidata. This centralizes biomedical knowledge on open platforms and allows the data to be queried through SPARQL, powering new applications for biomedical research.
Introduction to Apache Beam (incubating) - DataCamp Salzburg - 7 Dec 2016 (Sergio Fernández)
This document provides an introduction to Apache Beam, a unified programming model for batch and stream data processing. It discusses Beam's programming model including PCollections, transforms, and runners. It also provides examples of writing a basic Beam pipeline in Java and running it on the Direct, Spark, and Flink runners.
This document provides an overview of a training course on RDF, SPARQL and semantic repositories. The training course took place in August 2010 in Montreal as part of the 3rd GATE training course. The document outlines the modules covered in the course, including introductions to RDF/S and OWL semantics, querying RDF data with SPARQL, semantic repositories and benchmarking triplestores.
The document introduces the concept of Linked Data and discusses how it can be used to publish structured data on the web by connecting data from different sources. It explains the principles of Linked Data, including using HTTP URIs to identify things, providing useful information when URIs are dereferenced, and including links to other URIs to enable discovery of related data. Examples of existing Linked Data datasets and applications that consume Linked Data are also presented.
This presentation provides an overview of Linked Data, its underlying principles and applications. It further discusses benefits and business models for enterprises.
Held at the Tiroler IT Tag 2010
ConnectME: connecting content for future TV & video (connectme_project)
Today, media material is increasingly digital and shifting to delivery via IP and the Web, including cultural artifacts and broadcast television. This opens up the possibility of new services deriving added value from such material by combining it with other material elsewhere on the Web which is related to it or enhances it in a meaningful way, to the benefit of the owner of the original content, the providers of the content enhancing it, and the end consumer who can access and interact with these new services. Since the services are built around providing new experiences through connecting different related media together, we consider such services to be Connected Media Experiences (ConnectME).
In particular, much of the rich depth of information and service functionality associated with content in video is not derived today due to a lack of suitably granular description of video, including the linking of video objects to the concepts they represent. For example, news reports about local tourism and events are not linked to tourist information and event-related services which a viewer may (spontaneously) wish to access and make use of.
The technological result of the ConnectME project will be an end-to-end service platform to host those added-value services over different networks, providing the common required functionality of each service: multimedia annotation and subsequent enrichment with related content from the Web, combined with the packaging and delivery of synchronized multimedia presentations to the end device. At the device, intuitive user interfaces must be developed so that the selection of on-screen objects and the browsing of the associated content can be done in a non-disruptive and intuitive fashion. As a result, ConnectME facilitates a new interactive media experience built on top of the convergence of TV/video and the Web.
Linked Open Data combines open data and linked data by making open data available on the web in a way that is machine-readable and semantically interlinked. It uses URIs and RDF to identify things and their properties and relationships, and links data from different sources to enable discovery of related data. Publishing and consuming Linked Open Data allows data sharing and integration to create new knowledge and applications. Key steps involve identifying, cleaning, and publishing data as RDF while linking it to other datasets, then consuming and combining it with other sources. Major Linked Open Data sources include data from governments, Wikipedia, and other organizations.
The document provides methodological guidelines for publishing linked data. It introduces linked data and its key principles of using URIs, HTTP URIs, providing useful information through standards like RDF and SPARQL, and including links between data. The rest of the document outlines guidelines for publishing linked data, including identifying data sources, modeling vocabularies by reusing existing ones, generating RDF data from sources, generating URIs, publishing and linking the RDF data, enabling discovery through mechanisms like CKAN and Sitemaps, and tools that can help with each step of the process.
This document provides an overview of linked data and the semantic web. It discusses moving from a web of documents to a web of data by making data on the web more structured and interconnected. The key aspects covered include using URIs to identify things, providing structured data about those things via standards like RDF, and including links to other related data to improve discovery. The document also explains some of the core technologies involved like RDF, RDF syntaxes, vocabularies for describing data, and publishing and accessing linked data on the web.
NoTube: experimenting with Linked Data to improve user experience (MODUL Technology GmbH)
Vicky Buser presented on the BBC's NoTube project, which uses linked data to improve the user experience of television and online content. Some key ways linked data can help include providing personalized recommendations to help users decide what to watch, offering additional context about programs by semantically enriching metadata, and building infrastructure to support social sharing and discussions around video content. The BBC has applied these techniques in several experiments and applications, and the presentation outlined open challenges and future directions for this work.
This document provides an overview of a tutorial on Linked Data for the Humanities. The tutorial covers Linked Data basics such as its history and building blocks, including URIs, HTTP, RDF, and SPARQL. It also discusses producing and consuming Linked Data, as well as hybrid methods. The tutorial aims to help participants understand URI resolution, experience graph traversal, and grasp content negotiation through hands-on exercises using tools like cURL.
Semantics on services: the story so far (SALAD2015 keynote at ESWC2015) (Sergio Fernández)
Sergio Fernández gave a presentation on services and applications over linked APIs and data. He discussed the evolution of semantic web services and standards like WSMO, OWL-S, and SAWSDL. He also covered REST and how linked data platforms like Hydra and the Linked Data Platform specification are influencing the development of interoperable web APIs. Fernández argued that microservice architectures will be important for building scalable services and that evaluation should involve practical testing against real-world problems.
Nelson Piedra, Janneth Chicaiza and Jorge López (Universidad Técnica Particular de Loja), Edmundo Tovar (Universidad Politécnica de Madrid), and Oscar Martínez (Universitas Miguel Hernández) explore the advantages of using linked data with OERs.
Remixing Media on the Semantic Web (ISWC2014 Tutorial) Pt 2 Linked Media: An... (LinkedTV)
The second session looks at how using Linked Data principles for media fragment annotation publication and retrieval (Linked Media) can enable online media fragment re-use:
Introducing the Linked Media principles
Publishing Linked Media using dedicated multimedia RDF repositories
Retrieval of media resources that illustrate linked data concepts
Using the Linked Data graph to find relevant links between distinct media assets (examples with SPARQL)
Retrieval of links between annotated media to enable topical browsing (using the TVEnricher service)
Examples of Linked Media at scale: VideoLyzard and HyperTED
The Linked Data Platform specification aims to define a set of HTTP protocol extensions for accessing, updating, creating and deleting resources from servers that expose their resources as Linked Data. This presentation looks at how the Linked Data Platform can be used for application integration.
This document discusses the evolution of the web from a web of documents to a web of linked data. It outlines the principles of linked data, which involve using URIs to identify things and linking those URIs to other URIs so that machines can discover more data. RDF is introduced as a standard data model for publishing linked data on the web using triples. Examples of linked data applications and datasets are provided to illustrate how linked data allows the web to function as a global database.
The document summarizes key concepts about linked data and the semantic web. It discusses how linked data uses URIs and RDF to publish structured data on the web in a way that is machine-readable and interconnected. It provides examples of how linked data is being implemented in projects from the UK government and BBC to link disparate data sources on the web. While progress is being made, challenges remain around getting organizations to publish their data as linked open data and proving the business value of doing so.
Media Fragments Indexing using Social Media (LinkedTV)
With more and more video shared on the Web, the practice of sharing a video object from a certain time point (deep-linking) has been implemented by many video sharing platforms. Although so many media fragments are created, annotated and shared, indexing video objects at a fine-grained level on the Web scale is still not implemented by major search engines. To solve this problem, this paper proposes the Twitter Media Fragment Indexer, which monitors Tweet text and uses the embedded URLs pointing to video fragments to massively create an index of media fragments. Preliminary evaluation has shown that media fragments can be successfully indexed at large scale using this system.
This is a presentation from the LIME workshop at ESWC2014.
Linked data demystified: Practical efforts to transform CONTENTdm metadata int... (Cory Lampert)
This document outlines a presentation about transforming metadata from a CONTENTdm digital collection into linked data. It discusses the concepts of linked data, including defining linked data, linked data principles, technologies and standards. It then explains how these concepts can be applied to digital collection records, including anticipated challenges working with CONTENTdm. The document describes a linked data project at UNLV Libraries to transform collection records into linked data and publish it on the linked data cloud. It provides tips for creating metadata that is more suitable for linked data.
The document provides a summary of the LinkedTV project, which aims to seamlessly integrate television and web content. Key points:
- LinkedTV allows viewers to access background information, identify artists/museums from TV shows, and personalize the experience.
- It provides tools for automatic content analysis, enrichment with web data, an editor interface, and companion apps.
- A workflow enriches TV programs with metadata, stores it, and provides access via apps. Two apps were developed with broadcasters.
- The project concludes after 42 months, providing an end-to-end platform and tools to link TV and web content across devices.
LinkedTV Deliverable 9.3 Final LinkedTV Project Report (LinkedTV)
This document comprises the final report of LinkedTV. It includes a publishable summary of the project's scientific results and technological outcomes, a plan for use and dissemination of foreground IP, and a list of dissemination activities (publications and events).
LinkedTV Deliverable 6.5 - Final evaluation of the LinkedTV Scenarios (LinkedTV)
The deliverable presents the results of evaluating the final scenario demonstrators, LinkedNews and LinkedCulture, in the LinkedTV project. We specifically tested user satisfaction with the enriched TV experience we enabled for cultural heritage and news TV programs. We also supported the evaluation of other aspects of the LinkedTV technologies in the trials, specifically personalization and content curation.
LinkedTV Deliverable 5.7 - Validation of the LinkedTV Architecture (LinkedTV)
The LinkedTV architecture lays the foundation for the LinkedTV system. It consists of the integrating platform for the end-to-end functionality, the backend components and the supporting client components. Since the architecture of a software system has a fundamental impact on quality attributes, it is important to evaluate its design. The document at hand reports on the validation of the LinkedTV architecture.
LinkedTV Deliverable 4.7 - Contextualisation and personalisation evaluation a... (LinkedTV)
This deliverable covers all aspects of the evaluation of the overall LinkedTV personalization workflow, as well as re-evaluations of techniques where newer technology and/or algorithmic capacity offer new insight into the general performance. The implicit contextualized personalization workflow, the implicit uncontextualized workflow in the premises of the final LinkedTV application, the advances in context tracking given newly emerged technologies, and the outlook of video recommendation beyond LinkedTV are measured and analyzed in this document.
LinkedTV Deliverable 3.8 - Design guideline document for concept-based presen... (LinkedTV)
This document presents the results of a user study conducted to determine guidelines for selecting relevant entities from news videos to provide additional information about. The study identified the entities users find most interesting in five news videos by extracting candidate entities from various sources and having participants rate them. The results showed that users prefer person and organization entities over locations, and that sources like subtitles alone are insufficient, performing better when combined with expert suggestions or related articles. Wikipedia was found to provide generally useful additional information about the entities. Engineering guidelines are also provided for presenting aggregated web content in news companion applications.
LinkedTV Deliverable 2.7 - Final Linked Media Layer and Evaluation (LinkedTV)
This deliverable presents the evaluation of content annotation and content enrichment systems that are part of the final tool set developed within the LinkedTV consortium. The evaluations were performed on both the Linked News and Linked Culture trial content, as well as on other content annotated for this purpose. The evaluation spans three languages: German (Linked News), Dutch (Linked Culture) and English. Selected algorithms and tools were also subject to benchmarking in two international contests: MediaEval 2014 and TAC'14. Additionally, the Microposts 2015 NEEL Challenge is being organized with the support of LinkedTV.
LinkedTV Deliverable 1.6 - Intelligent hypervideo analysis evaluation, final ... (LinkedTV)
This deliverable describes the evaluation activities conducted to assess the performance of a number of developed methods for intelligent hypervideo analysis and the usability of the implemented Editor Tool for supporting video annotation and enrichment. Based on the performance evaluations reported in D1.4 for a set of LinkedTV analysis components, we extended our experiments to assess the effectiveness of newer versions of these methods as well as of entirely new techniques, concerning the accuracy and the time efficiency of the analysis. For this purpose, in-house experiments and participation in international benchmarking activities were undertaken, and the outcomes are reported in this deliverable. Moreover, we present the results of user trials regarding the developed Editor Tool, where groups of experts assessed its usability and the supported functionalities, and evaluated the usefulness and the accuracy of the implemented video segmentation approaches based on the analysis requirements of the LinkedTV scenarios. With this deliverable we complete the reporting of WP1 evaluations, which aimed to assess the efficiency of the developed multimedia analysis methods throughout the project, according to the analysis requirements of the LinkedTV scenarios.
LinkedTV Deliverable 5.5 - LinkedTV front-end: video player and MediaCanvas A... (LinkedTV)
The LinkedTV media player and API have evolved from a single player and limited API in version 1 to a toolkit that allows rapid development and creation of different kinds of applications within the HTML5/multiscreen space. The main reason for this transition is that during the course of the LinkedTV project different partners had different requirements for their scenarios. Instead of trying to fit all these requirements into one player and, most likely, compromising on the functionalities of the scenarios, we wanted to offer something that would allow all partners a satisfactory solution.
Therefore the Springfield Multiscreen Toolkit, or SMT for short, has been developed. The aim of the SMT is to allow flexibility in developing multiscreen applications. Also, from a commercial point of view, a toolkit with examples is more interesting than a pure player, as it gives the freedom to develop new ideas with the LinkedTV platform.
LinkedTV - an added value enrichment solution for AV content providers (LinkedTV)
Linked Television offers a solution for audiovisual content owners to semi-automatically enrich media with links to additional information and content related to objects and topics in the program, and to build client applications which access this data and provide new added-value services to consumers.
LinkedTV tools for Linked Media applications (LIME 2015 workshop talk) (LinkedTV)
A brief introduction to tools from the LinkedTV project which can be used together to build new media applications based on conceptual linking of media fragments.
This document provides an overview of the LinkedTV project and its key outputs. The LinkedTV platform enables automatic analysis, annotation and enrichment of TV content with links to related web content. It includes tools for media analysis, annotation and enrichment, as well as an editor tool for human curation. The platform then provides enriched metadata via APIs to power personalized LinkedTV applications on multiple screens. Example applications described are LinkedNews and LinkedCulture. The document promotes the benefits of LinkedTV for content owners and broadcasters to engage viewers with enriched TV content.
LinkedTV Deliverable D4.6 Contextualisation solution and implementation (LinkedTV)
This deliverable presents the WP4 contextualisation final implementation. As contextualization has a high impact on all the other modules of WP4 (especially personalization and recommendation), the deliverable intends to provide a picture of the final WP4 workflow implementation.
LinkedTV Deliverable D3.7 User Interfaces selected and refined (version 2) (LinkedTV)
This report describes the LinkedTV user interfaces. Based on the results of user studies and the initial evaluation of the year 2 prototype, we selected and refined the interfaces. We selected a single-screen application that uses HbbTV technology to provide additional information about a TV program as an overlay on the TV broadcast. In addition, we worked towards TV program companion applications that are tailored for two domains: news and cultural heritage. With these applications we demonstrate different types of interaction modes, such as synchronized content on a second screen, and bookmarking chapters combined with the exploration of related content after the program. The interfaces are built on top of the Multiscreen Toolkit. We created a component-based infrastructure that allows us to quickly create tailored companion applications by reusing and configuring interface components. In the final part of the project we finalize this approach and test it by applying it to a new domain.
LinkedTV Deliverable D2.6 LinkedTV Framework for Generating Video Enrichments... (LinkedTV)
This deliverable describes the final LinkedTV framework that provides a set of possible enrichment resources for seed video content using techniques such as text and web mining, information extraction and information retrieval technologies. The enrichment content is obtained from four types of sources: a) by crawling and indexing web sites described in a white list specified by the content partners; b) by querying the API or SPARQL endpoint of the Europeana digital library network, which is publicly exposed; c) by querying multiple social networking APIs; d) by hyperlinking to other parts of TV programs within the same collection using a Solr index. This deliverable also describes an additional content annotation functionality, namely labelling enrichment (as well as seed) content with thematic topics, as well as the process of exposing content annotations to this module and to the filtering services of LinkedTV's personalization workflow. We illustrate the enrichment workflow for the two main scenarios of LinkedTV, which have led to the development of the LinkedCulture and LinkedNews applications, which respectively use the TVEnricher and TVNewsEnricher enrichment services. The original title of this deliverable in the DoW was Advanced concept labelling by complementary Web mining.
LinkedTV Deliverable D1.5 The Editor Tool, final release (LinkedTV)
This document reports on the design and implementation of the final version of the editor tool (ET) v2.0, where its purpose is to serve the program editing teams of broadcasters that have adopted LinkedTV’s interactive television solution into their workflow. Two of these teams are currently represented in the LinkedTV project, namely the RBB team and the AVROTROS team (formerly known as AVRO).
The main purpose of the ET is to provide a means to correct and curate automatically generated annotations and hyperlinks created by the audiovisual and textual analysis technologies developed in WP 1 and 2 of the LinkedTV project. Without the intervention of human editors to correct this data, there is a reasonable risk of exposing inappropriate, incorrect or irrelevant information to the viewers of a LinkedTV interactive broadcast.
LinkedTV Deliverable D1.4 Visual, text and audio information analysis for hyp... (LinkedTV)
Having extensively evaluated the performance of the technologies included in the first release of the WP1 multimedia analysis tools, using content from the LinkedTV scenarios and by participating in international benchmarking activities, concrete decisions were made regarding the appropriateness and the importance of each individual method or combination of methods. Combined with an updated list of information needs for each scenario, this led to a new set of analysis requirements that had to be addressed by the release of the final set of analysis techniques of WP1. To this end, coordinated efforts in three directions, including (a) the improvement of a number of methods in terms of accuracy and time efficiency, (b) the development of new technologies and (c) the definition of synergies between methods for obtaining new types of information via multimodal processing, resulted in the final set of multimedia analysis methods for video hyperlinking. Moreover, the different developed analysis modules have been integrated into a web-based infrastructure, allowing the fully automatic linking of the multitude of WP1 technologies and the overall LinkedTV platform.
LinkedTV D8.6 Market and Product Survey for LinkedTV Services and Technology (LinkedTV)
D8.6 presents the results of the market analysis for LinkedTV products and services and consists of two parts: an overall analysis of current and future developments in the TV and digital video market, and a specific market analysis of potential LinkedTV customers and competitors. Based on the market analysis it was possible to provide a first rough estimation of the LinkedTV market potential and to position LinkedTV on the market.
This deliverable presents the LinkedTV Public Demonstrator which will be an online, publicly accessible Website collecting showcases of the key project outputs which form together our LinkedTV solution: the Editor Tool, Platform and Player, complemented by demonstrations of the provision of this solution for the content of two European broadcasters: the LinkedCulture and LinkedNews scenario demonstrators.
Linking Media and Data using Apache Marmotta (LIME workshop keynote)
1. Linking Media and Data using Apache Marmotta
Keynote at LIME 2014 Workshop
Sebastian Schaffert and Thomas Kurz
2. Contents
➔ Motivation: The Red Bull Content Pool
➔ Background:
➔ Linked Media Principles
➔ Media Fragments and Media Ontology
➔ Implementation: Linked Media Framework
➔ Red Bull Use Case
➔ ConnectME Use Case
➔ Standardising: The Linked Data Platform
➔ Introducing Apache Marmotta
➔ Querying for Multimedia Fragments: SPARQL-MM
(Timeline slides: 2009, 2011, 2013, 2014)
5. Motivation: The Red Bull Content Pool
➔ online archive containing video and image material related to extreme sports events organised by Red Bull
➔ business-to-business portal where journalists can get material for further broadcasting (mostly for free)
➔ material comes with metadata in the form of tables in Word documents:
➔ interview transcriptions (with time interval start/end second)
➔ scene descriptions (with time interval start/end second)
➔ music cue sheets (copyright information about background music tracks)
7. Motivation: The Red Bull Content Pool
➔ Problems:
➔ videos consist of series of scenes with many different persons
➔ scanning through a video to find a particular scene is a huge amount of work
➔ metadata is valuable but not really exploited for searching videos and while playing videos
8. Can we help Markus?
Name: Markus
Occupation: sports journalist
Company: RegioTV Pinzgau
Objective: create report about cliff diving
Requires: videos, background info, contacts
How can we help Markus?
efficient and precise search in the Red Bull Content Pool
compact and relevant display of background information
contacts (e.g. website, email) of athletes, other journalists, etc.
fast and successful creation of the report
10. Linked Media Principles (2009)
➔ Linked Data is „read-only“
i.e. focus was on publication of big datasets, not the interaction with data
a system for managing media assets needs to be capable of updating resources and their metadata
➔ Linked Data is „data-only“
i.e. a resource is represented either as RDF metadata for machines or as HTML tables for humans, but in all cases it is metadata and not content
a system for managing media assets needs to be capable of managing both media content and metadata about that content
11. Linked Media Principles (2009)
➔ extend Linked Data for updates using REST principles (HTTP):
➔ GET: returns a resource (as in Linked Data)
➔ POST: creates a new resource and uploads content or metadata
➔ PUT: updates content or metadata of a resource
➔ DELETE: removes a resource and all associated information
➔ extend Linked Data for arbitrary media formats using MIME:
➔ controlled by Accept: (in case of GET) and Content-Type: (in case of PUT/POST) HTTP headers
➔ header value: MIME type (e.g. text/turtle or image/jpeg) and type of relationship (e.g. rel=content or rel=meta)
➔ accessing a resource with GET or PUT redirects to the actual representation specified by MIME type and relationship
12. Linked Media Principles (2009)
➔ Example 1: Retrieve HTML table representation of resource metadata

GET http://data.redlink.io/resource/1234
Accept: text/html; rel=meta

➔ Example 2: Retrieve HTML content of resource

GET http://data.redlink.io/resource/1234
Accept: text/html; rel=content

➔ Example 3: Update resource metadata

PUT http://data.redlink.io/resource/1234
Content-Type: text/turtle; rel=meta

<http://data.redlink.io/resource/1234>
mm:hasFragment <http://data.redlink.io/resource/1234#t=0,10> .
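A fourth request in the same style (not on the original slides, but following the rel=content convention described above; the image payload is a hypothetical example) would replace the media content itself rather than the metadata:

PUT http://data.redlink.io/resource/1234
Content-Type: image/jpeg; rel=content

The request body would carry the raw JPEG bytes; a subsequent GET with Accept: image/jpeg; rel=content would retrieve the content again.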
14. Media Fragments URI
➔ media content currently treated as „black box binary content“
➔ interaction only via plugin or special browser support
➔ linking to a subsequence of a video not possible
➔ Media Fragments URI: use the „fragment“ part of a URI to encode temporal and spatial subsequences
➔ Examples:
Identify the sequence from second 3 to second 10 of the video:
http://data.redlink.io/resource/cliff_diving.ogg#t=3,10
Identify the spatial box 320x240 at x=160 and y=120 of the video:
http://data.redlink.io/resource/cliff_diving.ogg#xywh=160,120,320,240
15. Ontology for Media Resources
➔ common data model for representing video metadata:
➔ identification
➔ creation (hasCreator, hasPublisher, ...)
➔ content description (hasLanguage, hasGenre, hasKeyword, ...)
➔ rights and distribution (hasPermissions, hasTargetAudience, ...)
➔ technical properties (hasCompression, hasFormat, ...)
➔ fragments (hasFragment, hasChapter, ...)
➔ mapping tables from the most popular video metadata formats to the Ontology for Media Resources (EXIF, MPEG-7, TV-Anytime, YouTube, ID3)
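As a sketch of how these properties can be queried (not from the slides; the data shape, with keywords modelled as concepts carrying an rdfs:label, is an illustrative assumption), a SPARQL query for videos with a given keyword and their creators might look like this:

PREFIX ma: <http://www.w3.org/ns/ma-ont#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

# Find media resources whose keyword is labelled "cliff diving",
# together with their creators, using Media Ontology properties.
SELECT ?video ?creator WHERE {
  ?video ma:hasKeyword ?kw ;
         ma:hasCreator ?creator .
  ?kw rdfs:label "cliff diving" .
}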
16. Combining Media Fragments and Media Ontology
➔ use Media Fragment URIs to uniquely identify fragments of media content
➔ browser compatibility
➔ Linked Data compatibility
➔ use Ontology for Media Resources to describe these fragments
➔ RDF compatibility
➔ rich description graph with SPARQL querying
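Putting the two together, a minimal sketch (reusing the cliff-diving URIs from above; the fragment and its subject are hypothetical annotations) registers a temporal fragment and describes it in RDF via SPARQL Update:

PREFIX ma: <http://www.w3.org/ns/ma-ont#>
PREFIX dct: <http://purl.org/dc/terms/>

# Declare a temporal fragment (Media Fragments URI syntax in the hash part)
# and describe it with the Ontology for Media Resources.
INSERT DATA {
  <http://data.redlink.io/resource/cliff_diving.ogg>
      ma:hasFragment <http://data.redlink.io/resource/cliff_diving.ogg#t=3,10> .
  <http://data.redlink.io/resource/cliff_diving.ogg#t=3,10>
      a ma:MediaFragment ;
      dct:subject <http://dbpedia.org/resource/Cliff_diving> .
}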
20. Behind the Scenes: Linked Media Framework
Linked Data Server with updates and uniform management of content and metadata => particularly well-suited for multimedia content and metadata!
Linked Media Principles for resource-centric access to content and metadata
SPARQL Query and SPARQL Update 1.1 for structural updating and querying
Modules for Reasoning, Semantic Search, Linked Data Caching, Versioning, and Social Media
Specialised in Linked Media and Linked Enterprise Content
Code, Installer, Screencasts and more:
http://code.google.com/p/lmf/
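To illustrate the SPARQL Update support (not from the slides; the resource URI is reused from the earlier examples and the new title is hypothetical), a request replacing a resource's title could look like this:

PREFIX dct: <http://purl.org/dc/terms/>

# Replace the title of a resource: remove any old value, insert the new one.
DELETE { <http://data.redlink.io/resource/1234> dct:title ?old }
INSERT { <http://data.redlink.io/resource/1234> dct:title "Cliff Diving World Series" }
WHERE  { OPTIONAL { <http://data.redlink.io/resource/1234> dct:title ?old } }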
22. LMF Semantic Search
Faceted Search over Content and Metadata with a SOLR-compatible API
RDF Path Language for configurable Metadata Indexing
Multiple Cores with different configurations to adapt to different search requirements
23. LMF Reasoning
Rule-based reasoning over triples in the LMF triple store to represent implicit knowledge
Reason maintenance allows justifications for inferences to be described
adapted version of the sKWRL rule language: more efficient implementation, improved reason maintenance
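The slides do not show sKWRL syntax; as a rough illustration of the kind of implicit knowledge such rules derive (transitive closure here, over a hypothetical SKOS hierarchy), the same inference can be expressed at query time with a SPARQL 1.1 property path:

PREFIX skos: <http://www.w3.org/2004/02/skos/core#>

# All direct and inferred broader concepts of a given concept:
# skos:broader+ follows one or more skos:broader links transitively.
SELECT ?ancestor WHERE {
  <http://example.org/concept/cliff-diving> skos:broader+ ?ancestor .
}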
24. LMF Linked Data Caching
transparently retrieves linked resources from the Linked Data cloud when needed (e.g. LD Path or SPARQL query)
powerful component for integrating with other information systems exposing their data as Linked Media or Linked Data
adapters for services offering their data in proprietary formats (e.g. YouTube, Vimeo, …)
25. LMF Classification and Sentiment Analysis
support for statistical text classification; allows training different classifiers with sample texts for arbitrary categories
suggests the most likely category for a text according to similarity with training data
analyses text for positive or negative sentiment (German and English)
26. LMF Social Media Integration
allows linking to social media resources, e.g. Facebook or Google accounts, videos, interests
allows authentication and data import from selected social media services (Facebook, YouTube, generic RSS)
27. LMF Versioning
keeps history of updates in the Linked Media Framework
provides information for trust and provenance of data, e.g. annotations added to the system
34. Linked Data Platform: Introduction
➔ draft recommendation of the LDP working group at W3C
➔ support for „read/write Linked Data“
➔ support for RDF and non-RDF resources
➔ can be used as an alternative to the Linked Media Principles
➔ advantage of standardisation and wide adoption
➔ considerably more complex standard and protocol
➔ URL: http://www.w3.org/TR/ldp/
35. Linked Data Platform: Concepts
➔ access and interaction according to REST webservice principles
➔ GET: returns description of a resource
➔ POST: creates a new resource
➔ PUT: replaces the description of a resource
➔ DELETE: removes the description of a resource
➔ Linked Data Platform Resources (LDP-R)
➔ RDF resources (LDP-RS): RDF description of a resource
➔ non-RDF resources (LDP-NR): arbitrary (media) content
➔ Linked Data Platform Containers (LDP-C)
➔ collection of LDP resources, e.g. „students“, „professors“, „lectures“
➔ basic container (LDP-BC): simple collection of resources with common URI prefix
➔ direct container (LDP-DC): collection with explicit membership (as triple)
➔ indirect container (LDP-IC): collection with implicit membership (based on content)
36. LDP Basic Containers (LDP-BC)
➔ collection of LDP resources
➔ identification via common URI prefix, e.g.
http://example.com/container1/a
http://example.com/container1/b
➔ can contain both RDF and non-RDF resources at the same time
➔ container is itself an RDF resource
➔ description as RDF:
@base <http://example.com/container1/> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix ldp: <http://www.w3.org/ns/ldp#> .
<>
a ldp:BasicContainer ;
dcterms:title "A very simple container" ;
ldp:contains <a>, <b>, <c> .
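Once such a description is stored, container membership can be queried like any other RDF; a minimal sketch against the container above:

PREFIX ldp: <http://www.w3.org/ns/ldp#>

# List all resources contained in the basic container.
SELECT ?member WHERE {
  <http://example.com/container1/> ldp:contains ?member .
}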
38. Apache Marmotta
➔ a simplification of the Linked Media Framework taking core components:
➔ Linked Data Server with SPARQL 1.1
➔ Linked Data Cache
➔ Versioning, Reasoning
➔ no search, no content analysis
➔ reference implementation of the Linked Data Platform and participation in the W3C working group
➔ highly modular and extensible to build custom Linked Data applications (both client and server)
http://marmotta.apache.org
41. SPARQL-MM: Introduction
➔ extension of SPARQL with specific multimedia functions and relations, implemented in Apache Marmotta

           Relation Function      Aggregation Function
Spatial    mm:rightBeside         mm:spatialIntersection
           mm:spatialOverlaps     mm:spatialBoundingBox
           …                      …
Temporal   mm:after               mm:temporalIntersection
           mm:temporalOverlaps    mm:temporalIntermediate
           …                      …
Combined   mm:overlaps            mm:boundingBox
           mm:contains            mm:intersection

A list of all functions can be found at:
https://github.com/tkurz/sparql-mm/blob/master/sparql-mm/functions.md
42. SPARQL-MM: A sample query
Give me the spatio-temporal snippet that shows Lewis Jones right beside Connor Macfarlane.

PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX mm: <http://linkedmultimedia.org/sparqlmm/functions#>
PREFIX ma: <http://www.w3.org/ns/ma-ont#>
PREFIX dct: <http://purl.org/dc/terms/>

SELECT (mm:boundingBox(?l1,?l2) AS ?two_guys) WHERE {
  ?f1 ma:locator ?l1 ; dct:subject ?p1 .
  ?p1 foaf:name "Lewis Jones" .
  ?f2 ma:locator ?l2 ; dct:subject ?p2 .
  ?p2 foaf:name "Connor Macfarlane" .
  FILTER mm:rightBeside(?l1,?l2)
  FILTER mm:temporalOverlaps(?l1,?l2)
}
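Along the same lines, a purely temporal variant (a sketch using only functions from the table on the previous slide, against the same hypothetical annotations) returns the interval in which both athletes are on screen:

PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX mm: <http://linkedmultimedia.org/sparqlmm/functions#>
PREFIX ma: <http://www.w3.org/ns/ma-ont#>
PREFIX dct: <http://purl.org/dc/terms/>

# The temporal intersection of the two locators: the time span
# during which both persons appear.
SELECT (mm:temporalIntersection(?l1,?l2) AS ?both_on_screen) WHERE {
  ?f1 ma:locator ?l1 ; dct:subject ?p1 .
  ?p1 foaf:name "Lewis Jones" .
  ?f2 ma:locator ?l2 ; dct:subject ?p2 .
  ?p2 foaf:name "Connor Macfarlane" .
  FILTER mm:temporalOverlaps(?l1,?l2)
}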
46. Conclusions
➔ semantic media asset management requires management of, and interaction with, both content and metadata
➔ the Linked Media Principles (2009) were a first approach to extend Linked Data with support for semantic media asset management
➔ the Linked Data Platform (W3C working draft) supersedes the Linked Media Principles, as it covers the same aspects and more
➔ semantic media asset management requires specific media access and querying:
➔ Media Fragments URI (W3C) to identify media fragments
➔ Ontology for Media Resources (W3C) to describe media fragments
➔ SPARQL-MM to query media fragment descriptions