This document provides an introduction to the Semantic Web and RDF (Resource Description Framework). It discusses how the Semantic Web aims to extend the current web by giving data well-defined meaning to enable computers and people to better work together. It introduces RDF as a standard for representing information in the Semantic Web and provides examples of how RDF can be used to represent different types of data, such as relational data and evolving data scenarios.
Is developing mash-ups with Web 2.0 really much easier than using Semantic Web technologies? For instance, given a music style as input, what does it take to retrieve data from online music archives (MusicBrainz, MusicBrainz D2R Server, MusicMoz) and event databases (EVDB)? What does it take to merge them and let users explore the results? Are Semantic Web technologies up to this Web 2.0 challenge? This half-day tutorial shows how to realize a Semantic Web application we named Music Event Explorer, or meex for short (try it!).
The document discusses the history and evolution of the World Wide Web from Web 1.0 to the present. It suggests that Web 3.0, also called the Semantic Web, will connect online and offline data through technologies like the semantic web, cloud computing, and microformats to allow machines to better understand web pages. Key aspects of Web 3.0 may include fewer dedicated email services, connecting currently separated social networks and data silos, and giving users more control over their online experiences and data through browser-based applications.
Creating Semantic Mashups: Bridging Web 2.0 and the Semantic Web (Presentation 1), by jward5519
The document discusses creating semantic mashups by bridging Web 2.0 and the Semantic Web. It provides examples of how semantic data can be used flexibly across different domains and applications. The key benefits of semantic data include increased utility of applications by helping users complete tasks more easily, and allowing for greater efficiency by reusing existing data sources where possible.
The document discusses the benefits of a federated and decentralized approach to knowledge and data on the web. It argues that centralized approaches like Big Data fail at web scale, as knowledge is inherently distributed and heterogeneous. A federated future based on light interfaces like Triple Pattern Fragments is envisioned, one where clients can query multiple data sources simultaneously for better performance and reliability compared to centralized endpoints. Serendipity and realistic expectations are important principles for this vision.
Optimizing RDF Data Cubes for Efficient Processing of Analytical Queries, by Kim Ahlstrøm
The document discusses optimizing RDF data cubes for efficient processing of analytical queries. It aims to denormalize cube dimensions to reduce expensive subject-object joins and increase subject-subject joins. This is done through an algorithm that takes snowflake pattern RDF data cubes as input and outputs star pattern or fully denormalized pattern RDF data cubes. The goal is to enable more efficient analytical querying on RDF data cubes through optimization techniques like denormalization.
The document discusses requirements and approaches for RDF stream processing (RSP). RSP aims to process continuous RDF streams to address scenarios like sensor data and social media: it involves querying streaming data, integrating streams with static data, and handling issues like imperfections. The document reviews existing RSP systems and languages, actor-based approaches, and the eight requirements for real-time stream processing, including keeping data moving, generating predictable outcomes, and responding instantaneously.
This document discusses RDF stream processing and the role of semantics. It begins by outlining common sources of streaming data on the internet of things. It then discusses challenges of querying streaming data and existing approaches like CQL. Existing RDF stream processing systems are classified based on their query capabilities and use of time windows and reasoning. The role of linked data principles and HTTP URIs for representing streaming sensor data is discussed. Finally, requirements for reactive stream processing systems are outlined, including keeping data moving, integrating stored and streaming data, and responding instantaneously. The document argues that building relevant RDF stream processing systems requires going beyond existing requirements to address data heterogeneity, stream reasoning, and optimization.
Tomas Knap | RDF Data Processing and Integration Tasks in UnifiedViews: Use C..., by semanticsconference
UnifiedViews is an ETL tool that allows users to define, execute, monitor, and share RDF data processing tasks. Three use cases are described where UnifiedViews was used to extract and annotate data from Atlassian and World Bank documents, and to create a publication tracker for Boehringer Ingelheim. Key benefits highlighted include the easy pipeline management interface, reuse of predefined plugins, and support for pipeline validation and debugging.
This document provides an overview of RDF stream processing and existing RDF stream processing engines. It discusses RDF streams and how sensor data can be represented as RDF streams. It also summarizes some existing RDF stream processing query languages and systems, including C-SPARQL, and the features they support like continuous execution, operators, and time-based windows. The document is intended as a tutorial for developers on working with RDF stream processing.
The third lecture of the course I'm giving on "Interoperability and Semantic Technologies" at Politecnico di Milano in the academic year 2015-16. It presents an introduction to the Semantic Web, taking a brief walk through its 15 years of research, standardisation and industrial uptake.
This document provides an introduction to the Semantic Web, covering topics such as what the Semantic Web is, how semantic data is represented and stored, querying semantic data using SPARQL, and who is implementing Semantic Web technologies. The presentation includes definitions of key concepts, examples to illustrate technical aspects, and discussions of how the Semantic Web compares to other technologies. Major companies implementing aspects of the Semantic Web are highlighted.
The speaker discusses the semantic web and its potential to make data on the web smarter and more connected. He outlines several approaches to semantics like tagging, statistics, linguistics, semantic web, and artificial intelligence. The semantic web allows data to be self-describing and linked, enabling applications to become more intelligent. The speaker demonstrates a prototype semantic web application called Twine that helps users organize and share information about their interests.
The document discusses the development of the Semantic Web, which extends the current web to a web of data through the use of metadata, ontologies, and formal semantics. It describes key technologies like the Resource Description Framework (RDF) and Web Ontology Language (OWL) that add machine-readable meaning to web documents. The Semantic Web aims to enable machines to process and understand the semantics of information on the web.
The document discusses how museums can better connect and share their data online by exposing their structured collection data through technologies like XML, RDF, and semantic standards. This will allow for aggregation of data across museums, new ways for users to access and reuse museum data, and more opportunities for machine-to-machine integration and connections between cultural heritage institutions. While the full vision of the "Semantic Web" may not yet be realized, making museum data available in open, structured, and standardized ways online can provide immediate benefits.
The document introduces the Semantic Web and its goals of making web content machine-readable through the use of ontologies and semantic annotations. It describes the evolution of the web from human-readable documents and links to machine-processable data through technologies like XML, RDF, and OWL. It outlines current work by the W3C to develop standards and an active working group to develop the Semantic Web.
There has been plenty of hype around the Semantic Web, but will we ever see the vision of intelligent agents working on our behalf? This talk introduces the concepts of the Semantic Web as envisioned by Tim Berners-Lee over 10 years ago and compares that vision to where we have come since then. It includes a discussion of implementations such as XML, RDF, OWL (Web Ontology Language), and SPARQL. After reviewing the design principles and enabling technologies, I plan to show how these techniques can be implemented in WebGUI.
The document provides an introduction to the semantic web, discussing its development from earlier metadata standards like Dublin Core. It explains the limitations of XML for representing semantics and the need for shared ontologies. The semantic web aims to add formal semantics to web content to enable software agents to process web resources like humans. Key technologies include RDF, RDF Schema, and DAML+OIL. Challenges include complexity, industry adoption, and trust.
These slides were presented as part of a W3C tutorial at the CSHALS 2010 conference (http://www.iscb.org/cshals2010). The slides are adapted from a longer introduction to the Semantic Web available at http://www.slideshare.net/LeeFeigenbaum/semantic-web-landscape-2009 .
A PDF version of the slides is available at http://thefigtrees.net/lee/sw/cshals/cshals-w3c-semantic-web-tutorial.pdf .
The document discusses the evolution of the semantic web from its origins in military technology to its current use in commercial applications. It describes how semantic web standards like RDF, RDFS, and OWL were developed and how the semantic web has transformed in areas like markets, linked data, and scaling. The talk outline focuses on the origins of the semantic web, key developments through 2010, transformations in three application areas, related markets and companies, and the linked data and scaling revolution.
The document discusses the need for a Semantic Web to address information overload on the current web. It explains that the Semantic Web aims to understand the meaning behind web pages by embedding semantics through techniques like RDF and microformats. This will allow computers to better understand and filter information, leading to a smarter online experience for users where they spend less time searching and viewing irrelevant content. Approaches to building the Semantic Web include bottom-up annotation of existing web pages and top-down extraction of entities from pages using natural language processing tools. Linked Data is seen as a key enabler of the Semantic Web by establishing linkages between data.
- Nova Spivack discussed the potential of the semantic web to connect all information on the web and enable more intelligent applications through semantic metadata and connections between data.
- He described different approaches to adding semantics like tagging, linguistics, and semantic web standards, advocating for a hybrid approach.
- Spivack introduced Twine, a new service he created to manage and share structured information on the web using semantics. Twine automatically organizes and links content.
The document discusses the history and development of the Semantic Web over the past 20 years. It begins with Tim Berners-Lee originally conceiving of the Semantic Web in 1994 with a vision of machines being able to understand web documents and perform tasks like property transfers. Since then, there have been over 200 talks on the Semantic Web, but the focus was initially on technologies like XML, RDF, and OWL. More recently, Linked Data and RDFa have seen the most usage in applications while the ontology story remains unclear. Moving forward, bridging the gaps between linked data and formal ontology views will require addressing challenges like modeling incomplete and decentralized data at web-scale.
Recognos is a semantic technology company established in 1999 with offices in California and Romania. They have 70 employees conducting research and development into semantic technologies. Their applications include finance, CRM, life sciences, and more. Semantic technology aims to teach machines human reasoning by representing knowledge as statements describing concepts, logic, and relationships. This allows for integrated querying across structured and unstructured data sources. Recognos can help companies like Netflix develop semantic applications such as integrated search across data sources and detecting similarities in film descriptions.
The document discusses the Semantic Web, which refers to extending the current web by giving information well-defined meaning that computers can understand. It describes the evolution of the web from Web 1.0 to 3.0 and outlines key components that enable the Semantic Web like URIs, RDF, RDFS, OWL, and SPARQL. The technology brings benefits like improved search, interoperability, and opportunities for applications in areas like healthcare, e-learning, and more. Realizing its full potential will take generating vocabularies and developing applications that make use of shared semantic data.
Amit P. Sheth, “Relationships at the Heart of Semantic Web: Modeling, Discovering, Validating and Exploiting Complex Semantic Relationships,” Keynote at the 29th Conference on Current Trends in Theory and Practice of Informatics (SOFSEM 2002), Milovy, Czech Republic, November 22–29, 2002.
Keynote: http://www.sofsem.cz/sofsem02/keynote.html
Related paper: http://knoesis.wright.edu/?q=node/2063
Intro to the Semantic Web Landscape - 2011, by LeeFeigenbaum
An introduction to the Semantic Web landscape as it stands near the end of 2011. Includes an introduction to the core technologies in the Semantic Web technology stack.
This material was presented at the November, 2011, Cambridge Semantic Web meetup.
The document discusses different approaches for managing continuous data streams and flows of various sizes and speeds. It describes four types of streams: myriads of tiny flows that can be collected; continuous massive flows that cannot be stopped; continuous numerous flows that can turn into a torrent; and myriads of continuous flows of any size and speed that form an immense delta. It then outlines how each stream type can be managed using different data systems, including data stream management systems, time-series databases, event-based systems, and event-driven architectures.
Stream reasoning is an approach that blends artificial intelligence and stream processing to make sense of multiple, heterogeneous data streams in real-time. It allows querying and reasoning over data streams using ontologies to represent streaming data. Deductive stream reasoning uses rules and ontologies while inductive stream reasoning uses machine learning to continuously learn from streaming data and adapt to concept drift. Stream reasoning has been studied in over 1000 scientific papers in the last 12 years and shows promise in addressing the challenges of volume, velocity, variety and veracity in big streaming data.
While the state of the art in Machine Learning offers practitioners effective techniques to deal with static data sets, there are only academic results tailored to data streams. In this presentation for the 4th Stream Reasoning workshop, I report on an effort by Alessio Bernardo (a student of mine) to set up a benchmark environment to (i) repeat academic results, (ii) perform studies on real data to confirm the academic results, and (iii) study the research problem of "incremental rebalancing learning on evolving data streams".
HiPPO and Flipism are no longer the only ways to make decisions. In the Big Data / Data Science era one can dream of a data-driven organization. If data were "oil", Big Data technologies would extract, transport, and store it, while Data Science methods would provide a way to "refine the crude oil". This presentation elaborates on the Ws (What, Why, When, Who and How) of Big Data and Data Science.
From the semantic interoperability problem to Google's knowledge graph, passing through the Semantic Web, Linked Data, Yahoo! SearchMonkey, Facebook Open Graph, and schema.org.
La Città dei Balocchi, with its lights, is a key event in Lombardy's Christmas tourism offering. The presentation reports the results of an analysis of who came and when.
Built by Fluxedo srl and Olivetti spa for Consorzio Como Turistica, with the collaboration of Politecnico di Milano, TIM and the Municipality of Como, in the context of the CrowdInsights project funded by EIT Digital.
Stream Reasoning: a summary of ten years of research and a vision for the nex..., by Emanuele Della Valle
Stream reasoning studies the application of inference techniques to data characterised by being highly dynamic. It can find application in several settings, from Smart Cities to Industry 4.0, from the Internet of Things to Social Media analytics. This year stream reasoning turns ten, and this talk analyses its growth. In the first part, it traces the main results obtained so far by presenting the most prominent studies: it starts with an overview of the most relevant studies developed in the context of the semantic web, and then extends the analysis to include contributions from adjacent areas, such as databases and artificial intelligence. Looking at the past is useful to prepare for the future: the second part presents a set of open challenges and issues that stream reasoning will face in the near future.
ACQUA: Approximate Continuous Query Answering over Streams and Dynamic Linked..., by Emanuele Della Valle
Emanuele Della Valle presented ACQUA, an approach for approximately continuously answering SPARQL queries over streams and dynamic linked data sets. ACQUA uses a stream processing engine that registers queries once and continuously executes them. It handles queries with windows and service clauses by joining stream data with linked data using local replicas that are approximated and maintained under update budget constraints. Experimental results showed that different update policies in ACQUA perform best under different conditions depending on query selectivity. Future work may expand the types of supported queries and study different data trends.
Stream reasoning: an approach to tame the velocity and variety dimensions of ..., by Emanuele Della Valle
Big Data tech can tame volume and velocity. Taming variety in the presence of volume and velocity is the real challenge. I have been working on taming variety and velocity simultaneously (Stream Reasoning) for 10 years now. In this talk, I give some examples of application domains where this is necessary. I explain how far the Stream Reasoning community has come so far in theory, applications and products. In particular I focus on my applications and my startup Fluxedo, which is offering real-time social media analytics across social networks. I conclude the talk discussing what comes next: 1) the need to focus on languages and abstractions able to easily capture user needs; 2) the need to find the sweet spot between scalability and expressive semantics; 3) the need to use semantics to model more than the data access; and 4) the need to get over imperfect data. If you are excited, I did my job for today!
Everybody talks about Big Data, but why? Does it create value? Does it enable some paradigmatic shifts in the way we work with data? This talk, which I gave at the ComoNext research and technology park, casts some light on those questions.
Listening to the pulse of our cities with Stream Reasoning (and few more tech..., by Emanuele Della Valle
The document describes a system called CitySensing that analyzes social media and call data records to detect patterns and anomalies during large city-scale events like Milan Design Week. It continuously monitors these data streams, identifies anomalous levels of activity in different city neighborhoods, extracts relevant hashtags and entities from social media posts, and visualizes the insights for event managers and the public. The system uses stream reasoning to handle the high velocity and variety of the fused data sources in real-time. It was evaluated during Milan Design Week to understand crowd dynamics and activity across the city.
The fourth lecture of the course I'm giving on "Interoperability and Semantic Technologies" at Politecnico di Milano in the academic year 2015-16. It presents an introduction to RDF. It starts by presenting the data model. Then it presents the Turtle serialization. It compares XML vs. RDF. Finally, it provides some information about RDFa and Linked Data.
The second lecture of the course I'm giving on "Interoperability and Semantic Technologies" at Politecnico di Milano in the academic year 2015-16. It discusses interoperability using HL7 v2 and v3 as examples of syntactic and semantic interoperability, respectively.
IST16-01 - Introduction to Interoperability and Semantic Technologies, by Emanuele Della Valle
This document introduces a course on interoperability and semantic technologies. It defines interoperability and its different levels, including functional and semantic interoperability. It also discusses challenges related to standardization in healthcare like the variety of standards, the need for translation between standards, and the high costs of a lack of interoperability. Finally, it presents how semantic technologies like RDF, SPARQL and OWL can help address these challenges by providing flexible models and languages that can embrace change and translation.
Stream reasoning: mastering the velocity and the variety dimensions of Big Da..., by Emanuele Della Valle
More and more applications require real-time processing of heterogeneous data streams. In terms of the "Vs" of Big Data (volume, velocity, variety and veracity), they require addressing velocity and variety at the same time. Big Data solutions able to handle velocity and variety separately have been around for a while, but only Stream Reasoning approaches those two dimensions at once. Current results in the Stream Reasoning field are relevant for application areas that require handling massive datasets, processing data streams on the fly, coping with heterogeneous, incomplete and noisy data, providing reactive answers, supporting fine-grained information access, and integrating complex domain models. This talk, starting from those requirements, frames the problem addressed by Stream Reasoning. It poses the research question and operationalises it with four simpler sub-questions. It describes how the database group of Politecnico di Milano positively answered those sub-questions in the last 7 years of research. It briefly surveys alternative approaches investigated by other research groups worldwide and elaborates on current limitations and open challenges.
The 10 minutes presentation I gave at my PhD defence on 21.9.2015 in Amsterdam. Prof. Frank van Harmelen was my promoter. Prof. Ian Horrocks, prof. Manfred Hauswirth, prof. Geert-Jan Houben, Peter Boncz and prof. Guus Schreiber were my opponents.
Listening to the pulse of our cities fusing Social Media Streams and Call Dat..., by Emanuele Della Valle
The digital reflection of our cities is sharpening, and it is tracking their evolution with a decreasing delay. This happens thanks to the pervasive deployment of sensors, the wide adoption of smartphones, the usage of (location-based) social networks and the availability of datasets about the urban environment. So while data becomes more abundant every day, decision makers face the challenge of increasing their capability to create value out of the analysis of this data. This keynote presents how advanced visual analytics, ontology-based data access and information flow processing methods can help in making sense of Social Media Streams and Call Data Records from Mobile Network Operators during city-scale events. Real-world deployments demonstrate the ability of those methods to advance our capacity to feel the pulse of our cities in order to deliver innovative services.
There is a way of telling the story of an event through the social streams it generates: the digital trace that every participant leaves on social networks when sharing their attendance or their opinion. Those traces can be fused and interpreted in real time using state-of-the-art analytics technologies and advanced data visualisation models. In 2014, in collaboration with StudioLabo and Telecom Italia, Politecnico di Milano built CitySensing to show the footprint left by FuoriSalone on social networks. By later focusing CitySensing on the needs of event managers, Politecnico di Milano demonstrated the potential of the approach for the Festival della Comunicazione in Camogli and the Festival delle Letterature in Pescara. The solution is now offered by Fluxedo.
There is a way of telling the story of a city through the data streams it generates: the digital traces each of us leaves every time we perform a small everyday gesture, such as making a phone call or sending a tweet.
In City Data Fusion, Politecnico di Milano and Telecom Italia tell the story of cities by fusing, interpreting and visualising Big Data, that is, the enormous and continuous flow of digital traces that their inhabitants and visitors leave by using their smartphones or the city's services.
This presentation introduces you to observing some Italian cities from a new perspective.
Bilateral integrations are a short-term approach to business integration; only standards provide a long-term solution. Unfortunately, agreeing on standards is hard and takes time, so translation between standards is unavoidable. Embracing change is the only way to benefit from short-term translation while developing comprehensive standards over time. Semantic technologies are designed with flexibility in mind and can therefore help in developing more comprehensive standards and easier-to-maintain translations.
Introduction to Semantic Web for GIS Practitioners
1. Introduction to Semantic Web for GIS Practitioners 3.5.2011, Como Emanuele Della Valle [email_address] http://emanueledellavalle.org
3. Agenda: Introduction and Motivation; Data Interchange on the Web: RDF; Querying the Semantic Web: SPARQL; Modelling data and knowledge for the Semantic Web: RDF-S and OWL; Conclusions
4. Introduction The Web Today Large number of integrations - ad hoc - pair-wise Too much information to browse, need for searching and mashing up automatically Each site is "understandable" for us Computers don't "understand" much (slide graphic: millions of applications, a search & mash-up engine, and a stream of raw bits)
9. Introduction Smart Machines Working examples found on the Web Image Processing retrievr: find by sketching http://labs.systemone.at/retrievr/ Audio Processing midomi: find by singing http://www.midomi.com/ […] Natural Language Processing semantic proxy: http://semanticproxy.opencalais.com/about.html (slide graphic: sensor data turned into symbolic descriptions by image processing, audio processing, natural language processing, […])
10. Introduction Smart Machines alone cannot bridge the gap … Natural Language Processing (NLP) meets Image Processing (IP) NLP: What does your eye see? IP: I see a sea NLP: You see a "c"? IP: Yes, what else could it be? [Source NLP Related Entertainment http://www.cl.cam.ac.uk/Research/NL/amusement.html] (slide graphic: the "semantic gap" between sensor data and symbolic descriptions, "sea" vs. "c")
11. Introduction … smart data are needed Natural Language Processing (NLP) meets Image Processing (IP) NLP: What does your eye see? IP: I see a wordnet:word-sea NLP: mmm, I see a wordnet:word-c IP: I believe we have different understandings of the world … NLP: So do I (slide graphic: smart data bridging sensor data and symbolic descriptions) The Semantic Web offers a set of standards that lowers the barriers to employing smart data at large scale
12. Introduction What a machine "understands" of the Web What we say to Web agents: "For more information visit <a href="http://www.ex.org"> my company </a> Web site. . ." What they "hear": "blah blah blah blah blah <a href="http://www.ex.org"> blah blah blah </a> blah blah. . ." Yet this is enough to train them to achieve tasks for us [source http://www.thefarside.com/]
13. Introduction What does Google "understand"? Understanding that [page1] links [page2] page2 is interesting Google is able to rank results! "The heart of our software is PageRank™, a system for ranking web pages […] (that) relies on the uniquely democratic nature of the web by using its vast link structure as an indicator of an individual page's value." http://www.google.com/technology/
14. Introduction The Semantic Web 1/4 "The Semantic Web is not a separate Web, but an extension of the current one, in which information is given well-defined meaning, better enabling computers and people to work in cooperation." "The Semantic Web", Scientific American Magazine, May 2001 http://www.sciam.com/article.cfm?articleID=00048144-10D2-1C70-84A9809EC588EF21 Key concepts: an extension of the current Web in which information is given well-defined meaning, better enabling computers and people to work in cooperation. Both for computers and people
15. Introduction The Semantic Web 2/4 “ The Semantic Web is not a separate Web, but an extension of the current one […] ” Web 1.0 The Web Today
16. Introduction The Semantic Web 3/4 “ The Semantic Web […] , in which information is given well-defined meaning […]” Human understandable but “only” machine-readable Human and machine “ understandable ” ? Web 1.0 Semantic Web
17. Introduction The Semantic Web 4/4 Semantic Web Fewer Integration - standard - multi-lateral […] better enabling computers and people to work in cooperation. Even More Applications Easier to understand for people More “understandable” for computers Semantic Mash-ups & Search
18. Introduction Linked Data Standards WebMGS 2010, 27.8.2010 View the full talk at http://www.ted.com/talks/view/id/484 !
19. Introduction Linking Open Data Project Goal: extend the Web with data commons by publishing open data sets using Semantic Web techs Visit http://esw.w3.org/topic/SweoIG/TaskForces/CommunityProjects/LinkingOpenData !
20. Introduction Example: BIO2RDF Peter Ansell, Model and prototype for querying multiple linked scientific datasets, Future Generation Computer Systems, Volume 27, Issue 3, March 2011, Pages 329-333
23. Introduction Example: LinkedGeoData LinkedGeoData is an effort to add a spatial dimension to the Semantic Web. uses the information collected by the OpenStreetMap project makes it available as an RDF knowledge base according to the Linked Data principles. interlinks this data with other knowledge bases in the Linking Open Data initiative.
24. Introduction Semantic Web "layer cake" Standardized / Under Investigation / Already Possible [source http://www.w3.org/2007/03/layerCake.png]
27. RDF in a nutshell Resource Description Framework The adaptation of the relational model to the Web gives rise to RDF From T-tuples to Triples Any relational data can be represented as triples: Row Key --> Subject, Column --> Property, Value --> Value
28. RDF in a nutshell Representing relational data in RDF (almost) E.g., geographical data: a City table (Name, Country, Population) with the row (IT.2, Italy, 1.298.972) and a (City, Name) table with the rows (IT.2, Milano), (IT.2, Milan), (IT.2, Mailand). Represented in RDF (almost): the resource IT.2 is a City, has Country Italy, Population 1.298.972, and the names Milano, Milan and Mailand (legend: resource vs. literal)
29. RDF in a nutshell Representing relational data in RDF (almost) Two important problems: once out of the database, an internal ID (e.g., IT.2) becomes useless; once out of the database, the internal names of schema elements (e.g., City) become useless as well. RDF solves this by using URIs: internal IDs should be replaced by URIs, internal schema names should be replaced by URIs, and values do not (always) need to be URI-fied. (Slide graphic: the resource http://sws.geonames.org/3173435/ linked to http://www.geonames.org/countries/#IT via http://www.geonames.org/ontology#inCountry, to the literal 1.298.972 via http://www.geonames.org/ontology#population, to the literals Milano, Milan and Mailand via http://www.w3.org/2000/01/rdf-schema#label, and to http://www.geonames.org/ontology#P via http://www.w3.org/1999/02/22-rdf-syntax-ns#type; legend: resource vs. literal)
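The same graph, written out as Turtle (a minimal sketch assembled from the URIs listed above; the exact literal forms in the original slide graphic may differ):

@prefix gn:   <http://www.geonames.org/ontology#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# Milan, identified by its geonames URI instead of the internal ID IT.2
<http://sws.geonames.org/3173435/>
    a gn:P ;                                   # geonames feature class for populated places
    gn:inCountry <http://www.geonames.org/countries/#IT> ;
    gn:population "1298972" ;
    rdfs:label "Milano" , "Milan" , "Mailand" .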
30. RDF in a nutshell Representing data in RDF Q/A 1/4 Which URIs should we use? Popular ones! Data merge will then take place automatically. (Slide graphic: a dataset stating that http://sws.geonames.org/3173435/ is in country http://www.geonames.org/countries/#IT, plus a dataset stating that http://sws.geonames.org/3173435/ has http://dbpedia.org/resource/Postalcode 20100, merge into a single description of the same resource)
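A minimal Turtle sketch of that merge (following the slide, dbpedia's Postalcode resource is used as the property; in practice one would pick a proper datatype property):

# dataset A, e.g. from geonames
<http://sws.geonames.org/3173435/>
    <http://www.geonames.org/ontology#inCountry>
        <http://www.geonames.org/countries/#IT> .

# dataset B, published independently but reusing the same subject URI
<http://sws.geonames.org/3173435/>
    <http://dbpedia.org/resource/Postalcode> "20100" .

# Loading both files into one RDF store yields a single merged
# description of Milan, with no extra integration code.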
31. RDF in a nutshell Representing data in RDF Q/A 2/4 Where do I find popular URIs? A difficult question with no clear answer. The best place to keep an eye on is the Linking Open Data Project http://esw.w3.org/topic/SweoIG/TaskForces/CommunityProjects/LinkingOpenData and in particular the following pages of the Wiki: Data Sets http://esw.w3.org/topic/TaskForces/CommunityProjects/LinkingOpenData/DataSets Semantic Web Search Engines http://esw.w3.org/topic/TaskForces/CommunityProjects/LinkingOpenData/SemanticWebSearchEngines Common Vocabularies http://esw.w3.org/topic/TaskForces/CommunityProjects/LinkingOpenData/CommonVocabularies
32. RDF in a nutshell Representing data in RDF Q/A 3/4 What is a value? When shall we URI-fy a value? Literals cannot be used to merge different data sets. E.g., having chosen to represent postal codes as strings, merging different data sets using postal codes is impossible: 20100 may refer to lots of different things on the Web (e.g., try http://images.google.com/images?q=20100). URI-fy any value that can eventually be used to merge different datasets, and leave the other values as literals. (Slide graphic: two datasets each relating the literal 20100 to http://dbpedia.org/resource/Postalcode; can they be merged?)
33. RDF in a nutshell Representing data in RDF Q/A 4/4 What if I cannot think of a good URI? When no good URI exists, you can use blank nodes. The following relational data (Person, Bio Event, Date): (Sofia, Birth, 1974-02-28), (Sofia, Marriage, 1995-08-04) can be translated into RDF, in the BIO vocabulary [1], as follows: the resource http://www.sofia.org/#me has two http://purl.org/vocab/bio/0.1/event blank nodes, one typed http://purl.org/vocab/bio/0.1/Birth with date 1974-02-28 and one typed http://purl.org/vocab/bio/0.1/Marriage with date 1995-08-04. [1] http://vocab.org/bio/0.1.html Advanced
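In Turtle, the same data reads as follows (a sketch; the person URI and the dates are the ones shown on the slide, and the square brackets denote blank nodes):

@prefix bio: <http://purl.org/vocab/bio/0.1/> .

<http://www.sofia.org/#me>
    bio:event [ a bio:Birth ;    bio:date "1974-02-28" ] ,   # blank node: Sofia's birth
              [ a bio:Marriage ; bio:date "1995-08-04" ] .   # blank node: her marriage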
34. RDF in a nutshell Other data structures in RDF Trees can be represented in RDF Anything can be represented in RDF
35. RDF in a nutshell XML vs. RDF w.r.t. Evolving Data Scenario: Describe printer capabilities V1 has several features XML RDF
36. RDF in a nutshell XML vs. RDF w.r.t. Evolving Data V1.1 adds two features What effect on existing client software? Regenerate stubs? Recompile? Did any queries break? (Depends how they're written. Best programmers?) XML RDF
37. RDF in a nutshell XML vs. RDF w.r.t. Evolving Data V1.2 adds three more features What effect on existing client software? XML RDF
38. RDF in a nutshell XML vs. RDF w.r.t. Evolving Data V2 adds colors What effect on existing client software? XML RDF
39. RDF in a nutshell XML vs. RDF w.r.t. Evolving Data Version n combines printer, scanner, fax: Problem: How to combine trees? Printer and fax both have output paper settings (red) Scanner and fax both have input image settings (blue)
40. RDF in a nutshell XML vs. RDF w.r.t. Evolving Data Flexibility is important Products are always changing (competitive environment) People are always adding more features Graceful evolution is important Relational data is remarkably flexible XML syntax is important Lots of applications that use XML are already available Lots of tools for XML are already available Trees allow for simple parsing without loading the entire model (i.e., XML parsing using SAX)
41. RDF in a nutshell Serializing RDF in XML W3C standardized an RDF/XML syntax [1] The basic idea is to insert an XML element for each node (subject and value) and arc (predicate) E.g. <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:ex="http://www.example.org/" xmlns:sid="URN:org:example:staffid:" xmlns:dc="http://purl.org/dc/elements/1.1/"> <rdf:Description rdf:about="http://www.example.org/index.html"> <dc:creator> <rdf:Description rdf:about="URN:org:example:staffid:85740"/> </dc:creator> </rdf:Description> </rdf:RDF> [1] RDF/XML Syntax Specification available at http://www.w3.org/TR/rdf-syntax-grammar/ ex:index.html sid:85740 dc:creator property element Root tag
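For comparison (not on the original slide), the single triple encoded by the RDF/XML above can be written in Turtle as follows; the dc: prefix is the one declared in the XML.

  @prefix dc: <http://purl.org/dc/elements/1.1/> .

  # one triple: the page has a creator identified by a staff-id URN
  <http://www.example.org/index.html> dc:creator <URN:org:example:staffid:85740> .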
42. RDF in a nutshell Serializing RDF in XML A more compact XML serialization is <ex:pagina_web rdf:about="http://www.example.org/index.html"> <dc:creator> <ex:impiegato rdf:about="sid:55740" foaf:email="mailto:[email protected]"/> </dc:creator> </ex:pagina_web> Advanced
43. RDF in a nutshell Merging XML files 1/2 Suppose you have to merge the following two XML documents Merging the XML trees is difficult, but since the content is RDF … <Park rdf:about="Yosemite"> <contains> <Camp rdf:about="North-Pines"/> </contains> <crossedBy> <Path rdf:about="S11"/> </crossedBy> </Park> <Camp rdf:about="North-Pines" locatedIn="Yosemite"> <accessibleBy> <Path rdf:about="S11"/> </accessibleBy> </Camp> Yosemite North-Pines Park rdf:type rdf:type contains Camp S11 rdf:type Path crossedBy Yosemite North-Pines rdf:type Camp S11 rdf:type Path accessibleBy locatedIn Advanced
44. RDF in a nutshell Merging XML files 2/2 It’s (just) a matter of merging the two RDF graphs NOTE: It works out nicely because both RDF/XML documents refer to the same resources and use the same vocabularies. ∪ Yosemite North-Pines Park rdf:type rdf:type contains Camp S11 Path accessibleBy crossedBy locatedIn rdf:type Advanced
45. RDF in a nutshell Serializing RDF in Turtle - namespaces RDF allows for serializations alternative to XML The Turtle serialization is often used for teaching Semantic Web Technologies because triples are more evident Example @prefix sr: <http://www.streamreasoning.org/sr4ld2011/onto#> . @prefix skos: <http://www.w3.org/2004/02/skos/core#> . @prefix dbp: <http://dbpedia.org/resource/Category:> . sr:LaScala a sr:NamedPlace ; skos:subject dbp:Opera_houses_in_Italy . sr:GalleriaVittorioEmanueleII a sr:NamedPlace ; skos:subject dbp:Pedestrian_streets_in_Italy, dbp:Buildings_and_structures_in_Milan . sr:Duomo a sr:NamedPlace ; skos:subject dbp:ChurchesInMilan .
46. RDF in a nutshell Serializing RDF in Turtle - namespaces RDF allows for serializations alternative to XML The Turtle serialization is often used for teaching Semantic Web Technologies because triples are more evident URI terms can be abbreviated using namespaces @prefix sr: <http://www.streamreasoning.org/sr4ld2011/onto#> . sr:LaScala rdf:type sr:NamedPlace . <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> = 'a' sr:LaScala a sr:NamedPlace .
47. RDF in a nutshell Serializing RDF in Turtle - Convenience Syntax Abbreviating repeated subjects: sr:LaScala a sr:NamedPlace . sr:LaScala skos:subject dbp:Opera_houses_in_Italy . ... is the same as ... sr:LaScala a sr:NamedPlace ; skos:subject dbp:Opera_houses_in_Italy . Abbreviating repeated subject/predicate pairs: sr:GalleriaVittorioEmanueleII skos:subject dbp:Pedestrian_streets_in_Italy . sr:GalleriaVittorioEmanueleII skos:subject dbp:Buildings_and_structures_in_Milan . ... is the same as ... sr:GalleriaVittorioEmanueleII skos:subject dbp:Pedestrian_streets_in_Italy, dbp:Buildings_and_structures_in_Milan .
48. RDF in a nutshell RDF Resources RDF at the W3C - primer and specifications http://www.w3.org/RDF/ Semantic Web tools - community maintained list; includes triple store, programming environments, tool sets, and more http://esw.w3.org/topic/SemanticWebTools 302 Semantic Web Videos and Podcasts - includes a section specifically on RDF videos http://www.semanticfocus.com/blog/entry/title/302-semantic-web-videos-and-podcasts/
50. SPARQL in a nutshell What is SPARQL? SPARQL is the query language of the Semantic Web It stands for SPARQL Protocol and RDF Query Language A Query Language ...: find named places: PREFIX sr: <http://www.streamreasoning.org/sr4ld2011/onto#> SELECT ?poi WHERE { ?poi a sr:NamedPlace . } ... and a Protocol. http://lod.openlinksw.com/sparql?&query=PREFIX+sr%3A+%3Chttps%3A%2F%2Fptop.only.wip.la%3A443%2Fhttp%2Fwww.streamreasoning.org%2Fsr4ld2011%2Fonto%2F%3E%0D%0ASELECT+%3Fpoi+WHERE+{+%3Fpoi+a+sr%3ANamedPlace+.+}
51. SPARQL in a nutshell Why SPARQL? SPARQL lets us Pull values from structured and semi-structured data represented in RDF Explore RDF data by querying unknown relationships Perform complex joins of disparate RDF repositories in a single query Transform RDF data from one vocabulary to another Develop higher-level cross-platform applications
52. SPARQL in a nutshell Anatomy of a SPARQL query
53. SPARQL in a nutshell Anatomy of a SPARQL SELECT query
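The figure from the original slide is not reproduced here; as a hedged substitute, the sketch below labels the main parts of a SELECT query (prologue, result clause, optional dataset clause, graph pattern, solution modifiers). The FROM graph URI and the concrete pattern are illustrative assumptions.

  PREFIX sr: <http://www.streamreasoning.org/sr4ld2011/onto#>    # prologue: namespace declarations

  SELECT ?poi                                                   # result clause: variables to return
  FROM <http://www.streamreasoning.org/sr4ld2011/data>          # (optional) dataset clause - illustrative URI
  WHERE {
    ?poi a sr:NamedPlace .                                      # graph pattern to match against the data
  }
  ORDER BY ?poi                                                 # solution modifiers: ORDER BY, LIMIT, OFFSET
  LIMIT 10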
54. SPARQL in a nutshell Triple Pattern Syntax Turtle-like: URIs, QNames, literals, convenience syntax. Adds variables to get basic graph patterns ?var Variable names are a subset of NCNames (no "-" or ".") E.g., simple ?poi a sr:NamedPlace . a bit more complex ?poi a sr:NamedPlace . ?poi skos:subject ?category . Adds OPTIONAL to cope with the semi-structured nature of RDF FILTER to select solutions according to some criteria UNION operator to get complex patterns
55. SPARQL in a nutshell Writing a Simple Query Data @prefix sr:<http://www.streamreasoning.org/sr4ld2011/onto#> . sr:LaScala a sr:NamedPlace . sr:GalleriaVittorioEmanueleII a sr:NamedPlace . sr:Duomo a sr:NamedPlace . Query PREFIX sr: <http://www.streamreasoning.org/sr4ld2011/onto#> SELECT ?poi WHERE { ?poi a sr:NamedPlace . } Results a = rdf:type ?poi http://www.streamreasoning.org/sr4ld2011/data#GalleriaVittorioEmanueleII http://www.streamreasoning.org/sr4ld2011/data#LaScala http://www.streamreasoning.org/sr4ld2011/data#Duomo
56. SPARQL in a nutshell Matching “Matching the graph” means finding a set of bindings such that substituting values for variables creates a triple that is in the set of triples making up the graph. Solution 1: variable poi has value sr:GalleriaVittorioEmanueleII Triple sr:GalleriaVittorioEmanueleII a sr:NamedPlace . is in the graph. Solution 2: variable poi has value sr:LaScala Triple sr:LaScala a sr:NamedPlace . is in the graph. Solution 3: variable poi has value sr:Duomo Triple sr:Duomo a sr:NamedPlace . is in the graph. The solutions of this query have no particular order.
57. SPARQL in a nutshell Writing a bit more complex query Query PREFIX skos: <http://www.w3.org/2004/02/skos/core#> PREFIX sr: <http://www.streamreasoning.org/sr4ld2011/onto#> SELECT ?poi ?category WHERE { ?poi a sr:NamedPlace ; skos:subject ?category . } Results ?poi ?category http://www.streamreasoning.org/sr4ld2011/data#GalleriaVittorioEmanueleII http://dbpedia.org/resource/Category:Pedestrian_streets_in_Italy http://www.streamreasoning.org/sr4ld2011/data#GalleriaVittorioEmanueleII http://dbpedia.org/resource/Category:Buildings_and_structures_in_Milan http://www.streamreasoning.org/sr4ld2011/data#LaScala http://dbpedia.org/resource/Category:Opera_houses_in_Italy http://www.streamreasoning.org/sr4ld2011/data#Duomo http://dbpedia.org/class/yago/ChurchesInMilan … …
58. SPARQL in a nutshell Basic Graph Patterns A Basic Graph Pattern is a set of triple patterns, all of which must be matched. In this case, “matching the graph” means finding a set of bindings such that substituting values for variables creates a subgraph of the graph being queried.
59. SPARQL in a nutshell Matching RDF literals – text Query PREFIX sr: <http://www.streamreasoning.org/sr4ld2011/onto#> SELECT ?poi WHERE { ?poi sr:name "Duomo". } Results Alert! It may return 0 results if the literal has a language tag E.g., if the data contains only the triple sr:Duomo sr:name "Duomo"@it . To obtain results, also add the language tag to the triple pattern E.g., ?poi sr:name "Duomo"@it . (an alternative using FILTER is sketched below) ?poi http://www.streamreasoning.org/sr4ld2011/data#Duomo
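A hedged alternative, not on the original slide: instead of matching the tagged literal exactly, the FILTER functions str(), lang() and langMatches() can compare the lexical form and the language tag separately.

  PREFIX sr: <http://www.streamreasoning.org/sr4ld2011/onto#>
  SELECT ?poi WHERE {
    ?poi sr:name ?name .
    FILTER( str(?name) = "Duomo" && langMatches(lang(?name), "it") )   # matches "Duomo"@it
  }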
60. SPARQL in a nutshell Matching RDF literals – numerical values As in the case of language tags, typed literals (e.g., "3.14"^^xsd:float) do not match unless the datatype is given explicitly. Query PREFIX xsd: <http://www.w3.org/2001/XMLSchema#> PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#> PREFIX sr: <http://www.streamreasoning.org/sr4ld2011/onto#> SELECT ?poi WHERE { ?poi a sr:NamedPlace ; geo:lat "45.46416854858398"^^xsd:float ; geo:long "9.191389083862305"^^xsd:float . } Results ?poi http://www.streamreasoning.org/sr4ld2011/data#Duomo
61. SPARQL in a nutshell RDF Term Constraints SPARQL allows restricting solutions by applying the FILTER clause. An RDF term bound to a variable appears in the results if the FILTER expression, applied to the term, evaluates to TRUE. Query PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#> PREFIX sr: <http://www.streamreasoning.org/sr4ld2011/onto#> PREFIX xsd: <http://www.w3.org/2001/XMLSchema#> SELECT ?poi ?lat ?long WHERE { ?poi geo:lat ?lat ; geo:long ?long . FILTER( ?lat>"45.46"^^xsd:float && ?lat<"45.47"^^xsd:float && ?long>"9.18"^^xsd:float && ?long<"9.20"^^xsd:float ) } Results ?poi http://www.streamreasoning.org/sr4ld2011/data#GalleriaVittorioEmanueleII http://www.streamreasoning.org/sr4ld2011/data#LaScala http://www.streamreasoning.org/sr4ld2011/data#Duomo
62. SPARQL in a nutshell RDF Term Constraints – regex SPARQL FILTERs also allow restricting string values using regex() Query PREFIX sr: <http://www.streamreasoning.org/sr4ld2011/onto#> PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> SELECT ?poi ?c WHERE { ?poi rdfs:comment ?c . FILTER(regex(?c, "glass-vaulted arcades", "i" ))} Results ?poi ?c http://www.streamreasoning.org/sr4ld2011/data#GalleriaVittorioEmanueleII The Galleria Vittorio Emanuele II is a covered double arcade formed of two glass-vaulted arcades at right angles intersecting in an octagon, prominently sited on the northern side of the Piazza del Duomo in Milan, and connects to the Piazza della Scala.
63. SPARQL in a nutshell Value Tests Notation for value comparison: <, >, =, <=, >= and != Test functions Check if a variable is bound: BOUND Check the type of the bound term: isIRI, isBLANK, isLITERAL Accessor functions: LANG, DATATYPE Logic operators: || and && Comparing strings: REGEX, langMatches Constructor functions: bool, dbl, flt, dec, int, dT, str, IRI Extensible Value Testing E.g., FILTER ( aGeo:distance(?axLoc, ?ayLoc, ?bxLoc, ?byLoc) < 10 ) . (see http://www.w3.org/TR/rdf-sparql-query/#extensionFunctions )
64. SPARQL in a nutshell Value Tests - Extensible Value Testing 1/2 Find all schools within a 5km radius around a specific location, and for each school find coffee shops that are closer than 1km. PREFIX lgdo: <http://linkedgeodata.org/ontology/> PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#> PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> SELECT ?schoolname ?schoolgeo ?coffeeshopname ?coffeeshopgeo WHERE { ?school a lgdo:School . ?school geo:geometry ?schoolgeo . ?school rdfs:label ?schoolname . ?coffeeshop a lgdo:CoffeeShop . ?coffeeshop geo:geometry ?coffeeshopgeo . ?coffeeshop rdfs:label ?coffeeshopname . FILTER( bif:st_intersects( ?schoolgeo, bif:st_point(4.892222,52.373056), 5) && bif:st_intersects(?coffeeshopgeo, ?schoolgeo, 1) ) . } Query results can be obtained from the Virtuoso endpoint used by the LinkedGeoData project.
65. SPARQL in a nutshell Value Tests - Extensible Value Testing 2/2 Signature st_intersects(g1, g2, prec) Parameters g1 – The first geometry. g2 – The second geometry. prec – A tolerance for the matching in units of linear distance appropriate to the srid. Default is 0. Description Returns true if the two geometries intersect, i.e., if they have at least one point in common. If prec is supplied, it is a tolerance for the matching in units of linear distance appropriate to the srid. Both geometries should have the same srid.
66. SPARQL in a nutshell More Sophisticated Graph Patterns RDF is "semi-structured" and has no integrity constraints SPARQL addresses this with Group patterns match if all subpatterns match and all constraints are satisfied In SPARQL syntax, groups are { … } OPTIONAL graph patterns accommodate the need to add information to a result without the query failing when some information is missing In SPARQL syntax, OPTIONAL { … } UNION graph patterns allow matching alternatives In SPARQL syntax, { … } UNION { … } (a combined example is sketched below)
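A minimal sketch, reusing the prefixes and data from the earlier examples, of how OPTIONAL and UNION combine: the comment is returned when present, and places from either category match. The choice of rdfs:comment here is an assumption based on the earlier regex example.

  PREFIX sr:   <http://www.streamreasoning.org/sr4ld2011/onto#>
  PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
  PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
  PREFIX dbp:  <http://dbpedia.org/resource/Category:>

  SELECT ?poi ?comment WHERE {
    ?poi a sr:NamedPlace .
    OPTIONAL { ?poi rdfs:comment ?comment }          # keep ?poi even if it has no comment
    { ?poi skos:subject dbp:Opera_houses_in_Italy }
    UNION
    { ?poi skos:subject dbp:Pedestrian_streets_in_Italy }
  }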
67. SPARQL in a nutshell Result Forms Besides selecting tables of values, SPARQL allows three other types of queries: ASK - returns a boolean: does the query have any results? CONSTRUCT - uses variable bindings to return new RDF triples DESCRIBE - returns server-determined RDF about the queried resources SELECT and ASK results can be returned as XML or JSON. CONSTRUCT and DESCRIBE results can be returned via any RDF serialization (e.g., RDF/XML or Turtle). (ASK and CONSTRUCT are sketched below)
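Two hedged sketches over the running example (shown together, but they are separate queries). The target vocabulary in the CONSTRUCT (dc:subject) is an assumption, chosen only to illustrate transforming data from one vocabulary to another.

  PREFIX sr:   <http://www.streamreasoning.org/sr4ld2011/onto#>
  PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
  PREFIX dc:   <http://purl.org/dc/elements/1.1/>

  # ASK: is there at least one named place?
  ASK { ?poi a sr:NamedPlace }

  # CONSTRUCT: restate skos:subject statements as dc:subject statements (illustrative target vocabulary)
  CONSTRUCT { ?poi dc:subject ?category }
  WHERE     { ?poi a sr:NamedPlace ; skos:subject ?category }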
68. SPARQL in a nutshell SPARQL Resources SPARQL Frequently Asked Questions http://thefigtrees.net/lee/sw/sparql-faq SPARQL implementations - community maintained list of open-source and commercial SPARQL engines http://esw.w3.org/topic/SparqlImplementations Public SPARQL endpoints - community maintained list http://esw.w3.org/topic/SparqlEndpoints SPARQL extensions - collection of SPARQL extensions implemented in various SPARQL engines http://esw.w3.org/topic/SPARQL/Extensions
70. RDF-S/OWL in a nutshell Ontology definition Philosophy (400 BC): Systematic explanation of Existence Neches (91): Ontology defines basic terms and relations comprising the vocabulary of a topic area as well as the rules for combining terms and relations to define extensions to the vocabulary Gruber (93): Explicit specification of a conceptualization Borst (97): Formal specification of a shared conceptualization Studer (98): Formal, explicit specification of a shared conceptualization
71. RDF-S/OWL in a nutshell What does it mean? Formal, explicit specification of a shared conceptualization Machine readable Several people agree that the conceptual model is adequate to describe those aspects of reality A conceptual model of some aspects of reality It makes domain assumptions explicit
72. RDF-S/OWL in a nutshell What is an Ontology? A model of (some aspect of) the world Introduces vocabulary relevant to a domain e.g., anatomy Specifies the meaning (semantics) of terms Heart is a muscular organ that is part of the circulatory system Formalised using a suitable logic ∀x.[Heart(x) → MuscularOrgan(x) ∧ ∃y.[isPartOf(x,y) ∧ CirculatorySystem(y)]] Shared among multiple people and organizations
73. RDF-S/OWL in a nutshell How much explicit shall the specification be ? “ A little semantics, goes a long way” [James Hendler, 2001] Advanced
74. RDF-S/OWL in a nutshell A simple ontology Artist Piece Painter Paint paints Sculptor Sculpt sculpts creates
75. RDF-S/OWL in a nutshell Specifying classes, sub-classes and instances Creating a class RDFS: Artist rdf:type rdfs:Class . FOL: introduces a unary predicate Artist(x) Creating a subclass RDFS: Painter rdfs:subClassOf Artist . RDFS: Sculptor rdfs:subClassOf Artist . FOL: ∀x [Painter(x) ∨ Sculptor(x) → Artist(x)] Creating an instance RDFS: Rodin rdf:type Sculptor . FOL: Sculptor(Rodin) Artist Painter Sculptor Rodin
76. Creating a property RDFS: creates rdf:type rdf:Property . FOL: introduces a binary predicate Creates(x,y) Using a property RDFS: Rodin creates TheKiss . FOL: Creates(Rodin, TheKiss) Creating subproperties RDFS: paints rdfs:subPropertyOf creates . FOL: ∀x∀y [Paints(x,y) → Creates(x,y)] RDFS: sculpts rdfs:subPropertyOf creates . FOL: ∀x∀y [Sculpts(x,y) → Creates(x,y)] RDF-S/OWL in a nutshell Specifying properties and sub-properties creates paints
77. RDF-S/OWL in a nutshell Specifying domain/range constraints Checking which classes and properties can be used together RDFS: creates rdfs:domain Artist . creates rdfs:range Piece . paints rdfs:domain Painter . paints rdfs:range Paint . sculpts rdfs:domain Sculptor . sculpts rdfs:range Sculpt . FOL: ∀x∀y [Creates(x,y) → Artist(x) ∧ Piece(y)] ∀x∀y [Paints(x,y) → Painter(x) ∧ Paint(y)] ∀x∀y [Sculpts(x,y) → Sculptor(x) ∧ Sculpt(y)]
78. RDF-S/OWL in a nutshell The ontology we specified Artist Piece Painter Paint paints Sculptor Sculpt sculpts creates
79. RDF-S/OWL in a nutshell RDF semantics (a part of it)
if { x rdfs:subClassOf y . a rdf:type x . } then { a rdf:type y . }
if { x rdfs:subClassOf y . y rdfs:subClassOf z . } then { x rdfs:subClassOf z . }
if { a rdfs:subPropertyOf b . x a y . } then { x b y . }
if { a rdfs:subPropertyOf b . b rdfs:subPropertyOf c . } then { a rdfs:subPropertyOf c . }
if { a rdfs:domain z . x a y . } then { x rdf:type z . }
if { a rdfs:range z . x a u . } then { u rdf:type z . }
Read more in RDF Semantics http://www.w3.org/TR/rdf-mt/
80. RDF-S/OWL in a nutshell RDF semantics at work Having shared the ontology ... @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> . @prefix ex: <http://www.ex.org/schema#> . ex:Sculptor rdfs:subClassOf ex:Artist . ex:Painter rdfs:subClassOf ex:Artist . ex:Sculpt rdfs:subClassOf ex:Piece . ex:Painting rdfs:subClassOf ex:Piece . ex:creates rdfs:domain ex:Artist . ex:creates rdfs:range ex:Piece . ex:sculpts rdfs:subPropertyOf ex:creates . ex:sculpts rdfs:domain ex:Sculptor . ex:sculpts rdfs:range ex:Sculpt . ... when transmitting the following triple … ex:Rodin ex:sculpts ex:TheKiss . (the triples a recipient can infer are sketched below)
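Applying the RDFS rules of the previous slide to the transmitted triple, a recipient can infer the following additional triples (a sketch in Turtle, with the same prefixes plus rdf:).

  @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
  @prefix ex:  <http://www.ex.org/schema#> .

  ex:Rodin   ex:creates ex:TheKiss .   # sculpts is a subproperty of creates
  ex:Rodin   rdf:type   ex:Sculptor .  # domain of sculpts
  ex:Rodin   rdf:type   ex:Artist .    # Sculptor is a subclass of Artist
  ex:TheKiss rdf:type   ex:Sculpt .    # range of sculpts
  ex:TheKiss rdf:type   ex:Piece .     # Sculpt is a subclass of Piece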
81. RDF-S/OWL in a nutshell Without Inference A recipient that only understands XML syntax, receiving <RDF> <Description about="Rodin"> <sculpts resource="TheKiss"/> </Description> </RDF> can answer the following queries What does Rodin sculpt? RDF/Description[@about='Rodin']/sculpts/@resource Who does sculpt TheKiss? RDF/Description[sculpts/@resource='TheKiss']/@about Try it out yourself at http://www.mizar.dk/XPath/ but it cannot answer Who is Rodin? What is TheKiss? Is there any Sculptor/Sculpt? Is there any Artist/Piece?
82. RDF-S/OWL in a nutshell Knowing the ontology and RDF semantics … A recipient, that knows the ontology and “understands” RDF semantics , Receiving Rodin sculpts TheKiss . Rodin TheKiss Artist Piece Painter Paint paints Sculptor Sculpt sculpts creates
83. RDF-S/OWL in a nutshell … a reasoner can answer 1/2 the previous queries What does Rodin sculpt? PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> PREFIX ex: <http://www.ex.org/schema#> SELECT ?x WHERE { ex:Rodin ex:sculpts ?x } ?x = ex:TheKiss Who does sculpt TheKiss? WHERE { ?x ex:sculpts ex:TheKiss } ?x = ex:Rodin and it can also answer Who is Rodin? WHERE { ex:Rodin a ?x } ?x = ex:Artist, ex:Sculptor, rdfs:Resource What is TheKiss? WHERE { ex:TheKiss a ?x } ?x = ex:Sculpt, ex:Piece, rdfs:Resource
84. RDF-S/OWL in a nutshell … a reasoner can answer 2/2 Is there any Sculptor? WHERE { ?x a ex:Sculptor } ?x = ex:Rodin Is there any Artist? WHERE { ?x a ex:Artist } ?x = ex:Rodin Is there any Sculpt? WHERE { ?x a ex:Sculpt } ?x = ex:TheKiss Is there any Piece? WHERE { ?x a ex:Piece } ?x = ex:TheKiss Is there any Paint? WHERE { ?x a ex:Paint } 0 results Is there any Painter? WHERE { ?x a ex:Painter } 0 results
85. RDF-S/OWL in a nutshell Reasoning and Query Answering SPARQL alone cannot answer queries that require reasoning, but a reasoner can be exposed as a SPARQL service, or a query can be rewritten in order to incorporate the ontology. [Figure: three architectures — (1) a SPARQL service directly over the data; (2) a SPARQL service over the data plus the inferred data produced by a reasoner from the ontology; (3) a SPARQL service over the data answering a query rewritten using the ontology.] Advanced
86. Given ontology O and query Q, use O to rewrite Q as Q’ so that, for any set of ground facts A contained in multiple databases: answer(Q, O, A) = answer(Q’, ∅, A) That is, the answer to query Q using ontology O over any set of ground facts A equals the answer to the rewritten query Q’ without the ontology Use a (Global As View) mapping M to map Q’ to multiple SQL queries over the various databases RDF-S/OWL in a nutshell Reasoning and Information Integration [Figure: Q → Rewrite (using O) → Q’ → Map (using M) → SQL → answer] Advanced
87. RDF-S/OWL in a nutshell Query Rewriting Technique (basics) Example Ontology: Doctors treat patients; Consultants are doctors Query: give me those that treat some patient For OWL 2 QL, the rewriting results in a union of conjunctive queries (a sketch is given below) Advanced
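The rewritten query itself is not visible in the extracted slide; the following is a hedged sketch of what the union of conjunctive queries could look like for this example, expressed in SPARQL with an assumed ex: vocabulary (Doctor, Consultant, treats, Patient).

  PREFIX ex: <http://www.example.org/hospital#>   # assumed vocabulary, not from the slide

  SELECT ?x WHERE {
    { ?x ex:treats ?y . ?y a ex:Patient }   # the original query
    UNION
    { ?x a ex:Doctor }                      # because Doctors treat patients
    UNION
    { ?x a ex:Consultant }                  # because Consultants are Doctors, who treat patients
  }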
88. RDF-S/OWL in a nutshell Query Rewriting Technique (basics) The relationship between ontology and databases is defined by mappings, e.g.: Note: the mapping can be partial, i.e., Consultant is not mapped Using the mappings, the rewritten query can be translated into SQL Advanced
89. RDF-S/OWL in a nutshell More expressive power 1/3 RDFS is a light ontological language that allows for defining simple vocabularies. One may also want to express Cardinality constraints (max, min, exactly) on property usage E.g., a Polygon has 3 or more edges ∀x [Polygon(x) → ∃≥3 y (Edge(y) ∧ Forms(y,x))] Property types transitive e.g. hasAncestor is a transitive property: if A hasAncestor B and B hasAncestor C, then A hasAncestor C ∀x∀y∀z [HasAncestor(x,y) ∧ HasAncestor(y,z) → HasAncestor(x,z)] inverse e.g. sculpts has isSculptedBy as inverse property: if A sculpts B then B isSculptedBy A ∀x∀y [Sculpts(x,y) → IsSculptedBy(y,x)] Advanced
90. RDF-S/OWL in a nutshell More expressive power 2/3 symmetric e.g. isCloseTo is a symmetric property: if A isCloseTo B then B isCloseTo A ∀x∀y [IsCloseTo(x,y) → IsCloseTo(y,x)] Restrictions on the usage of a specific property All values of a property must be of a certain kind e.g. a D.O.C. Wine can only be produced by a Certified Winery ∀x∀y [DOCWine(x) ∧ Produces(x,y) → CertifiedWinery(y)] Some values of a property must be of a certain kind e.g. a Famous Painter must have painted some Famous Painting ∀x [FamousPainter(x) → ∃y (FamousPaint(y) ∧ IsPaintedBy(y,x))] A class is defined by combining other classes (union, intersection, negation, ...) A white wine is a Wine whose color is “white” ∀x [WhiteWine(x) ↔ Wine(x) ∧ White(x)] Advanced
91. RDF-S/OWL in a nutshell More expressive power 3/3 Two instances refer to the same real object “The Boss” and “Bruce Springsteen” are two names for the same person TheBoss = BruceSpringsteen Two classes refer to the same set “Painters” in English and “Pittori” in Italian ∀x [Painter(x) ↔ Pittore(x)] Two properties refer to the same binary relationship “Paints” in English and “Dipinge” in Italian ∀x∀y [Paints(x,y) ↔ Dipinge(x,y)] (an OWL sketch of these constructs follows) Advanced
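A hedged OWL 2 sketch, in Turtle, of how some constructs from the last three slides can be written down. The ex: namespace and the class/property names mirror the slides but are assumptions; the someValuesFrom restriction is stated on an assumed ex:paints property rather than on the inverse of isPaintedBy, for simplicity.

  @prefix owl:  <http://www.w3.org/2002/07/owl#> .
  @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
  @prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
  @prefix ex:   <http://www.ex.org/schema#> .

  ex:hasAncestor  a owl:TransitiveProperty .        # transitive property
  ex:isSculptedBy owl:inverseOf ex:sculpts .        # inverse property
  ex:isCloseTo    a owl:SymmetricProperty .         # symmetric property

  ex:Polygon rdfs:subClassOf                        # cardinality: 3 or more edges
      [ a owl:Restriction ; owl:onProperty ex:hasEdge ;
        owl:minCardinality "3"^^xsd:nonNegativeInteger ] .

  ex:DOCWine rdfs:subClassOf                        # allValuesFrom restriction
      [ a owl:Restriction ; owl:onProperty ex:isProducedBy ;
        owl:allValuesFrom ex:CertifiedWinery ] .

  ex:FamousPainter rdfs:subClassOf                  # someValuesFrom restriction
      [ a owl:Restriction ; owl:onProperty ex:paints ;
        owl:someValuesFrom ex:FamousPaint ] .

  ex:TheBoss owl:sameAs            ex:BruceSpringsteen .   # same individual
  ex:Painter owl:equivalentClass   ex:Pittore .            # same class
  ex:paints  owl:equivalentProperty ex:dipinge .           # same property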
92. RDF-S/OWL in a nutshell Expressivity vs. Tractability The more expressive an ontological language is, the less tractable it is The Web Ontology Language (OWL) comes with several profiles that offer different trade-offs between expressivity and tractability. Advanced
93. RDF-S/OWL in a nutshell OWL 1 and OWL 2 profiles OWL 1 defines only one fragment (OWL Lite) And it isn’t very tractable! OWL 2 defines several different fragments with Useful computational properties E.g., reasoning complexity in range LOGSPACE to PTIME Useful implementation possibilities E.g., Smaller fragments implementable using RDBs OWL 2 profiles OWL 2 EL, OWL 2 QL, OWL 2 RL
94. RDF-S/OWL in a nutshell OWL 2 EL Useful for applications employing ontologies that contain a very large number of properties and/or classes Captures the expressive power used by many large-scale ontologies E.g., SNOMED CT, NCI thesaurus Features Included: existential restrictions, intersection, subClass, equivalentClass, disjointness, range and domain, object property inclusion possibly involving property chains, data property inclusion, transitive properties, keys … Missing: value restrictions, cardinality restrictions (min, max and exact), disjunction and negation Maximal language for which reasoning (including query answering) is known to be worst-case polynomial
95. RDF-S/OWL in a nutshell OWL 2 QL Useful for applications that use very large volumes of data, and where query answering is the most important task Captures expressive power of simple ontologies like thesauri, classifications, and (most of) expressive power of ER/UML schemas E.g., CIM10, Thesaurus of Nephrology, ... Features Included: limited form of existential restrictions, subClass, equivalentClass, disjointness, range & domain, symmetric properties, … Missing: existential quantification to a class, self restriction, nominals, universal quantification to a class, disjunction etc. Can be implemented on top of standard relational DBMS Maximal language for which reasoning (including query answering) is known to be worst case logspace (same as DB)
96. RDF-S/OWL in a nutshell OWL 2 RL Useful for applications that require scalable reasoning without sacrificing too much expressive power, and where query answering is the most important task Supports most OWL features, but with restrictions placed on the syntax: the OWL 2 standard semantics only applies when constructs are used in a restricted way Can be implemented on top of rule-extended DBMS E.g., Oracle’s OWL Prime implemented using forward chaining rules in Oracle 11g Related to DLP and pD* Allows for scalable (polynomial) reasoning using rule-based technologies
97. RDF-S/OWL in a nutshell RDF-S/OWL Resources OWL Frequently Asked Questions http://www.w3.org/2003/08/owlfaq.html RDF-S/OWL implementations - community maintained list of open-source and commercial Semantic Web tools http://esw.w3.org/topic/SemanticWebTools#head-d07454b4f0d51f5e9d878822d911d0bfea9dcdfd RDF-S Specification http://www.w3.org/TR/rdf-schema/ OWL Working Group Wiki http://www.w3.org/2007/OWL/wiki
98. Conclusions 1/2 Achievements Extending the Web with a data commons 27 billion triples 395 million links Vibrant, global RTD community Industrial uptake is beginning e.g., BBC, NYT, Eli Lilly Government sponsorship mainly in the USA and UK, but something is moving in the EU as well
99. Conclusions 2/2 Challenges Coherence relatively few links, and they are expensive to maintain Quality Partly low-quality data and inconsistencies Performance Still substantial penalties compared to relational databases Data consumption Large-scale processing, schema mapping and data fusion are still in their infancy Usability Missing direct end-user tools and network effects
100. Credits Introduction and RDF slides are inspired by “Fundamentals of the Semantic Web” by David Booth http://www.w3.org/2002/Talks/0813-semweb-dbooth/ SPARQL slides are partially based on the WWW 2005 SPARQL Tutorial http://www.w3.org/2004/Talks/17Dec-sparql/ OWL 2 slides are partially based on “OWL 2 Update” by Christine Golbreich http://esw.w3.org/topic/HCLSIG/F2F/2008-10_F2F?action=AttachFile&do=get&target=HCLSF2F2008-OWL2-CG.pdf and “Scalable Ontology-Based Information Systems” by Ian Horrocks, presented at the EDBT/ICDT 2010 Joint Conference, Lausanne, Switzerland, March 26th, 2010 http://www.comlab.ox.ac.uk/people/ian.horrocks/Seminars/download/EDBT-2010.pdf Conclusions are based on “Towards the Linked Data Web” by Sören Auer http://www.slideshare.net/lod2project/towards-the-linked-data-web-sren-auer-2612011-brussels-belgium
102. Introduction to Semantic Web for GIS Practitioners 3.5.2011, Como Emanuele Della Valle [email_address] http://emanueledellavalle.org