This document provides an overview of graph databases and RDF databases: graph databases store data as nodes and relationships, while RDF databases use a triple-store model of subject-predicate-object statements. Examples demonstrate how to model and query data with both kinds of database, and a scientific article on querying RDF data from a graph database perspective is discussed.
1. Graph and RDF Databases
Context : Advanced Databases course
Prepared by : Nassim BAHRI
February 19th, 2015
2. Table of contents
I. Introduction : Overview of BIG DATA & NOSQL
II. Graph Databases
III. RDF Databases
IV. Application example
V. Scientific article
VI. Conclusion and Q&A
4. Introduction : Data Model
Key-Value Stores
(Voldemort, Riak)
Big Table Column
(HBase, Cassandra, Hypertable)
Document Databases
(MongoDB)
Graph Databases
(Neo4J)
5. Introduction : Data Model
[Chart: the four NoSQL data models plotted by data size (x-axis) and data complexity (y-axis). Key-value stores handle the largest volumes of the simplest data, followed by column-family and document databases; together these cover roughly 90% of use cases. Graph databases handle the most complex, most connected data, and they are what we are interested in here. Source: Neo Technology webinar.]
6. Graph Databases
What is Graph Database?
A graph database is a database whose
specific purpose is the storage of graph-oriented data
structures.
It is simply an object-oriented database based on graph
theory.
7. Graph Databases
Representation
• Nodes
• Relationships between nodes
• Properties on both
[Diagram: three nodes with properties. Node 1 is a person (Name: John, Age: 43), node 2 is a company (Name: Google), node 3 is a vehicle (Type: Ford, Color: blue). A "Work in" relationship links John to Google, carrying the property Since: 2013.]
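To make this concrete, here is a minimal Cypher sketch of the diagram above; the Person/Company labels and the WORKS_IN relationship type are assumed names, not taken from the slide:

// Create the two nodes, then the relationship with its property
CREATE (john:Person { name: 'John', age: 43 })
CREATE (google:Company { name: 'Google' })
CREATE (john)-[:WORKS_IN { since: 2013 }]->(google)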
9. Graph VS Relational Databases
Relational Database Modeling
Person:
ID | Name
1  | Larry Page
2  | Sergey Brin
3  | Larry Ellison
N  | …

Company:
ID | Name
1  | Google
2  | Oracle
…  | …

WorksIn:
PersonID | CompanyID | Since
1        | 1         | 1998
2        | 1         | 2001
3        | 2         | 2010

Google's employees?

SELECT Person.Name
FROM Person, Company, WorksIn
WHERE Company.Name = 'Google'
AND WorksIn.CompanyID = Company.ID
AND WorksIn.PersonId = Person.ID;

(Answering this query takes three index lookups, one per table.)
10. Graph VS Relational Databases
Graph Database Modeling
[Diagram: Person nodes (Larry Page, Sergey Brin, Larry Ellison) and Company nodes (Google, Oracle) connected by WorksIn relationships: Larry Page → Google (Since: 1998), Sergey Brin → Google (Since: 2001), Larry Ellison → Oracle (Since: 2010). Answering the same question takes a single index lookup to find Google, then pointer traversal of its incoming WorksIn relationships.]
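For comparison, a sketch of the same question in Cypher, assuming labels and property names that mirror the relational schema above:

// One index lookup for Google, then pointer traversal
MATCH (p:Person)-[:WORKS_IN]->(c:Company { name: 'Google' })
RETURN p.name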
11. Graph Databases
Graph storage and graph processing
1. The underlying storage
• Some databases use native graph storage,
• others layer the graph on another store (relational, object-oriented, …).
2. The processing engine
• With native graph processing, the nodes are physically connected to each other in the database:
• index-free adjacency.
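As a sketch of what index-free adjacency buys in practice, the following query (written against the friend graph built in the next slides, and not part of the original deck) walks up to two FRIEND_OF hops by following stored relationship pointers, with no table joins:

// Friends and friends-of-friends of John, no joins required
MATCH (john:Person { name: 'John' })-[:FRIEND_OF*1..2]-(friend:Person)
WHERE friend <> john
RETURN DISTINCT friend.name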
13. Graph Databases : Example
Visual Modeling
[Diagram: two Person nodes, John (Age: 27) and Sally (Age: 32), linked by FRIEND_OF relationships (Since: 01/09/2013), plus a Book node (Title: Graph Databases; Authors: Ian Robinson, Jim Webber). Sally HAS_READ the book (On: 02/09/2013, Rating: 4) and John HAS_READ it (On: 02/03/2013, Rating: 5).]
14. Graph Databases : Example
Create a simple dataset
// Create Sally
CREATE (sally:Person { name: 'Sally', age: 32 })
// Create John
CREATE (john:Person { name: 'John', age: 27 })
// Create Graph Databases book
CREATE (gdb:Book { title: 'Graph Databases',
authors: ['Ian Robinson', 'Jim Webber'] })
// Connect Sally and John as friends
CREATE (sally)-[:FRIEND_OF { since: 1357718400 }]->(john)
// Connect Sally to Graph Databases book
CREATE (sally)-[:HAS_READ { rating: 4, on: 1360396800 }]->(gdb)
// Connect John to Graph Databases book
CREATE (john)-[:HAS_READ { rating: 5, on: 1359878400 }]->(gdb)
16. Graph Databases : Example
Simple selection from node:
Query 1 : How old is Sally?
MATCH (sally:Person { name: 'Sally' })
RETURN sally.age as sally_age
17. Graph Databases : Example
Simple selection from node:
Query 2 : Who are the authors of Graph Databases?
MATCH (gdb:Book { title: 'Graph Databases' })
RETURN gdb.authors as authors
18. Graph Databases : Example
Selection using relationship:
Query 3 : Who are Sally's friends?
MATCH (sally:Person { name: 'Sally' })
MATCH (sally)-[r:FRIEND_OF]-(person)
RETURN person.name as sally_friend
19. Graph Databases : Example
Selection using relationship and group function:
Query 4 : What is the average rating of Graph Databases?
MATCH (gdb:Book { title: 'Graph Databases' })
MATCH (gdb)<-[r:HAS_READ]-()
RETURN avg(r.rating) as average_rating
20. Graph Databases : Example
Using order and limit in query:
Query 5 : Who read Graph Databases first, Sally or John?
MATCH (people:Person)
WHERE people.name = 'John' OR people.name = 'Sally'
MATCH (people)-[r:HAS_READ]->(gdb:Book { title: 'Graph Databases' })
RETURN people.name as first_reader
ORDER BY r.on
LIMIT 1
21. Graph Databases : Example
Visual Modeling
[Diagram: the previous graph extended with a new Person node, Alain (Age: 19). John and Sally remain FRIEND_OF each other (Since: 01/09/2013), and Sally is FRIEND_OF Alain (Since: 01/11/2014).]
22. Graph Databases : Example
Completing our schema
// Create Alain
CREATE (alain:Person { name: 'Alain', age: 19 })
// Connect Sally and Alain as friends
MATCH (alain:Person { name: 'Alain' })
MATCH (sally:Person { name: 'Sally' })
CREATE (sally)-[:FRIEND_OF { since: 1358818400 }]->(alain)
[Diagram: the graph now contains the nodes Alain, Sally, John, and the Graph Databases book.]
23. Graph Databases : Example
Node / relationship navigation:
Query 6 : Which friend is shared between Alain and John?
MATCH (alain:Person { name: 'Alain' })
MATCH (john:Person { name: 'John' })
MATCH (alain)-[:FRIEND_OF]-(person)-[:FRIEND_OF]-(john)
RETURN person.name as friend
24. Graph Databases : Example
Update node’s properties:
Query 7 : Change Alain's name to Larry
MATCH (n { name: 'Alain' })
SET n.name = 'Larry'
Query 8 : Remove property
MATCH (n { name: 'Larry' })
SET n.name = NULL
Query 9 : Add property
MATCH (n { name: 'John' })
SET n += { hungry: TRUE , position: 'Entrepreneur' }
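A small aside on Query 8: in Cypher, setting a property to NULL deletes it, so an equivalent formulation uses the REMOVE clause:

// Equivalent to SET n.name = NULL
MATCH (n { name: 'Larry' })
REMOVE n.name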
25. RDF Databases
The principle of the web
HTTP Request
HTTP Response
URL : http://website.com
Communication protocol : HTTP
Representation language : HTML
26. RDF Databases
Changing status
URL (Uniform Resource Locator): http://website.com
URI (Uniform Resource Identifier): http://animals.com#lion
IRI (Internationalized Resource Identifier): http://الحيوانات.tn#lion
29. RDF Databases
Data model & syntax
Description : (Subject, Predicate, Object)
Example : “doc.html is created by John and belongs to the music theme”
Doc.html is created by John
Doc.html belongs to the music theme
30. RDF Databases
Data model & syntax
(Subject, Predicate, Object)
(Vertex, Edge, Vertex)
[Diagram: the two statements drawn as a graph. Doc.html is linked to John by an Author edge and to Music by a Theme edge.]
31. RDF Databases
Labeled graph with URI and literals
[Diagram: the same graph with URIs as labels. <http://www.website.com/doc.html> is linked to <http://www.website.com/john#me> by the property <http://www.website.com/schema#author>, and to the literal "Music" by <http://www.website.com/schema#theme>.]
33. RDF Databases
SPARQL Protocol And RDF Query Language
• Syntax similar to SQL:
SELECT variables
FROM data source
WHERE { graph pattern }
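As an end-to-end illustration (reusing the doc.html example and the assumed s: prefix from above, so the names are not part of the original slide), a query for every document in the music theme and its author might look like:

PREFIX s: <http://www.website.com/schema#>

SELECT ?doc ?author
WHERE {
  ?doc s:theme  "Music" .
  ?doc s:author ?author .
}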
34. RDF Databases
SPARQL Protocol And RDF Query Language
Get all persons:
SELECT ?x
WHERE { ?x rdf:type ex:Person }

Get the full graph (every triple):
SELECT ?subject ?property ?value
WHERE { ?subject ?property ?value }

Get all persons who have a name:
SELECT ?x
WHERE { ?x rdf:type ex:Person .
        ?x ex:name ?name . }
35. RDF Databases
SPARQL Protocol And RDF Query Language
Declaring prefixes
PREFIX esen: <http://esen.tn#>
SELECT ?student
WHERE {
?student esen:registeredAt ?x.
}
37. RDF Databases
SPARQL Protocol And RDF Query Language
Union
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?name
WHERE {
?person foaf:name ?name .
{
{?person foaf:homepage <http://john.info> .} UNION
{?person foaf:homepage <http://paul.info> .}
}
}
38. RDF Databases
SPARQL Protocol And RDF Query Language
Minus
PREFIX ex: <http://website.com#>
SELECT ?person
WHERE {
{ ?person rdf:type ?type }
MINUS { ?person rdf:type ex:student }
}
39. RDF Databases
Use case : rich snippets Google
<div xmlns:v="http://rdf.data-vocabulary.org/#"
typeof="v:Person">
My name is <span property="v:name">
Pierre Dumoulin</span>.
My personal homepage:
<a href="http://www.example.com" rel="v:url">
www.homepage.com</a>. I live in
<span rel="v:address" typeof="v:Address">
<span property="v:street-address">12 street name</span>
<span property="v:locality">city name</span>,
<span property="v:region">XY</span>
<span property="v:postal-code">12345</span>.
</span>
</div>
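Roughly the triples an RDFa processor would extract from this snippet (a sketch: the blank-node labels are arbitrary and the v: prefix matches the xmlns declaration above):

@prefix v: <http://rdf.data-vocabulary.org/#> .

_:p a v:Person ;
    v:name "Pierre Dumoulin" ;
    v:url <http://www.example.com> ;
    v:address _:addr .

_:addr a v:Address ;
    v:street-address "12 street name" ;
    v:locality "city name" ;
    v:region "XY" ;
    v:postal-code "12345" .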
40. Application example (RDF)
Data storage
# Default graph (stored at http://example.org/foaf/aliceFoaf)
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
_:a foaf:name "Alice" .
_:b foaf:mbox <mailto:[email protected]> .
_:a foaf:mbox <mailto:[email protected]> .
Query
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?name
FROM <http://example.org/foaf/aliceFoaf>
WHERE { ?x foaf:name ?name }
Result:
name : Alice
41. Application example (Neo4J)
Question : Who is older, Sally or John?
[Diagram: the same graph as slide 21. John (Age: 27) and Sally (Age: 32) are FRIEND_OF each other (Since: 01/09/2013), and Sally is FRIEND_OF Alain (Age: 19) (Since: 01/11/2014).]
42. Application example (Neo4J)
Who is older, Sally or John?
MATCH (people:Person)
WHERE people.name = 'John' OR people.name = 'Sally'
RETURN people.name as oldest
ORDER BY people.age DESC
LIMIT 1
43. Scientific article
Title : Querying RDF Data from a Graph Database Perspective
Book title : The Semantic Web: Research and Applications
Pages : 346-360
Online ISBN : 978-3-540-31547-6
Series Volume : 3532
Publisher : Springer Berlin Heidelberg
Copyright : 2005
Authors : Renzo Angles, Claudio Gutierrez
44. Scientific article
MODEL      | LEVEL            | DATA COMPLEXITY | CONNECTIVITY | TYPE OF DATA
Network    | physical         | simple          | high         | homogeneous
Relational | logical          | simple          | low          | homogeneous
Semantic   | user             | simple/medium   | high         | homogeneous
Object-O   | logical/physical | complex         | medium       | heterogeneous
XML        | logical          | medium          | medium       | heterogeneous
RDF        | logical          | medium          | high         | heterogeneous

Table 1 : Summary of comparison among different database models
45. Scientific article
PROPERTY                   | G   | G+ | GraphLog | Gram | GraphDB | Lorel | F-G
Adjacent nodes             | +/- | √  | √        | √    | +/-     | √     | +/-
Adjacent edges             | +/- | √  | √        | √    | +/-     | √     | +/-
Degree of a node           | x   | √  | √        | x    | ?       | x     | x
Path                       | √   | √  | √        | √    | √       | √     | √
Fixed-length path          | √   | √  | √        | √    | √       | √     | √
Distance between two nodes | x   | √  | √        | x    | ?       | x     | x
Diameter                   | x   | √  | √        | x    | ?       | x     | x

Table 2 : Support of some graph database query languages for the example graph properties
46. Scientific article
PROPERTY                   | RQL | SeRQL | RDQL | Triple | N3  | Versa | RxPath
Adjacent nodes             | +/- | +/-   | +/-  | +/-    | +/- | +/-   | x
Adjacent edges             | +/- | +/-   | +/-  | +/-    | x   | x     | x
Degree of a node           | +/- | x     | x    | x      | x   | x     | x
Path                       | x   | x     | x    | x      | x   | x     | +/-
Fixed-length path          | +/- | +/-   | +/-  | +/-    | +/- | x     | +/-
Distance between two nodes | x   | x     | x    | x      | x   | x     | x
Diameter                   | x   | x     | x    | x      | x   | x     | x

Table 3 : Support of some current RDF query languages for some example graph properties
47. Conclusion
• Use a graph database to store data in graph form or in a hierarchical tree structure.
• Graph databases offer performance, agility, and flexibility.
• Built-in APIs give access to classic graph algorithms such as the shortest path.
48. Bibliography
[1] Ian Robinson, Jim Webber, and Emil Eifrem. «Graph Databases». O'Reilly, 2013.
[2] Serge Miranda, Fabien Gandon. «Des Bases de Données à Big Data». Course at Nice Sophia Antipolis University, MOOC, 2015.
[3] Michel Domenjoud. «Bases de données graphes : un tour d'horizon». Available at <http://blog.octo.com/bases-de-donnees-graphes-un-tour-dhorizon> (consulted 18/02/2015).
[4] Neo4j community. «Cypher Query Language». Available at <http://neo4j.com/developer/data-modeling/> (consulted 18/02/2015).
[5] Frank Manola, Eric Miller, Brian McBride. «RDF 1.1 Primer». Available at <http://www.w3.org/TR/2014/NOTE-rdf11-primer-20140225/> (consulted 18/02/2015).
[6] Eric Prud'hommeaux, Andy Seaborne. «SPARQL Query Language for RDF». Available at <http://www.w3.org/TR/rdf-sparql-query/> (consulted 18/02/2015).
[7] Neo4j community. «Introduction to graph databases webinar». Available at <http://www.neo4j.org/learn/videos_webinar> (consulted 18/02/2015).
#2: Hello everyone, and welcome. Today we present our project on graph databases and RDF databases, in the context of the Advanced Databases course.
This presentation was prepared by myself, Nassim Bahri, and by Nabila Hosni, who unfortunately could not be with us today.
Let us start by presenting the main parts of our project.
#4: Faced with the explosion of data volumes, we find ourselves managing large masses of data whose structure is becoming increasingly complex.
To address this problem, a new technological field has emerged: what we call "Big Data".
This field aims to offer an alternative to traditional database management and analysis solutions.
Among the technologies used in this field, we can mention in-memory data storage (to speed up query processing), distributed databases that spread processing across several servers, and finally the famous NoSQL.
This term designates a category of DBMSs no longer founded on the classical relational architecture: the logical unit of storage is no longer the table, and the data is generally not manipulated with SQL.
#5: 1 - This model is a collection of key-value pairs, based on a paper published by Amazon.
2 - This model is based on giant tables and column families (each row can have its own schema), based on a paper published by Google.
3 - Data is stored in documents with a well-defined format (JSON).
4 - Inspired by graph theory, based on a system of nodes and relationships, also using key-value pairs.
#6: The question at this point is: which model should we choose, and what differentiates one model from another? To answer it, we position the four models in a matrix whose axes are storage capacity (data size) and the complexity of the manipulated data.
#7: A graph database is an object-oriented database built on graph theory.
#8: In graph databases there are three key points to know:
1 - First, identify the entities, which are represented by nodes: the equivalent of tables in relational databases and of objects in object-oriented databases.
2 - Next, identify the relationships between the nodes: in the relational model this corresponds to joins between tables, and likewise for object-oriented databases.
3 - Finally, the properties, represented as key-value pairs, which apply to both nodes and relationships.
#9: 1 - The major reason to choose graph databases is performance. Unlike relational databases, whose performance degrades as the data grows, graph databases show stable performance.
2 - Their second strong point is that they are naturally additive: we can add new types of nodes, relationships, and even subgraphs without disturbing the behavior of the database.
3 - Agility implies an iterative and incremental process. Graph databases let us build the schema iteratively, adding nodes and relationships as we go, and they also offer a programming API and a query language.
#10: With relational modeling, executing this query requires three index lookups, optimized according to the declared indexes and foreign keys.
#11: With graph modeling, executing the query requires a single index lookup (to find the company Google), followed by traversal of the relationships in the graph via physical pointers.
This example is extremely simple, but it highlights a use case in which a graph database is logically more performant than a relational one.
#12: Graph databases correspond to any storage system providing adjacency of neighboring elements without indexing: every neighbor of an entity is reachable through a physical pointer.
#13: There are quite a few graph database management systems: those offering native graph storage and those relying on another storage engine (relational or otherwise). We chose to present the remaining examples with Neo4j, which is natively graph-oriented, very well documented, and backed by an active community.
#26: Let us now move to another database model: RDF databases. Before diving into the subject, we must recall a few key notions.
The well-known principle of the web lets clients connect to a web server to fetch pages and display them on screen.
In this architecture there are three important components.
These three components form the web.
#27: We now have a means of identifying resources on the web; we therefore need a tool to describe those resources, that is, to go beyond the text displayed in web pages and describe resources with structured data directly usable by applications.
To do so, we use the same client/server principle of the web, but the result is no longer only an HTML page: it can be any object around us.
#28: The solution is proposed and standardized by the W3C according to the following stack:
querying the data;
exchanging the schemas of the data.
Let us therefore move to the first layer of our standardization stack, which is the RDF language.
#29: Anything that can have a URI.
We attach to these resources structured descriptions that can be manipulated and exchanged by applications.
#31: These triples can be viewed as a graph in which the subject and the object form the nodes and the predicate forms the relationship between adjacent nodes.
#32: Since we are on the web, these labels are not free-form words: we use the web's identification mechanisms, namely URIs.
#33: Beyond the model, RDF offers several syntaxes for structuring our descriptions; we chose Turtle, which offers a readable and easy-to-understand syntax.
#34: Beyond the model, RDF offers several syntaxes for structuring our descriptions; we chose Turtle, which offers a readable and easy-to-understand syntax.
#48: The choice among the different models proposed by NoSQL technology really depends on the size and the complexity of the data.
A graph database is the best fit for storing data in the form of a graph, a tree, or a hierarchical structure.
A built-in API allows the use of classic graph-theory algorithms (shortest path, Dijkstra, A*, centrality computation, …).