This document provides an overview and agenda for a workshop on building GraphQL APIs with the Neo4j GraphQL library and Neo4j Aura. The agenda includes introductions to GraphQL concepts, the Neo4j GraphQL library, and hands-on exercises to create a GraphQL API backed by Neo4j Aura for an online bookstore application. Code samples and resources are also provided.
This document provides an overview of graph databases and their use cases. It begins with definitions of graphs and graph databases. It then gives examples of how graph databases can be used for social networking, network management, and other domains where data is interconnected. It provides Cypher examples for creating and querying graph patterns in a social networking and IT network management scenario. Finally, it discusses the graph database ecosystem and how graphs can be deployed for both online transaction processing and batch processing use cases.
Dynamic Rule-based Real-time Market Data Alerts (Flink Forward)
Flink Forward San Francisco 2022.
At Bloomberg, we deal with high volumes of real-time market data. Our clients expect to be notified of any anomalies in this market data, which may indicate volatile movements in the markets, notable trades, forthcoming events, or system failures. The parameters for these alerts are always evolving and our clients can update them dynamically. In this talk, we'll cover how we utilized the open source Apache Flink and Siddhi SQL projects to build a distributed, scalable, low-latency and dynamic rule-based, real-time alerting system to solve our clients' needs. We'll also cover the lessons we learned along our journey.
by Ajay Vyasapeetam & Madhuri Jain
What is GraphQL? Why GraphQL? How to GraphQL?
Workshop introduction presentation
GraphQL Developers https://selleo.com/graphql-expert-developers-team
This document summarizes a presentation about the graph database Neo4j. The presentation included an agenda that covered graphs and their power, how graphs change data views, and real-time recommendations with graphs. It introduced the presenters and discussed how data relationships unlock value. It described how Neo4j allows modeling data as a graph to unlock this value through relationship-based queries, evolution of applications, and high performance at scale. Examples showed how Neo4j outperforms relational and NoSQL databases when relationships are important. The presentation concluded with examples of how Neo4j customers have benefited.
Learn how to build advanced GraphQL queries, how to work with filters and patches and how to embed GraphQL in languages like Python and Java. These slides are the second set in our webinar series on GraphQL.
Using Graph and Transformer Embeddings for Vector Based Retrieval (Sujit Pal)
For the longest time, term-based vector representations based on whole-document statistics, such as TF-IDF, have been the staple of efficient and effective information retrieval. The popularity of Deep Learning over the past decade has resulted in the development of many interesting embedding schemes. Like term-based vector representations, these embeddings depend on structure implicit in language and user behavior. Unlike them, they leverage the distributional hypothesis, which states that the meaning of a word is determined by the context in which it appears. These embeddings have been found to better encode the semantics of the word, compared to term-based representations. Despite this, it has only recently become practical to use embeddings in Information Retrieval at scale.
In this presentation, we will describe how we applied two new embedding schemes to Scopus, Elsevier’s broad coverage database of scientific, technical, and medical literature. Both schemes are based on the distributional hypothesis but come from very different backgrounds. The first embedding is a graph embedding called node2vec, that encodes papers using citation relationships between them as specified by their authors. The second embedding leverages Transformers, a recent innovation in the area of Deep Learning, that are essentially language models trained on large bodies of text. These two embeddings exploit the signal implicit in these data sources and produce semantically rich user and content-based vector representations respectively. We will evaluate these embedding schemes and describe how we used the Vespa search engine to search these embeddings for similar documents within the Scopus dataset. Finally, we will describe how RELX staff can access these embeddings for their own data science needs, independent of the search application.
The presentation consists of the following:
- What is a graph DB?
- Why choose a graph DB?
- Types of graph DBs (based on storage)
- JanusGraph architecture
- JanusGraph basic terms
- Conceptual working of Gremlin queries
- Setting up JanusGraph locally
- Some sample queries
- Schema and data modelling
- Automatic schema maker
- Indexes in JanusGraph
The document discusses a presentation about connecting data and Neo4j. It covers data ecosystems and where different technologies fit, how Neo4j works as a graph database, and building graph-native organizations. It also discusses Neo4j's long term vision of connecting enterprise data and the state of data in 2018. Key points include how data structures have evolved from hierarchies to dynamic knowledge graphs and how different technologies like relational databases and Neo4j are suited for different types of queries and connected data problems.
These webinar slides are an introduction to Neo4j and Graph Databases. They discuss the primary use cases for Graph Databases and the properties of Neo4j which make those use cases possible. They also cover the high-level steps of modeling, importing, and querying your data using Cypher and touch on RDBMS to Graph.
This document discusses building a full stack graph application with Neo4j AuraDB, GitHub Actions, GraphQL, Next.js, and Vercel. It covers how to get data into Neo4j, build a GraphQL API with Neo4j and the GraphQL library, work with graphs on the frontend using GraphQL and React, and deploy the full application to Vercel. Code examples and resources are provided for each part of the process.
NiFi Best Practices for the Enterprise (Gregory Keys)
The document discusses best practices for implementing Apache NiFi in an enterprise. It recommends establishing a Center of Excellence (COE) to align stakeholders, provide guidance, and develop standards and processes for NiFi deployment. The COE should work with business leaders to understand data flow needs and ensure NiFi is delivering business value. When scaling NiFi across a large enterprise, it may make sense to have multiple semi-autonomous NiFi clusters for different business groups rather than one large cluster. Reusable templates, components, and patterns can help with development efficiencies.
The talk covers the following topics:
* Fundamental parts of a GraphQL server
* Defining API shape - GraphQL schema
* Resolving object fields
* Mutative APIs
* Making requests to a GraphQL server
* Solving N+1 query problem: dataloader
How to Utilize MLflow and Kubernetes to Build an Enterprise ML Platform (Databricks)
This document summarizes a presentation about utilizing MLflow and Kubernetes to build an enterprise machine learning platform. It discusses challenges that motivated building such a platform, like lack of model management and difficult deployments. The solution presented abstracts data pipelines into modular components to standardize workflows. It also uses MLflow to package and track models and experiments, and Kubernetes with Kubeflow to deploy models at scale. A demo shows implementing model serving with these tools.
Graph Databases and Machine Learning | November 2018 (TigerGraph)
Graph Database and Machine Learning: Finding a Happy Marriage. Graph databases and machine learning both represent powerful tools for getting more value from data; learn how they can form a harmonious marriage to up-level machine learning.
This document provides an overview of a Neo4j basic training session. The training will cover querying graph patterns with Cypher, designing and implementing a graph database model, and evolving existing graphs to support new requirements. Attendees will learn about graph modeling concepts like nodes, relationships, properties and labels. They will go through a modeling workflow example of developing a graph model to represent airport connectivity data from a CSV file and querying the resulting graph.
The document discusses 10 tips and tricks for tuning Cypher queries in Neo4j. It covers using PROFILE to analyze query plans, avoiding unnecessary property reads, elevating properties to labels when possible, effectively using indexes and constraints, handling relationships efficiently, leveraging lists and map projections, implementing pattern comprehensions and subqueries, batching updates, and using user-defined procedures and functions. The final slides provide examples of special slide formatting and include placeholders for images or logos.
VictoriaLogs: Open Source Log Management System - Preview (VictoriaMetrics)
VictoriaLogs Preview - Aliaksandr Valialkin
* Existing open source log management systems
- ELK (ElasticSearch) stack: Pros & Cons
- Grafana Loki: Pros & Cons
* What is VictoriaLogs
- Open source log management system from VictoriaMetrics
- Easy to set up and operate
- Scales vertically and horizontally
- Optimized for low resource usage (CPU, RAM, disk space)
- Accepts data from Logstash and Fluentbit in Elasticsearch format
- Accepts data from Promtail in Loki format
- Supports stream concept from Loki
- Provides an easy-to-use yet powerful query language: LogsQL
* LogsQL Examples
- Search by time
- Full-text search
- Combining search queries
- Searching arbitrary labels
* Log Streams
- What is a log stream?
- LogsQL examples: querying log streams
- Stream labels vs log labels
* LogsQL: stats over access logs
* VictoriaLogs: CLI Integration
* VictoriaLogs Recap
Flink Streaming is the real-time data processing framework of Apache Flink. Flink Streaming provides high-level functional APIs in Scala and Java, backed by a high-performance true-streaming runtime.
This document provides an overview of Apache Flink internals. It begins with an introduction and recap of Flink programming concepts. It then discusses how Flink programs are compiled into execution plans and executed in a pipelined fashion, as opposed to being executed eagerly like regular code. The document outlines Flink's architecture including the optimizer, runtime environment, and data storage integrations. It also covers iterative processing and how Flink handles iterations both by unrolling loops and with native iterative datasets.
Get Started with the Most Advanced Edition Yet of Neo4j Graph Data Science (Neo4j)
The document discusses Neo4j's graph data science capabilities. It highlights that Neo4j provides tools for graph algorithms, machine learning pipelines for tasks like node classification and link prediction, and a graph catalog for managing graph projections from the underlying database. The document also notes that Neo4j's capabilities allow users to leverage relationships in connected data to answer business questions.
Knowledge Graphs and Generative AI
Dr. Katie Roberts, Data Science Solutions Architect, Neo4j
It’s no secret that Large Language Models (LLMs) are popular right now, especially in the age of Generative AI. LLMs are powerful models that enable access to data and insights for any user, regardless of their technical background, however, they are not without challenges. Hallucinations, generic responses, bias, and a lack of traceability can give organizations pause when thinking about how to take advantage of this technology. Graphs are well suited to ground LLMs as they allow you to take advantage of relationships within your data that are often overlooked with traditional data storage and data science approaches. Combining Knowledge Graphs and LLMs enables contextual and semantic information retrieval from both structured and unstructured data sources. In this session, you’ll learn how graphs and graph data science can be incorporated into your analytics practice, and how a connected data platform can improve explainability, accuracy, and specificity of applications backed by foundation models.
This document discusses using Apache Kafka as a data hub to capture changes from various data sources using change data capture (CDC). It outlines several common CDC patterns like using modification dates, database triggers, or log files to identify changes. It then discusses using Kafka Connect to integrate various data sources like MongoDB, PostgreSQL and replicate changes. The document provides examples of open source CDC connectors and concludes with suggestions for getting involved in the Apache Kafka community.
The document outlines the plan and syllabus for a Data Engineering Zoomcamp hosted by DataTalks.Club. It introduces the four instructors for the course - Ankush Khanna, Sejal Vaidya, Victoria Perez Mola, and Alexey Grigorev. The 10-week course will cover topics like data ingestion, data warehousing with BigQuery, analytics engineering with dbt, batch processing with Spark, streaming with Kafka, and a culminating 3-week student project. Pre-requisites include experience with Python, SQL, and the command line. Course materials will be pre-recorded videos and there will be weekly live office hours for support. Students can earn a certificate and compete on a leaderboard.
This document provides an overview and agenda for a workshop on building a full stack GraphQL application using Neo4j AuraDB, Next.js, and Vercel. The agenda includes introductions to Neo4j AuraDB, building GraphQL APIs, Next.js, and deploying to Vercel. Hands-on exercises will have attendees create a Neo4j AuraDB instance, build GraphQL APIs backed by Neo4j, develop a Next.js frontend application, and deploy the full stack application to Vercel.
Building Fullstack Serverless GraphQL APIs In The Cloud (Nordic APIs)
Follow along as we build a GraphQL API using Apollo and Neo4j Database. We’ll show how to leverage the scale of serverless for our GraphQL API and how to take advantage of 3rd party services like Auth0 for handling authentication and authorization in our GraphQL app.
Full Stack Development with Neo4j and GraphQL (Neo4j)
The document discusses using GraphQL with Neo4j databases. It provides an overview of GraphQL, how to build a GraphQL service including defining a schema and implementing resolver functions. It also discusses Neo4j-GraphQL integrations which can automatically generate Cypher queries from GraphQL, improving performance by batching data fetching and allowing exposing Cypher queries through GraphQL directives. The neo4j-graphql library facilitates integrating Neo4j with GraphQL services.
Sashko Stubailo - The GraphQL and Apollo Stack: connecting everything together (React Conf Brasil)
Presented at React Conf Brasil, São Paulo, October 7, 2017. #reactconfbr
I've been exploring the space of declarative developer tools and frameworks for over five years. Most recently, I was the founding member of the Apollo project at Meteor Development Group. My greatest passion is to make software development simpler, and enable more people to create software to bring good to the world.
https://medium.com/@stubailo
@stubailo
- Sponsors: Pipefy, Globo.com, Meteor, Apollo, Taller, Fullcircle, Quanto, Udacity, Cubos, Segware, Entria
- Support: Concrete, Rung, LuizaLabs, Movile, Rivendel, GreenMile, STQ, Hi Platform
- Promotion: InfoQ, DevNaEstrada, CodamosClub, JS Ladies, NodeBR, Training Center, BrazilJS, Tableless, GeekHunter
- Afterparty: An English Thing
GraphQL is a wonderful abstraction for describing and querying data. Apollo is an ambitious project to help you build apps with GraphQL. In this talk, we'll go over how all the parts—Client, Server, Dev Tools, Codegen, and more—create an end-to-end experience for building apps on top of any data.
Detailed description:
In today's development ecosystem, there are tons of options for almost every part of your application development process: UI rendering, styling, server side rendering, build systems, type checking, databases, frontend data management, and more. However, there's one part of the stack that hasn't gotten as much love in the last decade, because it usually falls in the cracks between frontend and backend developers: Data fetching.
The most common way to load data in apps today is to use a REST API on the server and manage the data manually on the client. Whether you're using Redux, MobX, or something else, you're usually doing everything yourself—deciding when to load data, how to keep it fresh, updating the store after sending updates to the server, and more. But if you're trying to develop the best user experience for your app, all of that gets in the way; you shouldn't have to become a systems engineer to create a great frontend. The Apollo project is based on the belief that data loading doesn't have to be complicated; instead, you should be able to easily get the data you want, when you want it, and it should be managed for you just like React manages updating your UI.
Because data loading touches both the frontend and backend of your app, GraphQL and Apollo have to include many parts to fulfill that promise of being able to seamlessly connect your data together. First, we need client libraries not only for React and JavaScript, but also for native iOS and Android. Then, we must bring server-side support for GraphQL queries, mutations, and most recently subscriptions to every server technology and make those servers easier to write. And finally, we want not only all of the tools that people are used to with REST APIs, but many more thanks to all of the capabilities enabled by GraphQL.
In this talk, we'll go over all of the parts of a GraphQL-oriented app architecture, and how different GraphQL and Apollo technologies come together to solve all of the parts of data loading and management for React developers.
Building Fullstack Graph Applications With Neo4j (Neo4j)
This document provides an overview of graph databases and algorithms using Neo4j. It discusses Neo4j's built-in graph algorithms for pathfinding, centrality, community detection, similarity and link prediction. It also covers Neo4j Streams for real-time graph processing and integrations with Kafka. Grandstack and Neo4j-GraphQL are presented as options for building GraphQL APIs on Neo4j.
Tutorial: Building a GraphQL API in PHP (Andrew Rota)
This document discusses building a GraphQL API in PHP. It provides an overview of GraphQL concepts like queries, fields, types and schemas. It then outlines the steps to build a GraphQL API in PHP using the graphql-php library:
1. Define object types and the Query root type in the schema
2. Initialize the GraphQL schema instance
3. Execute GraphQL queries against the schema and return the result
By following these steps, one can build an API for querying a database of PHP conferences and speakers to understand how to build GraphQL APIs in PHP.
Recent presentation on deeplearning4j's new features, as well as some underused features of the AI framework like Arbiter, DataVec's transform process, and libnd4j.
GraphQL and Neo4j - Simple and Intelligent Modern Apps (Neo4j)
This document discusses using GraphQL and Neo4j together for building modern applications. It notes that the Neo4j GraphQL library allows for low-code, secure and flexible querying of graph data from Neo4j using GraphQL. An example is provided showing how a GraphQL query is translated to a Cypher query to retrieve movie title and runtime data from Neo4j.
Neo4j APOC is a blessing for developers. It provides many predefined procedures and user-defined functions/views that are easy to use and improve productivity in a very simple manner. APOC stands for "Awesome Procedures On Cypher". APOC is a library of procedures for various areas. It was introduced with Neo4j 3.0 and currently contains 250+ procedures.
How easy (or hard) it is to monitor your GraphQL service performance (Red Hat)
- GraphQL performance monitoring can be challenging as queries can vary significantly even when requesting the same data. Traditional endpoint monitoring provides little insight.
- Distributed tracing using OpenTracing allows tracing queries to monitor performance at the resolver level. Tools like Jaeger and plugins for Apollo Server and other GraphQL servers can integrate tracing.
- A demo showed using the Apollo OpenTracing plugin to trace a query through an Apollo server and resolver to an external API. The trace data was sent to Jaeger for analysis to help debug performance issues.
4-year chronicles of ALLSTOCKER (a trading platform for used construction equipment and machinery). We describe how the system has evolved incrementally using Pharo Smalltalk.
GraphQL is a query language for APIs that allows clients to request specific data rather than entire resources. It addresses common problems with REST APIs like over-fetching data. Several GraphQL libraries exist for Java including graphql-java, graphql-java-kickstart, graphql-dgs-framework, and spring-graphql. These libraries differ in features like code generation, error handling, testing support, and available clients. Spring GraphQL is considered the most mature option as it integrates directly with Spring Boot and allows testing all code.
GraphQL is a query language for APIs, but what are the advantages, and how would one implement it in microservices/APIs? In this session, I will go through the basics of GraphQL, different aspects of GraphQL, and the architecture of such APIs, plus four different ways we can implement GraphQL for a Spring Boot microservice/API.
All About GRAND Stack: GraphQL, React, Apollo, and Neo4j (Mark Needham) - GreeceJS
In this presentation, we explore application development using the GRAND stack (GraphQL, React, Apollo, Neo4j) for building web applications backed by a graph database. We will review the components to build a simple web application, including how to build a React component, an introduction to JSX, an overview of GraphQL and why it is a game-changer for front-end development. We'll learn how to model, store, and query data in the Neo4j graph database using GraphQL to power our web application.
This week I had fun with the online meetup on similarity algorithms with Tomaz Bratanic. I came across a great post written by Adrien Sales showing how to analyse PostgreSQL metadata using Neo4j and learned a neat approach to ingesting data into Neo4j using Kafka Streams and GraphQL.
This document provides a list of React code samples and tutorials for intermediate React developers. It includes 10 React code samples that use tools like GraphQL, Flux, and Redux. It also provides step-by-step instructions for setting up sample projects that combine React with Node, D3, GraphQL, SQLite, and Angular 2. Additionally, the document defines key concepts like Flux, Redux, Relay and GraphQL and compares REST APIs to GraphQL.
This document summarizes and compares several GraphQL libraries for Java: graphql-java, graphql-java-kickstart, and dgs-framework. It discusses their features for defining schemas and types, handling data fetching and caching, performing mutations, handling errors, testing functionality, and code generation capabilities. Overall, dgs-framework requires the least amount of boilerplate code, supports testing and code generation more fully, and is designed specifically for use within Spring Boot applications.
Graphs & GraphRAG - Essential Ingredients for GenAI (Neo4j)
Knowledge graphs are emerging as useful and often necessary for bringing Enterprise GenAI projects from PoC into production. They make GenAI more dependable, transparent and secure across a wide variety of use cases. They are also helpful in GenAI application development: providing a human-navigable view of relevant knowledge that can be queried and visualised.
This talk will share up-to-date learnings from the evolving field of knowledge graphs; why more & more organisations are using knowledge graphs to achieve GenAI successes; and practical definitions, tools, and tips for getting started.
Discover how Neo4j-based GraphRAG and Generative AI empower organisations to deliver hyper-personalised customer experiences. Explore how graph-based knowledge empowers deep context understanding, AI-driven insights, and tailored recommendations to transform customer journeys.
Learn actionable strategies for leveraging Neo4j and Generative AI to revolutionise customer engagement and build lasting relationships.
GraphTalk New Zealand - The Art of The Possible (Neo4j)
Discover firsthand how organisations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimising supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
In this presentation, ANZ will be sharing their journey towards AI-enabled data management at scale. The session will explore how they are modernising their data architecture to support advanced analytics and decision-making. By leveraging a knowledge graph approach, they are enhancing data integration, governance, and discovery, breaking down silos to create a unified view across diverse data sources. This enables AI applications to access and contextualise information efficiently, and drive smarter, data-driven outcomes for the bank. They will also share lessons they are learning and key steps for successfully implementing a scalable, AI-ready data framework.
Google Cloud Presentation GraphSummit Melbourne 2024: Building Generative AI ... (Neo4j)
GenerativeAI is taking the world by storm while traditional ML maturity and successes continue to accelerate across AuNZ. Learn how Google is working with Neo4j to build an ML foundation for trusted, sustainable, and innovative use cases.
Telstra Presentation GraphSummit Melbourne: Optimising Business Outcomes with... (Neo4j)
This session will highlight how knowledge graphs can significantly enhance business outcomes by supporting the Data Mesh approach. We’ll discuss how knowledge graphs empower organisations to create and manage data products more effectively, enabling a more agile and adaptive data strategy. By leveraging knowledge graphs, businesses can better organise and connect their data assets, driving innovation and maximising the value derived from their data, ultimately leading to more informed decision-making and improved business performance.
Building Smarter GenAI Apps with Knowledge Graphs
While GenAI offers great potential, it faces challenges with hallucination and limited domain knowledge. Graph-powered retrieval augmented generation (GraphRAG) helps overcome these challenges by integrating vector search with knowledge graphs and data science techniques. This approach improves context, enhances semantic understanding, enables personalisation, and facilitates real-time updates.
In this workshop, you’ll explore detailed code examples to kickstart your journey with GenAI and graphs. You’ll leave with practical skills you can immediately apply to your own projects.
How Siemens bolstered supply chain resilience with graph-powered AI insights... (Neo4j)
In this captivating session, Siemens will reveal how Neo4j’s powerful graph database technology uncovers hidden data relationships, helping businesses reach new heights in IT excellence. Just as organizations often face unseen barriers, your business may be missing critical insights buried in your data. Discover how Siemens leverages Neo4j to enhance supply chain resilience, boost sustainability, and unlock the potential of AI-driven insights. This session will demonstrate how to navigate complexity, optimize decision-making, and stay ahead in a constantly evolving market.
Knowledge Graphs for AI-Ready Data and Enterprise Deployment - Gartner IT Sym... (Neo4j)
Knowledge graphs are emerging as useful and often necessary for bringing Enterprise GenAI projects from PoC into production. They make GenAI more dependable, transparent and secure across a wide variety of use cases. They are also helpful in GenAI application development: providing a human-navigable view of relevant knowledge that can be queried and visualised. This talk will share up-to-date learnings from the evolving field of knowledge graphs; why more & more organisations are using knowledge graphs to achieve GenAI successes; and practical definitions, tools, and tips for getting started.
6. What Is GraphQL?
GraphQL is an API query language and runtime for fulfilling those queries.
GraphQL uses a type system to define the data available in the API, including what entities and attributes (types and fields in GraphQL parlance) exist and how types are connected (the data graph).
GraphQL operations (queries, mutations, or subscriptions) specify an entry point and a traversal of the data graph (the selection set) which defines the fields to be returned by the operation.
graphql.org
7. GraphQL Concepts - Type Definitions
GraphQL type definitions define the data available in the API.
These type definitions are typically defined using the GraphQL Schema Definition Language (SDL), a language-agnostic way of expressing the types.
However, type definitions can also be defined programmatically.
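For example, here is a minimal SDL sketch for a movies data graph (the same Movie/Actor model used in the hands-on exercises; the @relationship directive is specific to the Neo4j GraphQL Library introduced later):

type Movie {
  title: String!
  actors: [Actor!]! @relationship(type: "ACTED_IN", direction: IN)
}

type Actor {
  name: String!
  movies: [Movie!]! @relationship(type: "ACTED_IN", direction: OUT)
}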
8. GraphQL Concepts - GraphQL Operations
Each GraphQL operation is either a Query, Mutation, or Subscription.
9. GraphQL Concepts - GraphQL Operations
Each GraphQL operation is either a Query, Mutation, or Subscription.
The fields of the Query, Mutation, and Subscription types define the entry points for an operation. Each operation starts at a field of one of these types, specifying the entry point and its arguments.
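For instance, in the query below, movies is the entry point (a field of the Query type) and options is an argument (the syntax matches the movies demo API used in the hands-on exercise):

query {
  # entry point & arguments
  movies(options: { limit: 10 }) {
    title
  }
}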
10. GraphQL Concepts - Selection Set
The selection set specifies the fields to be returned by a GraphQL operation.
It can be thought of as a traversal through the data graph.
11. GraphQL Concepts - Selection Set
The response to a GraphQL operation matches the shape of the selection set, returning only the data requested.
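For example, this query's selection set determines the exact shape of the JSON response (the data values shown are illustrative):

{
  movies(options: { limit: 1 }) {
    title
    actors {
      name
    }
  }
}

# Response: same shape, only the requested fields
{
  "data": {
    "movies": [
      { "title": "The Matrix", "actors": [{ "name": "Keanu Reeves" }] }
    ]
  }
}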
12. GraphQL Concepts - Resolver Functions
GraphQL resolvers are the functions responsible for actually fulfilling the GraphQL operation. In the context of a query, this means fetching data from a data layer.
NOTE: The Neo4j GraphQL Library auto-generates resolver functions for us, but this is an important GraphQL concept to understand.
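A minimal sketch of hand-written resolvers in JavaScript (the movieDb data-access object is hypothetical; with the Neo4j GraphQL Library you would not write these yourself):

const resolvers = {
  Query: {
    // Entry-point resolver: fetches movies from the data layer
    movies: async (_parent, args, context) => {
      return context.movieDb.findMovies({ limit: args.options?.limit });
    },
  },
  Movie: {
    // Field resolver: fetches the actors for a given movie
    actors: async (movie, _args, context) => {
      return context.movieDb.findActorsByMovie(movie.title);
    },
  },
};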
13. Benefits Of GraphQL
● Avoids overfetching - sending less data over the wire
● Avoids underfetching - everything the client needs in a single request
● The GraphQL specification defines exactly what GraphQL is
● Simplified data fetching with component-based data interactions
● "Graphs all the way down" - GraphQL can help unify disparate systems and focus API interactions on relationships instead of resources.
● Developer productivity - by reasoning about application data as a graph with a strict type system, developers can focus on building applications.
14. GraphQL Challenges
● Some well-understood practices from REST don't apply
○ HTTP status codes
○ Error handling
○ Caching
● Exposing arbitrary complexity to the client and performance considerations
● The n+1 query problem - the nested nature of GraphQL operations can lead to multiple requests to the data layer(s) to resolve a request
● Query costing and rate limiting
Best practices and tooling have emerged to address all of the above; however, it's important to be aware of these challenges.
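As one example, the n+1 problem is commonly addressed by batching data-layer requests with a DataLoader. A minimal sketch, assuming the dataloader npm package and a hypothetical movieDb.findActorsByMovieTitles batch function:

const DataLoader = require("dataloader");

// Batches all actor lookups made while resolving one request into a
// single data-layer call, instead of one call per movie (n+1).
const actorLoader = new DataLoader(async (movieTitles) => {
  const actorsByTitle = await movieDb.findActorsByMovieTitles(movieTitles);
  // DataLoader expects results in the same order as the input keys
  return movieTitles.map((title) => actorsByTitle[title] ?? []);
});

// In a field resolver: return actorLoader.load(movie.title);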
15. GraphQL Tooling - GraphQL Playground
GraphQL Playground is an in-browser tool for querying and exploring GraphQL APIs.
View API documentation using GraphQL's introspection feature.
16. GraphQL Tooling - GraphQL Playground
Open movies.neo4j-graphql.com
● Explore the "Docs" tab to learn more about the API schema
● Run these GraphQL queries:
Hands On
Exercise
{
movies(options: { limit: 10 }) {
title
actors {
name
}
}
}
{
directors(where: {name:"Robert Redford"}) {
name
directed {
title
plot
}
}
}
● Try modifying the query selection set to return additional fields
○ Try using ctrl+space for auto-complete
○ What can you find?
18. Neo4j Aura Free Tier Setup
Let's create a Neo4j Aura Free instance that we'll use for the rest of the workshop...
Hands-On Exercise
Step 1: Sign in to Neo4j Aura: dev.neo4j.com/aura-login and select the "Create a new database" button.
Step 2: Choose the "Free" tier, enter a name for your Neo4j Aura instance, and select "Create database". Be sure to take note of the generated password!
Step 3: It will take a few moments for your Neo4j Aura instance to be provisioned. Once your instance is online you'll see the connection string (neo4j+s://xxxxx.databases.neo4j.io).
20. The Neo4j GraphQL Library
For building Node.js GraphQL APIs with Neo4j.
The fundamental goal of the Neo4j GraphQL Library is to make it easier to
build GraphQL APIs backed by Neo4j.
21. Goals Of The Neo4j GraphQL Library
GraphQL First Development
GraphQL type definitions can drive the database data model, which means we
don’t need to maintain two separate schemas for our API and database.
22. Goals Of The Neo4j GraphQL Library
Auto-generate GraphQL API Operations
With the Neo4j GraphQL Library,
GraphQL type definitions provide the
starting point for a generated API that
includes:
● Query & Mutation types (an API
entrypoint for each type defined in
the schema)
● Ordering
● Pagination
● Complex filtering
● DateTime & Spatial types and
filtering
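As a sketch, for a type Book the generated API lets us write queries like the following (argument shapes vary slightly across library versions):
{
  books(
    where: { price_GT: 20 }
    options: { sort: [{ price: DESC }], limit: 5 }
  ) {
    title
    price
  }
}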
23. Goals Of The Neo4j GraphQL Library
Generate Cypher From GraphQL Operations
To reduce boilerplate and optimize for performance, the Neo4j GraphQL Library
automatically generates a single database query for any arbitrary GraphQL request.
This means the developer does not need to implement resolvers and each GraphQL
operation results in a single roundtrip to the database.
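Conceptually (the generated Cypher shown here is illustrative, not the library's exact output), a nested GraphQL query such as:
{
  books {
    title
    reviews {
      rating
    }
  }
}
is translated into a single Cypher statement along these lines, using a map projection and a pattern comprehension (the REVIEWS relationship type and direction are assumptions):
MATCH (this:Book)
RETURN this {
  .title,
  reviews: [ (this)<-[:REVIEWS]-(r:Review) | r { .rating } ]
} AS this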
24. Goals Of The Neo4j GraphQL Library
Extend GraphQL With Cypher
To add custom logic beyond CRUD operations, you can use the @cypher
GraphQL schema directive to add computed fields bound to a Cypher query to
the GraphQL schema.
27. Neo4j GraphQL Library Quickstart
Start GraphQL server:
This will start a local GraphQL API and will also serve the GraphQL
Playground IDE for querying the API or exploring documentation using
GraphQL’s introspection feature.
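A minimal server setup might look like the following sketch (assuming Apollo Server and the @neo4j/graphql package; the exact API differs between library versions - recent versions expose an async getSchema(), earlier versions a schema property):
const { Neo4jGraphQL } = require("@neo4j/graphql");
const { ApolloServer } = require("apollo-server");
const neo4j = require("neo4j-driver");

// GraphQL type definitions (typically loaded from schema.graphql)
const typeDefs = `
  type Book {
    title: String
  }
`;

// Connect to Neo4j using credentials from environment variables
const driver = neo4j.driver(
  process.env.NEO4J_URI,
  neo4j.auth.basic(process.env.NEO4J_USER, process.env.NEO4J_PASSWORD)
);

const neoSchema = new Neo4jGraphQL({ typeDefs, driver });

neoSchema.getSchema().then((schema) => {
  const server = new ApolloServer({ schema });
  server.listen().then(({ url }) => {
    console.log(`GraphQL server ready at ${url}`);
  });
});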
28. Building An Online Bookstore GraphQL API
For the rest of the workshop we will be building
an API for an online bookstore.
First, we need to define our data model.
The graph data modeling process:
1. Identify entities → Nodes
2. What are the attributes of these entities? → Properties
3. How are these entities connected? → Relationships
4. Can you traverse the graph to answer the business
requirements of your application?
29. Setting Up Our Environment
● Open this Codesandbox
● Add your Neo4j Aura connection details to the .env file (NEO4J_URI,
NEO4J_USER, & NEO4J_PASSWORD environment variables)
○ You will need to sign in to Codesandbox to save your updates
● In GraphQL Playground (running in Codesandbox), run the following GraphQL
query (you'll have an empty result set, but shouldn't see any errors):
Hands-On
Exercise
{
books {
title
}
}
30. Neo4j Aura Free Tier Setup
Let's create a Neo4j Aura Free instance that we'll use for the rest of the workshop and connect it to our GraphQL API in Codesandbox.
Hands-On Exercise
Step 1: Sign in to Neo4j Aura: dev.neo4j.com/neo4j-aura and select the "Create a new database" button.
Step 2: Choose the "Free" tier, enter a name for your Neo4j Aura instance, and select "Create database". Be sure to take note of the generated password!
Step 3: It will take a few moments for your Neo4j Aura instance to be provisioned. Once your instance is online you'll see the connection string (neo4j+s://xxxxx.databases.neo4j.io).
Step 4: Update the Codesandbox .env file with your Neo4j credentials.
31. Neo4j Sandbox Setup
If you have issues with Neo4j Aura you can also use Neo4j Sandbox
Hands-On Exercise
Step 1: Sign in to Neo4j Sandbox: dev.neo4j.com/sandbox
Step 2: Select "Blank Sandbox" and then select "Launch Project".
Step 3: Take note of your Neo4j Sandbox Bolt URL and password, then update the Codesandbox .env file with your Neo4j credentials.
33. Defining A Property Graph Model With GraphQL Schema Directives
The @relationship directive is used to define
relationships.
DateTime and Point scalar types are available and
map to the equivalent native Neo4j database types.
The @timestamp directive indicates the property
will be automatically set when the node is created
or updated.
The @id directive marks a field as a unique identifier
and enables auto-generation when the node is
created.
More on directives in the documentation.
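Putting these directives together, a sketch of part of the bookstore model might look like this (the relationship type names are assumptions):
type Order {
  orderID: ID! @id
  placedAt: DateTime @timestamp
  shipTo: Address @relationship(type: "SHIPS_TO", direction: OUT)
  books: [Book!]! @relationship(type: "CONTAINS", direction: OUT)
}
type Address {
  address: String
  location: Point
}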
34. Creating Data - Generated Mutations
mutation {
createBooks(
input: {
isbn: "1492047686"
title: "Graph Algorithms"
price: 37.48
description:
"Practical Examples in Apache Spark and Neo4j"
}
) {
books {
isbn
title
price
description
__typename
}
}
}
35. Creating Data - Generated Mutations
mutation {
createReviews(
input: {
rating: 5
text: "Best overview of graph data science!"
book: { connect: { where: { node: { title: "Graph Algorithms" } } } }
}
) {
reviews {
rating
text
createdAt
book {
title
}
}
}
}
39. Querying With GraphQL - Query Fields
By default, each type defined in the
GraphQL type definitions will have a
GraphQL Query field generated and
added to the Query type as the
pluralized name of the type (for
example the type Movie becomes a
Query field movies). Each query
field is an entry point into the
GraphQL API. Since GraphQL types
are mapped to node labels in
Neo4j, you can think of the Query
field as the starting point for a
traversal through the graph.
40. Querying With GraphQL - Query Fields
The response data matches
the shape of our GraphQL
query - as we add more fields
to the GraphQL selection set
those fields are included in the
response object.
41. Querying With GraphQL - Sorting & Pagination
A sorting input type is generated for each type in the GraphQL type definitions, allowing for Query results to be sorted by each field using the options field argument.
Offset-Based Pagination
Offset-based pagination is available by passing skip and limit values as part of the options argument. "Count queries" allow us to calculate the total number of pages.
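For example (a sketch; in recent library versions skip is named offset, and count fields such as booksCount are also generated):
{
  books(options: { sort: [{ price: DESC }], limit: 10, skip: 10 }) {
    title
    price
  }
}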
42. Querying With GraphQL - Sorting & Pagination
Cursor-based pagination can be used on relationship fields using Relay-style "Connection" types.
See the documentation for more details.
Cursor-Based Pagination
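A sketch of cursor-based pagination on a reviews relationship field (assuming a Book.reviews relationship):
{
  books {
    title
    reviewsConnection(first: 2) {
      edges {
        node {
          rating
        }
      }
      pageInfo {
        endCursor
        hasNextPage
      }
    }
  }
}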
43. Querying With GraphQL - Filtering
Query results can be filtered using
the where argument. Filter inputs
are generated for each field and
expose comparison operators
specific to the type of the field. For
example, for numeric fields filter
input operators include equality,
greater than (_GT), less than (_LT),
etc. String fields expose the
common string comparison
operators such as
_STARTS_WITH, _CONTAINS,
_ENDS_WITH, etc.
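For example:
{
  books(where: { price_GT: 20, title_CONTAINS: "Graph" }) {
    title
    price
  }
}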
44. Querying With GraphQL - Filtering (Nested)
We can also use the where
argument in nested selections
to filter relationships. Here we
are filtering for reviews
created after Jan 1, 2021
using the createdAt_GT filter
input on the createdAt
DateTime type, specifying the
date using the ISO format.
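A sketch of this nested filter:
{
  books {
    title
    reviews(where: { createdAt_GT: "2021-01-01T00:00:00Z" }) {
      rating
      text
      createdAt
    }
  }
}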
45. Querying With GraphQL - Geo Distance
For Point fields we can filter
results by the distance to
another point. Here we search
for addresses within 1km of a
specified point.
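A sketch of a distance filter (distance is in meters; the coordinates are illustrative):
{
  addresses(
    where: {
      location_LT: {
        point: { latitude: 37.5629, longitude: -122.3255 }
        distance: 1000
      }
    }
  ) {
    address
  }
}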
46. Querying With GraphQL - Filtering Using Relationships
Let’s look at an example that
applies filtering at the root of our
query, but using a relationship.
Let’s say we want to search for
all orders where the shipTo
address is within 1km of a
certain point. To do that we’ll use
the where argument at the root
of the query (in the orders Query
field), but use a nested input to
specify we want to filter using the
shipTo relationship and the
corresponding Address node.
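A sketch of this relationship filter:
{
  orders(
    where: {
      shipTo: {
        location_LT: {
          point: { latitude: 37.5629, longitude: -122.3255 }
          distance: 1000
        }
      }
    }
  ) {
    orderID
  }
}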
47. Exercise: Updating The GraphQL Schema
● Update schema.graphql adding Author and Subject types to our
GraphQL schema
● Once updated, write GraphQL mutations to add authors and subjects to
the graph:
Hands-On
Exercise
Title Author(s)
Inspired Marty Cagan
Ross Poldark Winston Graham
Graph Algorithms Mark Needham, Amy E. Hodler
Title Subject(s)
Inspired Product management, Design
Ross Poldark Historical fiction, Cornwall
Graph Algorithms Graph theory, Neo4j
If you get stuck you can find the solutions in the README.md file in this Codesandbox.
50. Adding Custom Logic To The GraphQL API
Custom Resolvers
● Implement field resolver
function with your custom logic
● Resolver function will be called
after initial data is fetched from
Neo4j
@cypher GraphQL Schema
Directive
● Add custom Cypher statements
to the GraphQL schema
● Single Cypher query is
generated / one round trip to
the database
51. Cypher GraphQL Schema Directive
Computed Scalar Field
With the @cypher schema directive in the Neo4j GraphQL Library we can add a field subTotal to our Order type
that includes the logic for traversing to the associated Book nodes and summing the price property value of each
book.
Here we use the extend type syntax of GraphQL SDL, but we could also add this field directly to the Order type
definition. The @cypher directive takes a single argument, statement, which is the Cypher statement to be
executed to resolve the field. This Cypher statement can reference the this variable, which refers to the currently
resolved node - in this case the currently resolved Order node.
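A sketch of this field (the CONTAINS relationship type is an assumption about the bookstore model):
extend type Order {
  subTotal: Float
    @cypher(statement: "MATCH (this)-[:CONTAINS]->(b:Book) RETURN sum(b.price)")
}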
52. Cypher GraphQL Schema Directive
Computed Scalar Field
We can now include the subTotal field in our selection set to execute the custom Cypher query...
53. Cypher GraphQL Schema Directive
Node & Object Fields
In addition to scalar fields we can also use @cypher directive fields on object and
object array fields with Cypher queries that return nodes or objects.
Let’s add a recommended field to the Customer type, returning books the customer
might be interested in purchasing based on their order history and the order history
of other customers in the graph.
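A sketch of this field (the relationship types and recommendation logic are assumptions):
extend type Customer {
  recommended: [Book]
    @cypher(
      statement: """
      MATCH (this)-[:PLACED]->(:Order)-[:CONTAINS]->(b:Book)
      MATCH (b)<-[:CONTAINS]-(:Order)<-[:PLACED]-(other:Customer)
      MATCH (other)-[:PLACED]->(:Order)-[:CONTAINS]->(rec:Book)
      WHERE NOT (this)-[:PLACED]->(:Order)-[:CONTAINS]->(rec)
      RETURN DISTINCT rec LIMIT 3
      """
    )
}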
54. Cypher GraphQL Schema Directive
Node & Object Fields
Now we can use this recommended
field on the Customer type. Since
recommended is an array of Book
objects we need to select the nested
fields we want to be returned - in this
case the title field.
55. Cypher GraphQL Schema Directive
Field Arguments → Cypher Parameters
Any field arguments declared on a GraphQL field with a Cypher directive are passed
through to the Cypher query as Cypher parameters. Let’s say we want the client to be
able to specify the number of recommendations returned. We’ll add a field argument limit
to the recommended field and reference that in our Cypher query as a Cypher parameter.
56. Cypher GraphQL Schema Directive
Field Arguments → Cypher Parameters
We set a default value of 3 for this
limit argument so that if the value
isn’t specified the limit Cypher
parameter will still be passed to the
Cypher query with a value of 3. The
client can now specify the number
of recommended books to return.
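A sketch of the field with its limit argument and default value, followed by a client query using it (relationship types are assumptions, as above):
extend type Customer {
  recommended(limit: Int = 3): [Book]
    @cypher(
      statement: """
      MATCH (this)-[:PLACED]->(:Order)-[:CONTAINS]->(b:Book)
      MATCH (b)<-[:CONTAINS]-(:Order)<-[:PLACED]-(other:Customer)
      MATCH (other)-[:PLACED]->(:Order)-[:CONTAINS]->(rec:Book)
      RETURN DISTINCT rec LIMIT $limit
      """
    )
}
{
  customers {
    username
    recommended(limit: 5) {
      title
    }
  }
}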
57. Cypher GraphQL Schema Directive
Node & Object Fields
We can also return a map from our Cypher query when using the @cypher directive on an object or
object array GraphQL field. This is useful when we have multiple computed values we want to return or
for returning data from an external data layer.
Let’s add weather data for the order addresses so
our delivery drivers know what sort of conditions to
expect. We’ll query an external API to fetch this data
using the apoc.load.json procedure.
First, we’ll add a type to the GraphQL type
definitions to represent this object (Weather), then
we’ll use the apoc.load.json procedure to fetch data
from an external API and return the current
conditions, returning a map from our Cypher query
that matches the shape of the Weather type.
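A sketch (weather.example.com is a placeholder URL, and the value.* fields depend on the shape of the external API's response):
type Weather {
  temperature: Float
  conditions: String
}
extend type Address {
  currentWeather: Weather
    @cypher(
      statement: """
      WITH 'https://weather.example.com/current?lat=' + this.location.latitude +
           '&lon=' + this.location.longitude AS url
      CALL apoc.load.json(url) YIELD value
      RETURN { temperature: value.temperature, conditions: value.conditions }
      """
    )
}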
58. Cypher GraphQL Schema Directive
Node & Object Fields
Now we can include the
currentWeather field on the
Address type in our GraphQL
queries.
59. Cypher GraphQL Schema Directive
Custom Query Fields
We can use the @cypher directive on Query fields to complement the auto-generated Query fields provided by the Neo4j GraphQL
Library. Perhaps we want to leverage a full-text index for fuzzy matching for book searches?
First, in Neo4j Browser, create the full-text index:
CALL db.index.fulltext.createNodeIndex("bookIndex", ["Book"],["title", "description"])
In Cypher we would search using the index like this:
CALL db.index.fulltext.queryNodes("bookIndex", "garph~")
60. Cypher GraphQL Schema Directive
Custom Query Fields
To take advantage of the full-text index in our GraphQL API, add a bookSearch field to the
Query type in our GraphQL type definitions, which requires a searchString argument that
becomes the full-text search term.
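A sketch of this Query field, passing the searchString argument through as the $searchString Cypher parameter:
type Query {
  bookSearch(searchString: String!): [Book]
    @cypher(
      statement: """
      CALL db.index.fulltext.queryNodes('bookIndex', $searchString)
      YIELD node
      RETURN node
      """
    )
}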
61. Cypher GraphQL Schema Directive
Custom Query Fields
And we now have a new entry point to our GraphQL API allowing for
full-text search of book titles and descriptions.
62. Cypher GraphQL Schema Directive
Custom Mutation Fields
Similar to adding Query fields, we can use @cypher schema directives to add new
Mutation fields. This is useful in cases where we have specific logic we’d like to take into
account when creating or updating data. Here we make use of the MERGE Cypher
clause to avoid creating duplicate Subject nodes and connecting them to books.
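A sketch of such a Mutation field (the field name, arguments, and ABOUT relationship type are assumptions):
type Mutation {
  mergeBookSubjects(subject: String!, bookTitles: [String!]!): Subject
    @cypher(
      statement: """
      MERGE (s:Subject {name: $subject})
      WITH s
      UNWIND $bookTitles AS bookTitle
      MATCH (b:Book {title: bookTitle})
      MERGE (b)-[:ABOUT]->(s)
      RETURN s
      """
    )
}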
64. Cypher GraphQL Schema Directive
Custom Resolvers
Combining Cypher and GraphQL is extremely powerful, however there are bound to be cases where we want to add custom logic in code by implementing resolver
functions. This might be where we want to fetch data from another database, API, or system. Let's consider a contrived example where we compute an
estimated delivery date using a custom resolver function.
First, we add an estimatedDelivery field to the Order type, including the @ignore directive, which indicates we plan to resolve this field manually and it will not be included in
the generated database queries.
Now it's time to implement our Order.estimatedDelivery resolver function. Our function simply calculates a random date, but the point is that this can be any custom logic we
choose to define.
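A sketch of both pieces - the @ignore field in the type definitions and a matching resolver map (the random-date logic is the contrived example described above):
extend type Order {
  estimatedDelivery: DateTime @ignore
}
const resolvers = {
  Order: {
    // Contrived example: estimate delivery as a random day within the next week.
    // Any custom logic (another database, an external API, etc.) could live here.
    estimatedDelivery: (parent, args, context, info) => {
      const date = new Date();
      date.setDate(date.getDate() + Math.ceil(Math.random() * 7));
      return date.toISOString();
    },
  },
};
// The resolver map is passed to the library alongside the type definitions,
// for example: new Neo4jGraphQL({ typeDefs, resolvers, driver })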
65. Cypher GraphQL Schema Directive
Custom Resolvers
And now we can reference the estimatedDelivery field in our GraphQL queries.
When this field is included in the selection instead of trying to fetch this field from
the database, our custom resolver will be executed.
66. Exercise: Cypher Schema Directive
● The similar field on the Book type returns recommended
books.
● How could you modify and improve this Cypher query to find
similar books?
Hands-On
Exercise
68. The @auth Directive
The Neo4j GraphQL Library provides an @auth GraphQL schema directive
that enables us to attach authorization rules to our GraphQL type definitions.
The @auth directive uses JSON Web Tokens (JWTs) for authentication.
Authenticated requests to the GraphQL API will include an authorization
header with a Bearer token attached. For example:
POST / HTTP/1.1
authorization: Bearer
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MD
IyLCJyb2xlcyI6WyJ1c2VyX2FkbWluIiwicG9zdF9hZG1pbiIsImdyb3VwX2FkbWluIl19.IY0LWqgHcjEtOsOw60mqKazhuRFKroSXFQkp
CtWpgQI
content-type: application/json
69. JSON Web Token (JWT)
JWTs are a standard for
representing and
cryptographically verifying claims
(a JSON payload) securely and
are commonly used for
authentication and authorization.
70. The @auth Directive
isAuthenticated
The isAuthenticated rule is the simplest authorization rule we can add. It means that any GraphQL
operation that accesses a field or object protected by the isAuthenticated rule must have a valid JWT
in the request header.
Let’s make use of the isAuthenticated authorization rule in our bookstore GraphQL API to protect the
Subject type. Let’s say we want to make returning a book’s subjects a "premium" feature to
encourage users to sign up for our application. To do this we'll make the following addition to our
GraphQL type definitions, extending the Subject type:
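A sketch of that addition:
extend type Subject @auth(rules: [{ isAuthenticated: true }])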
72. The @auth Directive
Roles
Roles are the next type of authorization rule that we will explore. A JWT
payload can include an array of "roles" that describe the permissions
associated with the token.
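For illustration, a decoded JWT payload carrying a roles claim might look like this (values are illustrative):
{
  "sub": "BobLoblaw7687",
  "roles": ["admin"],
  "iat": 1516239022
}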
73. The @auth Directive
Allow
A customer must not be able to view orders placed by other customers.
Adding an Allow rule will allow us to protect orders from other nosy customers.
Here we add a rule to the Order type that a customer’s "sub" (the subject)
claim in the JWT must match the username of the customer who placed the
order.
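A sketch of this rule (assuming Order has a customer relationship field to a Customer type with a username property):
extend type Order
  @auth(rules: [{ allow: { customer: { username: "$jwt.sub" } } }])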
75. The @auth Directive
Allow
Of course we will also allow admins to have access to orders, so let's update
the rule to also grant access to any request with the "admin" role.
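Rules in the rules array are evaluated as a logical OR, so a sketch of the updated directive:
extend type Order
  @auth(
    rules: [
      { allow: { customer: { username: "$jwt.sub" } } }
      { roles: ["admin"] }
    ]
  )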
76. The @auth Directive
Where
In the previous example the client was required to filter for orders that the customer had placed. We don’t always
want to expect the client to include this filtering logic in the GraphQL query. In some cases we simply want to
return whatever data the currently authenticated user has access to. For these cases we can use a Where
authorization rule to apply a filter to the generated database queries - ensuring only the data the user has
access to is returned.
We want a user to only be able to view their own customer information. Here we add a rule to the Customer type
that will apply a filter any time the customer type is accessed that filters for the currently authenticated customer
by adding a predicate that matches the username property to the sub claim in the JWT.
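A sketch of this rule:
extend type Customer
  @auth(rules: [{ where: { username: "$jwt.sub" } }])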
77. The @auth Directive
Where
Note that our query doesn’t specify which customer to return - we’re requesting all customers - but we only get back
the customer that we have access to.
78. The @auth Directive
Bind
Bind allows us to specify connections that must exist in the graph when creating or updating
data based on claims in the JWT.
We want to add a rule that when creating a review, the review node is connected to the
currently authenticated customer - we don’t want customers to be writing reviews on behalf
of other users! This rule means the username of the author of a review must match the sub
claim in the JWT when creating or updating reviews.
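A sketch of this rule (assuming Review has an author relationship field to Customer):
extend type Review
  @auth(
    rules: [
      {
        operations: [CREATE, UPDATE]
        bind: { author: { username: "$jwt.sub" } }
      }
    ]
  )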
79. The @auth Directive
Bind
If a customer tries to create a review and connect it to a customer other than
themselves the mutation will return an error.
80. The @auth Directive
@cypher Directive Fields
There are two ways to make use of authorization features when using the
@cypher schema directive:
1) Apply the authorization rules isAuthenticated and roles using the @auth
directive.
2) Reference the JWT payload values in the Cypher query attached to a
@cypher schema directive.
Let’s make use of both of those aspects by adding a Query field that returns
personalized recommendations for a customer!
81. The @auth Directive
@cypher Directive Fields
In our Cypher query we’ll have access to a $auth.jwt parameter that represents the payload of the
JWT. We’ll use that value to look up the currently authenticated customer by username, then traverse
the graph to find relevant recommendations based on their purchase history. We’ll also include the
isAuthenticated rule since we only want authenticated customers to use this Query field.
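A sketch of such a Query field (the field name and the recommendation Cypher are illustrative; relationship types are assumptions as before):
type Query {
  booksForCurrentUser: [Book]
    @cypher(
      statement: """
      MATCH (c:Customer {username: $auth.jwt.sub})
      MATCH (c)-[:PLACED]->(:Order)-[:CONTAINS]->(b:Book)
      MATCH (b)<-[:CONTAINS]-(:Order)<-[:PLACED]-(:Customer)-[:PLACED]->(:Order)-[:CONTAINS]->(rec:Book)
      RETURN DISTINCT rec LIMIT 3
      """
    )
    @auth(rules: [{ isAuthenticated: true }])
}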
83. Exercise: Authorization
● Open this Codesandbox which includes the authorization rules defined
above
● Using this admin token create a new user and an order for this user
(be sure to include at least one book in the order!):
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJCb2JMb2JsYXc3Njg3Iiwicm9sZXMiOlsiYWRtaW4iXSwiaWF0IjoxNTE2MjM5MDIyf
Q.f2GKIu31gz39fMJwj5_byFCMDPDy3ncdWOIhhqcwBxk
● Generate a JWT token for your new user using jwt.io
○ Be sure to use this JWT secret when creating the token:
dFt8QaYykR6PauvxcyKVXKauxvQuWQTc
● Next, use this token to add a review for the book purchased by this
user.
● Finally, write a query to view the customer’s details, including their
order history and their reviews.
Hands-On
Exercise
84. Other
● Working with unions & interfaces
● The Neo4j GraphQL OGM
● Working with relationship properties
● ...