Development time is wasted as the bulk of the work shifts from adding business features to struggling with the RDBMS. MongoDB, the leading NoSQL database, offers a flexible and scalable solution.
Back to Basics Webinar 4: Advanced Indexing, Text and Geospatial Indexes | MongoDB
This is the fourth webinar of the Back to Basics series that introduces you to the MongoDB database. This webinar covers advanced indexing, including text and geospatial indexes.
MongoDB Java Development - MongoBoston 2010 | Eliot Horowitz
This document summarizes Java development options for MongoDB, including simple Java usage, the Morphia ORM, concurrency patterns, write concerns, data types, custom encoding/decoding, GridFS for file storage, and running MapReduce jobs on MongoDB data using Hadoop. Code examples are provided for common operations like inserting documents, querying with Morphia, and running a word count MapReduce job.
Webinar: Building Your First App with MongoDB and Java | MongoDB
The document discusses building Java applications that use MongoDB as the database. It covers connecting to MongoDB from Java using the driver, designing schemas for embedded documents and arrays, building Java objects to represent and insert data, and performing basic operations like inserts. The document also mentions using an object-document mapper like Morphia to simplify interactions between Java objects and MongoDB documents.
Benefits of Using MongoDB Over RDBMS (At An Evening with MongoDB Minneapolis ... | MongoDB
The document summarizes a presentation on MongoDB given in Minneapolis on March 5, 2015. The agenda included a quick overview of MongoDB, the benefits of using MongoDB over relational database management systems (RDBMSs), and updates in MongoDB version 3.0. The presentation compared development using SQL versus MongoDB over several days, showing that adding new fields and data structures such as lists was much simpler with MongoDB's flexible document-based data model than with the schema and code changes required by SQL and relational databases.
MongoDB is an open-source database. CRUD in MongoDB is not performed with SQL statements as in other databases; instead it is achieved with JSON-style (NoSQL) queries, which I have tried to explain here.
Webinar: Back to Basics: Thinking in Documents | MongoDB
New applications, users, and inputs demand new types of data, such as unstructured, semi-structured, and polymorphic data. Adopting MongoDB means adapting to a new, document-based data model.
While most developers have internalized the rules of thumb for designing schemas for relational databases, these rules don't apply to MongoDB. Documents can represent rich data structures, providing lots of viable alternatives to the standard, normalized, relational model. In addition, MongoDB has several unique features, such as atomic updates and indexed array keys, that greatly influence the kinds of schemas that make sense.
In this session, Buzz Moschetti explores how you can take advantage of MongoDB's document model to build modern applications.
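As a rough illustration of those document-model features (the collection and field names below are hypothetical, not from the session), an array field can be indexed directly and updated atomically in place with the mongo shell:

// Multikey index: each element of the tags array gets its own index key
db.articles.createIndex({ tags: 1 })
// Atomic, in-place update: add a tag and bump a revision counter in one operation
db.articles.updateOne(
  { title: "Schema design in MongoDB" },
  { $push: { tags: "mongodb" }, $inc: { revision: 1 } }
)
// Queries on individual array elements can use the multikey index
db.articles.find({ tags: "mongodb" })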
Webinar: Working with Graph Data in MongoDB | MongoDB
With the release of MongoDB 3.4, the number of applications that can take advantage of MongoDB has expanded. In this session we will look at using MongoDB for representing graphs and how graph relationships can be modeled in MongoDB.
We will also look at a new aggregation operation that we recently implemented for graph traversal and computing transitive closure. We will include an overview of the new operator and provide examples of how you can exploit this new feature in your MongoDB applications.
This document discusses various indexing strategies in MongoDB to help scale applications. It covers the basics of indexes, including creating and tuning indexes. It also discusses different index types like geospatial indexes, text indexes, and how to use explain plans and profiling to evaluate queries. The document concludes with a section on scaling strategies like sharding to scale beyond a single server's resources.
Ms. Chitra Alavani, Head of the Computer Science department at Kaveri College of Arts, Science and Commerce, provides an overview of key concepts for working with MongoDB including creating databases and collections, inserting and querying documents, and performing aggregation.
Database Trends for Modern Applications: Why the Database You Choose Matters | MongoDB
Matt Kalan, Senior Solutions Architect, MongoDB
Matt will explain how modern technology requirements have changed the requirements of the database. In order to handle agile development, big data, cloud, APIs, continuous availability, and unlimited scale while lowering costs, new capabilities are required. Do you need to tolerate the impedance mismatch between an object model and the relational model, or is there another way? We will walk through the application development process, to the code level, to compare using an RDBMS with MongoDB.
Beyond the Basics 2: Aggregation Framework | MongoDB
The aggregation framework is one of the most powerful analytical tools available with MongoDB.
Learn how to create a pipeline of operations that can reshape and transform your data and apply a range of analytics functions and calculations to produce summary results across a data set.
MongoDB + Java - Everything you need to know | Norberto Leite
Learn everything you need to know to get started building a MongoDB-based app in Java. We'll explore the relationship between MongoDB and various languages on the Java Virtual Machine such as Java, Scala, and Clojure. From there, we'll examine the popular frameworks and integration points between MongoDB and the JVM including Spring Data and object-document mappers like Morphia.
Back to Basics: My First MongoDB Application | MongoDB
- The document is a slide deck for a webinar on building a basic blogging application using MongoDB.
- It covers MongoDB concepts like documents, collections and indexes. It then demonstrates how to install MongoDB, connect to it using the mongo shell, and insert documents.
- The slide deck proceeds to model a basic blogging application using MongoDB, creating collections for users, articles and comments. It shows how to query, update, and import large amounts of seeded data.
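For orientation, shell commands for that kind of blog model look roughly like this (a minimal sketch; the field names are illustrative rather than taken from the slides):

db.users.insertOne({ _id: "fred", name: "Fred Jones", joined: new Date() })
db.articles.insertOne({
  title: "Hello MongoDB",
  author: "fred",
  body: "First post",
  tags: ["intro", "mongodb"],
  comments: []                      // comments embedded in the article document
})
// Query articles by author, newest first
db.articles.find({ author: "fred" }).sort({ _id: -1 })
// Add a comment by pushing into the embedded array
db.articles.updateOne(
  { title: "Hello MongoDB" },
  { $push: { comments: { who: "alice", text: "Nice post!", at: new Date() } } }
)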
MongoDB Europe 2016 - Graph Operations with MongoDB | MongoDB
The popularity of dedicated graph technologies has risen greatly in recent years, at least partly fuelled by the explosion in social media and similar systems, where a friend network or recommendation engine is often a critical component when delivering a successful application. MongoDB 3.4 introduces a new Aggregation Framework graph operator, $graphLookup, to enable some of these types of use cases to be built easily on top of MongoDB. We will see how semantic relationships can be modelled inside MongoDB today, how the new $graphLookup operator can help simplify this in 3.4, and how $graphLookup can be used to leverage these relationships and build a commercially focused news article recommendation system.
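A minimal sketch of the $graphLookup stage described above, assuming a hypothetical people collection in which each document lists the user names it follows:

db.people.insertMany([
  { _id: "ann",  follows: ["bob"] },
  { _id: "bob",  follows: ["carl"] },
  { _id: "carl", follows: [] }
])
db.people.aggregate([
  { $match: { _id: "ann" } },
  { $graphLookup: {
      from: "people",               // collection to traverse
      startWith: "$follows",        // initial set of edges to follow
      connectFromField: "follows",  // field followed on each hop
      connectToField: "_id",        // field the edges point to
      as: "network",                // array of all reachable documents
      maxDepth: 3
  } }
])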
Back to Basics Webinar 2: Your First MongoDB Application | MongoDB
The document provides instructions for installing and using MongoDB to build a simple blogging application. It demonstrates how to install MongoDB, connect to it using the mongo shell, insert and query sample data like users and blog posts, update documents to add comments, and more. The goal is to illustrate how to model and interact with data in MongoDB for a basic blogging use case.
MongoDB .local Chicago 2019: Practical Data Modeling for MongoDB: Tutorial | MongoDB
For 30 years, developers have been taught that relational data modeling was THE way to model, but as more companies adopt MongoDB as their data platform, the approaches that work well in relational design actually work against you in a document model design. In this talk, we will discuss how to conceptually approach modeling data with MongoDB, focusing on practical foundational techniques, paired with tips and tricks, and wrapping with discussing design patterns to solve common real world problems.
The document discusses Mongo-Hadoop integration and provides examples of using the Mongo-Hadoop connector to run MapReduce jobs on data stored in MongoDB. It covers loading and writing data to MongoDB from Hadoop, using Java MapReduce, Hadoop Streaming with Python, and analyzing data with Pig and Hive. Examples show processing an email corpus to build a graph of sender-recipient relationships and message counts.
Back to Basics Webinar 5: Introduction to the Aggregation Framework | MongoDB
The document provides information about an upcoming webinar on the MongoDB aggregation framework. Key details include:
- The webinar will introduce the aggregation framework and provide an overview of its capabilities for analytics.
- Examples will use a real-world vehicle testing dataset to demonstrate aggregation pipeline stages like $match, $project, and $group.
- Attendees will learn how the aggregation framework provides a simpler way to perform analytics compared to other tools like Spark and Hadoop.
Back to Basics Webinar 1: Introduction to NoSQL | MongoDB
This document provides an overview of an introduction to NoSQL webinar. It discusses why NoSQL databases were created, the different types of NoSQL databases including key-value stores, column stores, graph stores, multi-model databases and document stores. It provides details on MongoDB, describing how MongoDB stores data as JSON-like documents with dynamic schemas and supports features like indexing, aggregation and geospatial queries. The webinar agenda is also outlined.
Indexes are references to documents that are efficiently ordered by key and maintained in a tree structure for fast lookup. They improve the speed of document retrieval, range scanning, ordering, and other operations by enabling the use of the index instead of a collection scan. While indexes improve query performance, they can slow down document inserts and updates since the indexes also need to be maintained. The query optimizer aims to select the best index for each query but can sometimes be overridden.
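In shell terms, the basics behind that description look roughly like this (illustrative collection and field names):

// Build a compound index to support the query and its sort
db.orders.createIndex({ customerId: 1, orderDate: -1 })
// explain() shows whether the plan is an index scan (IXSCAN) or a collection scan (COLLSCAN)
db.orders.find({ customerId: 42 }).sort({ orderDate: -1 }).explain("executionStats")
// hint() overrides the query optimizer's index choice when needed
db.orders.find({ customerId: 42 }).hint({ customerId: 1, orderDate: -1 })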
Back to Basics Webinar 3: Schema Design Thinking in Documents | MongoDB
This is the third webinar of a Back to Basics series that will introduce you to the MongoDB database. This webinar will explain the architecture of document databases.
Conceptos básicos. Seminario web 4: Indexación avanzada, índices de texto y g... | MongoDB
This is the fourth webinar in the Back to Basics (Conceptos básicos) series, which introduces the MongoDB database. This webinar looks at support for free-text and geospatial indexes.
Engineers often ask "how do I know if I should build my application on MongoDB?" IT executives ask a similar question, "which applications in my application portfolio should I migrate to MongoDB?" This presentation will present a framework for answering these questions.
We will cover two sets of criteria: (1) how to determine when to migrate a legacy application to MongoDB, and (2) when MongoDB should be used for new applications. The presentation will also include a brief introduction to MongoDB to provide enough technical background for analyzing when to use it.
Learning Objectives:
The basics of the MongoDB document model, query capabilities, and architecture required for analyzing when to use MongoDB
Criteria for determining when to use MongoDB to re-platform legacy applications
Criteria for determining when to use MongoDB for new applications
MongoDB Days Silicon Valley: Winning the Dreamforce Hackathon with MongoDB | MongoDB
Presented by Greg Deeds, CEO, Technology Exploration Group
Experience level: Introductory
A two-person team using MongoDB and Salesforce.com built a geospatial machine learning tool in 24 hours, combining multiple datasets with parsing, indexing, and MapReduce. Greg Deeds, designer of the hack that beat 350 teams from around the world, will speak on getting to the winners' circle with MongoDB, which proved to be the team's secret weapon and leveled the playing field for the win.
Reducing Development Time with MongoDB vs. SQL | MongoDB
Buzz Moschetti compares the development time and effort required to save and fetch contact data using MongoDB versus SQL over the course of two weeks. With SQL, each time a new field is added or the data structure changes, the SQL schema must be altered and code updated in multiple places. With MongoDB, the data structure can evolve freely without changes to the data access code - it remains a simple insert and find. By day 14, representing the more complex data structure in SQL would require flattening some data and storing it in non-ideal ways, while MongoDB continues to require no changes to the simple data access code.
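A small sketch of that point (hypothetical contact documents): as the structure grows, the MongoDB access code stays an insert and a find:

// Day 1: a simple, flat contact
db.contacts.insertOne({ name: "John Smith", phone: "123-456-7890" })
// Day 14: a richer structure, with no schema migration and no change to the access pattern
db.contacts.insertOne({
  name: "Jane Doe",
  phones: { home: "123-456-7890", mobile: "123-456-8138" },
  addresses: [ { type: "work", street: "10 3rd St." } ]
})
db.contacts.find({ name: "Jane Doe" })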
Conceptos básicos. Seminario web 5: Introducción a Aggregation Framework | MongoDB
This is the fifth webinar in the Back to Basics (Conceptos básicos) series, which introduces the MongoDB database. This webinar reviews the basics of the Aggregation Framework.
Restkit is an Objective-C framework that allows iOS and OSX applications to easily connect to RESTful web services. It provides object mapping between JSON/XML responses and Objective-C objects, and allows fetching, posting, and updating data from a REST API. Restkit handles mapping between network requests and responses to Core Data or plain objects.
Webinar: 10-Step Guide to Creating a Single View of your Business | MongoDB
Organizations have long seen the value in aggregating data from multiple systems into a single, holistic, real-time representation of a business entity. That entity is often a customer. But the benefits of a single view in enhancing business visibility and operational intelligence can apply equally to other business contexts. Think products, supply chains, industrial machinery, cities, financial asset classes, and many more.
However, for many organizations, delivering a single view to the business has been elusive, impeded by a combination of technology and governance limitations.
MongoDB has been used in many single view projects across enterprises of all sizes and industries. In this session, we will share the best practices we have observed and institutionalized over the years. By attending the webinar, you will learn:
- A repeatable, 10-step methodology to successfully delivering a single view
- The required technology capabilities and tools to accelerate project delivery
- Case studies from customers who have built transformational single view applications on MongoDB.
Webinar: Introducing the MongoDB Connector for BI 2.0 with Tableau | MongoDB
Pairing your real-time operational data stored in a modern database like MongoDB with first-class business intelligence platforms like Tableau enables new insights to be discovered faster than ever before.
Many leading organizations already use MongoDB in conjunction with Tableau including a top American investment bank and the world’s largest airline. With the Connector for BI 2.0, it’s never been easier to streamline the connection process between these two systems.
In this webinar, we will create a live connection from Tableau Desktop to a MongoDB cluster using the Connector for BI. Once we have Tableau Desktop and MongoDB connected, we will demonstrate the visual power of Tableau to explore the agile data storage of MongoDB.
You’ll walk away knowing:
- How to configure MongoDB with Tableau using the updated connector
- Best practices for working with documents in a BI environment
- How leading companies are using big data visualization strategies to transform their businesses
How Auto Trader enables the UK's largest digital automotive marketplace | MongoDB
In what is often cited as one of the most successful digital transformations in the UK, Russell Warman talked through how new ways of working, values, and technology are helping Auto Trader enable the UK's largest digital automotive marketplace and become the UK's most admired digital business.
Back to Basics 2017: Introduction to Sharding | MongoDB
Sharding is a method for distributing data across multiple machines. MongoDB uses sharding to support deployments with very large data sets and high throughput operations by providing the capability for horizontal scaling.
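The shell commands for turning sharding on look roughly like this (database name, collection, and shard key are hypothetical):

// Run against a mongos router of a sharded cluster
sh.enableSharding("blog")                                   // allow the database to be sharded
use blog
db.articles.createIndex({ author: 1, _id: 1 })              // the shard key must be indexed
sh.shardCollection("blog.articles", { author: 1, _id: 1 })  // distribute documents by this key
sh.status()                                                 // inspect shard and chunk distribution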
Creating a Modern Data Architecture for Digital Transformation | MongoDB
By managing Data in Motion, Data at Rest, and Data in Use differently, modern Information Management Solutions are enabling a whole range of architecture and design patterns that allow enterprises to fully harness the value in data flowing through their systems. In this session we explored some of the patterns (e.g. operational data lakes, CQRS, microservices and containerisation) that enable CIOs, CDOs and senior architects to tame the data challenge, and start to use data as a cross-enterprise asset.
Organisations are building their applications around microservice architectures because of the flexibility, speed of delivery, and maintainability they deliver. In this session, the concepts behind microservices, containers, and orchestration are explained, along with how to use them with MongoDB.
The importance of efficient data management for Digital Transformation | MongoDB
Digital transformation involves profoundly transforming business activities, processes, competencies, and models to leverage changes from digital technologies strategically. It requires new capabilities and data management maturity. There are three areas of data management: data in motion which involves transferring data between systems; data at rest which refers to how data is stored; and data in use which is about extracting, transforming and analyzing data. A modern data platform uses cloud native technologies to manage data in real-time across all three areas at massive scales.
Back to Basics Webinar 3: Introduction to Replica Sets | MongoDB
This document provides an introduction to MongoDB replica sets, which allow for data redundancy and high availability. It discusses how replica sets work, including the replica set life cycle and how applications should handle writes and queries when using a replica set. Specifically, it explains that the MongoDB driver is responsible for server discovery and monitoring, retry logic, and handling topology changes in a replica set to provide a consistent view of the data to applications.
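For example, an application typically supplies only a connection string; the driver then discovers and monitors the members and retries writes where appropriate (host names and replica set name below are placeholders):

// mongosh "mongodb://host1:27017,host2:27017,host3:27017/blog?replicaSet=rs0&retryWrites=true"
// Writes always go to the primary; after a failover the driver re-discovers the new primary
db.articles.insertOne({ title: "written to the primary" })
// Reads can optionally be routed to secondaries with a read preference
db.articles.find().readPref("secondaryPreferred")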
MongoDB is an open-source, document-oriented database that provides high performance and horizontal scalability. It uses a document-model where data is organized in flexible, JSON-like documents rather than rigidly defined rows and tables. Documents can contain multiple types of nested objects and arrays. MongoDB is best suited for applications that need to store large amounts of unstructured or semi-structured data and benefit from horizontal scalability and high performance.
Seminario Web MongoDB-Paradigma: Cree aplicaciones más escalables utilizando ... | MongoDB
Microservices architectures have been adopted very quickly because of their ability to provide modularity, scalability, and high availability.
In this recorded webinar, our experts, Rubén Terceño of MongoDB and Miguel Garrido of Paradigma Digital, explain how you can use microservices to:
Align the structures of your organization
Build applications faster
Make efficient use of your resources
The document introduces MongoDB as a scalable, high-performance, open source, schema-free, document-oriented database. It discusses MongoDB's philosophy of flexibility and scalability over relational semantics. The main features covered are document storage, querying, indexing, replication, MapReduce and auto-sharding. Concepts like collections, documents and cursors are mapped to relational database terms. Examples uses include data warehousing and debugging.
The document discusses MongoDB's Aggregation Framework, which allows users to perform ad-hoc queries and reshape data in MongoDB. It describes the key components of the aggregation pipeline including $match, $project, $group, $sort operators. It provides examples of how to filter, reshape, and summarize document data using the aggregation framework. The document also covers usage and limitations of aggregation as well as how it can be used to enable more flexible data analysis and reporting compared to MapReduce.
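A compact sketch of such a pipeline (the orders collection and its fields are hypothetical):

db.orders.aggregate([
  { $match: { status: "shipped", date: { $gte: ISODate("2016-01-01") } } },   // filter first
  { $group: { _id: "$customerId",                                             // group key
              total: { $sum: "$amount" },
              avg:   { $avg: "$amount" },
              count: { $sum: 1 } } },
  { $sort: { total: -1 } },                                                   // biggest spenders first
  { $project: { _id: 0, customerId: "$_id", total: 1, avg: 1, count: 1 } }    // reshape the output
])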
View all the MongoDB World 2016 Poster Sessions slides in one place!
Table of Contents:
1: BigData DB Infrastructure for Modeling the Fly Brain
2: Taming the WiredTiger Cache
3: Sharding with MongoDB 3.2 Kick the tires and pop the hood!
4: Scaling Proactive Anomaly Detection
5: MongoTx: Transactions with Sharding and Queries
6: MongoDB: It’s Not Too Late To Shard
7: DLIFLC usage of MongoDB
MongoDB Analytics: Learn Aggregation by Example - Exploratory Analytics and V... | MongoDB
This document discusses analyzing flight data using MongoDB aggregation. It provides examples of aggregation pipelines to group, match, project, sort, unwind and other stages. It explores questions about major carriers, airport cancellations, delays by distance and carrier. It also discusses visualizing route data and hub airports. Finally, it proposes a quiz on analyzing NYC flight data by importing data and performing queries on origins, cancellations, delays and weather impacts by month between the three major NYC airports.
This document outlines the topics covered in an Edureka course on MongoDB. The course contains 8 modules that cover MongoDB fundamentals, CRUD operations, schema design, administration, scaling, indexing and aggregation, application integration, and additional concepts and case studies. Each module contains multiple topics that will be taught through online instructor-led classes, recordings, quizzes, assignments, and support.
Back to Basics Webinar 1: Introduction to NoSQL | MongoDB
This is the first webinar of a Back to Basics series that will introduce you to the MongoDB database, what it is, why you would use it, and what you would use it for.
MongoDB Days UK: Building an Enterprise Data Fabric at Royal Bank of Scotland... | MongoDB
Presented by Michael Fulke, Development Team Lead, Royal Bank of Scotland
Experience level: Beginner
When addressing common investment banking use-cases, incumbent application architectures have proven themselves to be complex, difficult to maintain and expensive. Driven by the apparently competing pressures of cost and agility, RBS used MongoDB to build a common enterprise data fabric which is underpinning several core trading platforms. In this session, you will learn how RBS has successfully integrated MongoDB into a wider Java-based architecture, built with a strong open source bias.
Using MongoDB as a high performance graph database | Chris Clarke
This document provides a summary of a presentation on using MongoDB as a high performance graph database. The presentation discusses:
1) The speaker's company Talis Education using MongoDB for 8 months to store and query graph data, and implementing a software wrapper on top of MongoDB to improve scalability and performance.
2) Common graph data models using nodes and edges or resources and properties. Basic graph concepts like undirected vs directed graphs are explained.
3) Storing graph data as RDF triples, with the standard subject-predicate-object structure. Benefits of RDF such as reusing common schemas and easy data merging are highlighted.
4) SPARQL, the query language for RDF,
The document discusses a talk given by Mitch Pirtle about using PHP, Lithium, and MongoDB together. It provides background on Mitch and what he will cover, including an introduction to MongoDB and Lithium. Some key benefits of using Lithium with MongoDB are that Lithium does not force relational practices and encourages using MongoDB's flexible document model. The talk concludes with providing some resources for learning more about Lithium and PHP filters.
The document provides an overview of a webinar on transitioning from SQL to MongoDB. It introduces the presenter Buzz Moschetti and his background. It then discusses how developers currently spend their time integrating with different components and systems like databases, and how the mismatch between data at the business level versus the database level has been a long-standing problem. The document uses examples to show how MongoDB can help by allowing richer data structures and a more direct match between data in code and the database.
Rapid Development and Performance By Transitioning from RDBMSs to MongoDB
Modern day application requirements demand rich & dynamic data structures, fast response times, easy scaling, and low TCO to match the rapidly changing customer & business requirements plus the powerful programming languages used in today's software landscape.
Traditional approaches to solutions development with RDBMSs increasingly expose the gap between the modern development languages and the relational data model, and between scaling up vs. scaling horizontally on commodity hardware. Development time is wasted as the bulk of the work has shifted from adding business features to struggling with the RDBMSs.
MongoDB, the premier NoSQL database, offers a flexible and scalable solution to focus on quickly adding business value again.
In this session, we will provide:
- Overview of MongoDB's capabilities
- Code-level exploration of the MongoDB programming model and APIs and how they transform the way developers interact with a database
- Update of the exciting features in MongoDB 3.0
Data Modeling, Normalization, and De-Normalization | PostgresOpen 2019 | Dimi... | Citus Data
As a developer using PostgreSQL one of the most important tasks you have to deal with is modeling the database schema for your application. In order to achieve a solid design, it’s important to understand how the schema is then going to be used as well as the trade-offs it involves.
As Fred Brooks said: “Show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won’t usually need your flowcharts; they’ll be obvious.”
In this talk we're going to see practical normalisation examples and their benefits, and also review some anti-patterns and their typical PostgreSQL solutions, including Denormalization techniques thanks to advanced Data Types.
Intro to MongoDB
Get a jumpstart on MongoDB, use cases, and next steps for building your first app with Buzz Moschetti, MongoDB Enterprise Architect.
@BuzzMoschetti
Simon Elliston Ball – When to NoSQL and When to Know SQL - NoSQL matters Barc... | NoSQLmatters
Simon Elliston Ball – When to NoSQL and When to Know SQL
With NoSQL, NewSQL and plain old SQL, there are so many tools around it's not always clear which is the right one for the job. This is a look at a series of NoSQL technologies, comparing them against traditional SQL technology. I'll compare real use cases and show how they are solved with both NoSQL options, and traditional SQL servers, and then see who wins. We'll look at some code and architecture examples that fit a variety of NoSQL techniques, and some where SQL is a better answer. We'll see some big data problems, little data problems, and a bunch of new and old database technologies to find whatever it takes to solve the problem. By the end you'll hopefully know more NoSQL, and maybe even have a few new tricks with SQL, and what's more how to choose the right tool for the job.
SQL tricks and cheats, or the obvious nearby. This document discusses various SQL tricks and tips including:
1. Different ways to generate unique IDs for table rows such as UUIDs, sequences, and auto-increment fields.
2. Tips for database design such as using ER diagrams and normalizing data.
3. Examples of DDL tricks like defining foreign keys and column data types.
4. DML tricks including selecting data, concatenating columns, and handling null values.
The document provides solutions and examples for common SQL problems through various tricks and features to help optimize database and query design. It aims to remind readers of useful SQL features and show flexibility in solutions.
This document summarizes the new features of JPA 2.0 as defined in JSR 317. Key additions include expanded object-relational mappings, improved domain modeling capabilities, enhancements to the Java Persistence query language including a Criteria API, standardization of configuration hints, support for validation via JSR 303, and second level caching.
Arm yourself with Domain Driven Security. It's time to slay some security trolls | Omegapoint Academy
We all know we have people like Anonymous, LulzSec, and NSA around. With this in mind, shouldn't we start thinking about the security of our systems? Well, of course. But, could you turn your knowledge of DDD into an advantage for understanding and counteracting security vulnerabilities? Yes, you could. This session is about exactly that. "Business" and "technical" attacks are two kinds of attacks, where the latter is the most famous, e.g. SQL Injection and Cross-Site Scripting. But this doesn't mean business attacks are less harmful. On the contrary, attacks on the business tend to be extremely sophisticated and powerful as they often leave the infrastructure intact and trigger no alarms. Domain Driven Security is the field that counteracts both types of attacks by using tools and mindsets from DDD in a clever way.
This document provides an overview of building a MEHN stack application from scratch. It discusses MongoDB concepts like CRUD operations, schemas, and connecting MongoDB to an application. It also covers using Express and Handlebars for the application framework. The presenter will demonstrate a full CRUD application in MongoDB with Express and Handlebars over the next two days, starting from initializing the project and building out functionality.
The document discusses NoSQL databases and when they may be preferable to traditional SQL databases. It notes that NoSQL databases are non-relational, support horizontal scaling, and allow for new data models that can improve application development flexibility. Examples showing data structures and queries in MongoDB are provided. The document also summarizes pros and cons of different database types from 2000-2010.
The document discusses NoSQL databases and when they may be preferable to traditional SQL databases. It notes that NoSQL databases are non-relational, support horizontal scaling, and allow for new data models that can improve application development flexibility. Examples showing data structures and queries in MongoDB are provided. Reasons for different database preferences depending on needs like transactions, data structure, performance, and agility are also summarized.
This document discusses CRUD operations in MongoDB including insert, read, update, and delete. It also covers concepts like object IDs, references between documents, sorting, security using authentication, and scaling MongoDB by adding resources or sharding data across multiple servers.
This document discusses MongoDB and the needs of Rivera Group, an IT services company. It notes that Rivera Group has been using MongoDB since 2012 to store large, multi-dimensional datasets with heavy read/write and audit requirements. The document outlines some of the challenges Rivera Group faces around indexing, aggregation, and flexibility in querying datasets.
Eagle6 is a product that use system artifacts to create a replica model that represents a near real-time view of system architecture. Eagle6 was built to collect system data (log files, application source code, etc.) and to link system behaviors in such a way that the user is able to quickly identify risks associated with unknown or unwanted behavioral events that may result in unknown impacts to seemingly unrelated down-stream systems. This session is designed to present the capabilities of the Eagle6 modeling product and how we are using MongoDB to support near-real-time analysis of large disparate datasets.
MongoDB, PHP and the cloud - php cloud summit 2011 | Steven Francia
An introduction to using MongoDB with PHP.
Walking through the basics of schema design, connecting to a DB, performing CRUD operations and queries in PHP.
MongoDB runs great in the cloud, but there are some things you should know. In this session we'll explore scaling and performance characteristics of running Mongo in the cloud as well as best practices for running on platforms like Amazon EC2.
This document provides instructions for installing MongoDB on Windows and CentOS. It outlines 5 steps for installing on Windows which include downloading MongoDB, creating a data folder, extracting the download package, connecting to MongoDB using mongo.exe, and testing with sample data. It also outlines 5 steps for installing on CentOS that mirror the Windows steps. The document then discusses additional MongoDB concepts like connecting to databases, creating collections and inserting documents, using cursors, querying for specific documents, and core CRUD operations.
Big Data: Guidelines and Examples for the Enterprise Decision Maker | MongoDB
This document provides an overview of a real-time directed content system that uses MongoDB, Hadoop, and MapReduce. It describes:
- The key participants in the system and their roles in generating, analyzing, and operating on data
- An architecture that uses MongoDB for real-time user profiling and content recommendations, Hadoop for periodic analytics on user profiles and content tags, and MapReduce jobs to update the profiles
- How the system works over time to continuously update user profiles based on their interactions with content, rerun analytics daily to update tags and baselines, and make recommendations based on the updated profiles
- How the system supports both real-time and long-term analytics needs through this integrated approach.
The document discusses techniques for evolutionary database development in an agile team. It recommends that the database administrator (DBA) work closely with other roles to iteratively refactor the database schema through small, frequent changes. It also emphasizes automated testing and deployment of database changes to safely evolve the database design over time.
This document provides C# code examples for connecting to and interacting with a SQL Server database using the Connect for .NET SQL Server data provider. It includes examples of retrieving data using a DataReader, executing local and distributed transactions, using a CommandBuilder to generate SQL statements, and more. Sample tables (emp and dept) are provided that can be created using SQL scripts or the data provider to support the code examples.
The document proposes an IT infrastructure for Shiv LLC, a company with locations in Los Angeles, Dallas, and Houston. It recommends implementing an Active Directory domain to enable communication and file sharing across the three locations. A centralized file server would store common files and applications. Each location would have its own local area network, connected to the other sites and to the internet via VPN. Firewalls, antivirus software, and regular backups would help secure the network and protect company data. The design allows for future growth and expansion as the company scales up.
MongoDB SoCal 2020: Migrate Anything* to MongoDB Atlas | MongoDB
This presentation discusses migrating data from other data stores to MongoDB Atlas. It begins by explaining why MongoDB and Atlas are good choices for data management. Several preparation steps are covered, including sizing the target Atlas cluster, increasing the source oplog, and testing connectivity. Live migration, mongomirror, and dump/restore options are presented for migrating between replicasets or sharded clusters. Post-migration steps like monitoring and backups are also discussed. Finally, migrating from other data stores like AWS DocumentDB, Azure CosmosDB, DynamoDB, and relational databases are briefly covered.
MongoDB SoCal 2020: Go on a Data Safari with MongoDB Charts! | MongoDB
These days, everyone is expected to be a data analyst. But with so much data available, how can you make sense of it and be sure you're making the best decisions? One great approach is to use data visualizations. In this session, we take a complex dataset and show how the breadth of capabilities in MongoDB Charts can help you turn bits and bytes into insights.
MongoDB SoCal 2020: Using MongoDB Services in Kubernetes: Any Platform, Devel... | MongoDB
MongoDB Kubernetes operator and MongoDB Open Service Broker are ready for production operations. Learn about how MongoDB can be used with the most popular container orchestration platform, Kubernetes, and bring self-service, persistent storage to your containerized applications. A demo will show you how easy it is to enable MongoDB clusters as an External Service using the Open Service Broker API for MongoDB
MongoDB SoCal 2020: A Complete Methodology of Data Modeling for MongoDB | MongoDB
Are you new to schema design for MongoDB, or are you looking for a more complete or agile process than what you are following currently? In this talk, we will guide you through the phases of a flexible methodology that you can apply to projects ranging from small to large with very demanding requirements.
MongoDB SoCal 2020: From Pharmacist to Analyst: Leveraging MongoDB for Real-T... | MongoDB
Humana, like many companies, is tackling the challenge of creating real-time insights from data that is diverse and rapidly changing. This is our journey of how we used MongoDB to combine traditional batch approaches with streaming technologies to provide continuous alerting capabilities from real-time data streams.
MongoDB SoCal 2020: Best Practices for Working with IoT and Time-series Data | MongoDB
Time series data is increasingly at the heart of modern applications - think IoT, stock trading, clickstreams, social media, and more. With the move from batch to real time systems, the efficient capture and analysis of time series data can enable organizations to better detect and respond to events ahead of their competitors or to improve operational efficiency to reduce cost and risk. Working with time series data is often different from regular application data, and there are best practices you should observe.
This talk covers:
Common components of an IoT solution
The challenges involved with managing time-series data in IoT applications
Different schema designs, and how these affect memory and disk utilization – two critical factors in application performance (see the bucketing sketch after this description).
How to query, analyze and present IoT time-series data using MongoDB Compass and MongoDB Charts
At the end of the session, you will have a better understanding of key best practices in managing IoT time-series data with MongoDB.
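One widely used design is the bucket pattern sketched below (the sensor and field names are hypothetical): store one document per device per hour rather than one document per reading, so far fewer documents and index entries compete for RAM and disk.

db.readings.updateOne(
  { sensorId: "pump-17", hour: ISODate("2020-03-01T10:00:00Z") },
  { $push: { samples: { t: ISODate("2020-03-01T10:04:12Z"), temp: 71.2 } },
    $inc:  { count: 1 },
    $min:  { minTemp: 71.2 },
    $max:  { maxTemp: 71.2 } },
  { upsert: true }   // create the hour bucket on the first reading, append afterwards
)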
Join this talk and test session with a MongoDB Developer Advocate where you'll go over the setup, configuration, and deployment of an Atlas environment. Create a service that you can take back in a production-ready state and prepare to unleash your inner genius.
MongoDB .local San Francisco 2020: Powering the new age data demands [Infosys] | MongoDB
Our clients have unique use cases and data patterns that mandate the choice of a particular strategy. To implement these strategies, it is mandatory that we unlearn a lot of relational concepts while designing and rapidly developing efficient applications on NoSQL. In this session, we will talk about some of our client use cases, the strategies we have adopted, and the features of MongoDB that assisted in implementing these strategies.
MongoDB .local San Francisco 2020: Using Client Side Encryption in MongoDB 4.2 | MongoDB
Encryption is not a new concept to MongoDB. Encryption may occur in-transit (with TLS) and at-rest (with the encrypted storage engine). But MongoDB 4.2 introduces support for Client Side Encryption, ensuring the most sensitive data is encrypted before ever leaving the client application. Even full access to your MongoDB servers is not enough to decrypt this data. And better yet, Client Side Encryption can be enabled at the "flick of a switch".
This session covers using Client Side Encryption in your applications. This includes the necessary setup, how to encrypt data without sacrificing queryability, and what trade-offs to expect.
MongoDB .local San Francisco 2020: Using MongoDB Services in Kubernetes: any ... | MongoDB
MongoDB Kubernetes operator is ready for prime-time. Learn how MongoDB can be used with the most popular orchestration platform, Kubernetes, and bring self-service, persistent storage to your containerized applications.
MongoDB .local San Francisco 2020: Go on a Data Safari with MongoDB Charts! | MongoDB
These days, everyone is expected to be a data analyst. But with so much data available, how can you make sense of it and be sure you're making the best decisions? One great approach is to use data visualizations. In this session, we take a complex dataset and show how the breadth of capabilities in MongoDB Charts can help you turn bits and bytes into insights.
MongoDB .local San Francisco 2020: From SQL to NoSQL -- Changing Your Mindset | MongoDB
When you need to model data, is your first instinct to start breaking it down into rows and columns? Mine used to be too. When you want to develop apps in a modern, agile way, NoSQL databases can be the best option. Come to this talk to learn how to take advantage of all that NoSQL databases have to offer and discover the benefits of changing your mindset from the legacy, tabular way of modeling data. We’ll compare and contrast the terms and concepts in SQL databases and MongoDB, explain the benefits of using MongoDB compared to SQL databases, and walk through data modeling basics so you feel confident as you begin using MongoDB.
MongoDB .local San Francisco 2020: MongoDB Atlas Jumpstart | MongoDB
Join this talk and test session with a MongoDB Developer Advocate where you'll go over the setup, configuration, and deployment of an Atlas environment. Create a service that you can take back in a production-ready state and prepare to unleash your inner genius.
MongoDB .local San Francisco 2020: Tips and Tricks++ for Querying and Indexin... | MongoDB
The document discusses guidelines for ordering fields in compound indexes to optimize query performance. It recommends the E-S-R approach: placing equality fields first, followed by sort fields, and range fields last. This allows indexes to leverage equality matches, provide non-blocking sorts, and minimize scanning. Examples show how indexes ordered by these guidelines can support queries more efficiently by narrowing the search bounds.
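A small sketch of the E-S-R ordering with hypothetical fields: equality on status, sort on orderDate, range on amount.

// The query: an equality match, a sort, and a range filter
db.orders.find({ status: "shipped", amount: { $gt: 100 } }).sort({ orderDate: -1 })
// The supporting index: Equality field first, Sort field next, Range field last
db.orders.createIndex({ status: 1, orderDate: -1, amount: 1 })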
MongoDB .local San Francisco 2020: Aggregation Pipeline Power++ | MongoDB
Aggregation pipeline has been able to power your analysis of data since version 2.2. In 4.2 we added more power and now you can use it for more powerful queries, updates, and outputting your data to existing collections. Come hear how you can do everything with the pipeline, including single-view, ETL, data roll-ups and materialized views.
MongoDB .local San Francisco 2020: A Complete Methodology of Data Modeling fo... | MongoDB
The document describes a methodology for data modeling with MongoDB. It begins by recognizing the differences between document and tabular databases, then outlines a three step methodology: 1) describe the workload by listing queries, 2) identify and model relationships between entities, and 3) apply relevant patterns when modeling for MongoDB. The document uses examples around modeling a coffee shop franchise to illustrate modeling approaches and techniques.
MongoDB .local San Francisco 2020: MongoDB Atlas Data Lake Technical Deep Dive | MongoDB
MongoDB Atlas Data Lake is a new service offered by MongoDB Atlas. Many organizations store long term, archival data in cost-effective storage like S3, GCP, and Azure Blobs. However, many of them do not have robust systems or tools to effectively utilize large amounts of data to inform decision making. MongoDB Atlas Data Lake is a service allowing organizations to analyze their long-term data to discover a wealth of information about their business.
This session will take a deep dive into the features that are currently available in MongoDB Atlas Data Lake and how they are implemented. In addition, we'll discuss future plans and opportunities and offer ample Q&A time with the engineers on the project.
MongoDB .local San Francisco 2020: Developing Alexa Skills with MongoDB & Golang | MongoDB
Virtual assistants are becoming the new norm when it comes to daily life, with Amazon’s Alexa being the leader in the space. As a developer, not only do you need to make web and mobile compliant applications, but you need to be able to support virtual assistants like Alexa. However, the process isn’t quite the same between the platforms.
How do you handle requests? Where do you store your data and work with it to create meaningful responses with little delay? How much of your code needs to change between platforms?
In this session we’ll see how to design and develop applications known as Skills for Amazon Alexa powered devices using the Go programming language and MongoDB.
MongoDB .local Paris 2020: Realm : l'ingrédient secret pour de meilleures app...MongoDB
…to Core Data, appreciated by hundreds of thousands of developers. Learn what makes Realm special and how it can be used to build better applications faster.
MongoDB .local Paris 2020: Upply @MongoDB : Upply : Quand le Machine Learning...MongoDB
It has never been easier to order online and get delivery in under 48 hours, very often for free. That ease of use hides a complex market worth more than $8 trillion.
Data is well known in the supply chain world (routes, goods information, customs, …), but the value of this operational data remains largely untapped. By combining domain expertise with data science, Upply is redefining the fundamentals of the supply chain, enabling every player to overcome market volatility and inefficiency.
3. Before We Begin
• This webinar is being recorded
• Use the chat window for
  • Technical assistance
  • Q&A
• The MongoDB team will answer quick questions in real time
• “Common” questions will be reviewed at the end of the webinar
4. Who is your Presenter?
• Programmer
• Developer Manager
• Entrepreneur
• Geek
• Sometime pre-sales guy
5. MongoDB: The New Default Database
• Document Data Model
• Open Source
• Fully Featured
• High Performance
• Scalable
{
  name: "John Smith",
  pfxs: ["Dr.", "Mr."],
  address: "10 3rd St.",
  phone: {
    home: 1234567890,
    mobile: 1234568138
  }
}
7. Functions of a Database
• Durable data storage
• Structural representation of data
• CRUD operations
• Authentication and authorization
• Programmer Efficiency?
8. What Are Your Developers Doing All Day?
1964 - IMS
1977 - Oracle
1984 - dBASE
1991 - MySQL
2009 - MongoDB
9. The Challenge is Product Development
1976 vs. 2016
• Business Data Goals: Process payroll monthly -> Process real-time billing to the minute for 1M customers
• Release Schedule: Semi-annually -> Monthly
• Application/Code: COBOL, Fortran, Algol, PL/1, assembler, proprietary tools -> Python, Java, Node.js, Ruby, PHP, Perl, Scala, Erlang and the rest
• Tools: None -> Apache, LAMP, MEAN, Eclipse, IntelliJ, SourceForge, etc.
• Database: I/VSAM, early RDBMS -> RDBMS, NoSQL
10. Rectangles are 1976. Maps and Lists are 2016
{ customer_id : 1,
  first_name : "Mark",
  last_name : "Smith",
  city : "San Francisco",
  phones : [
    { type : "work",
      number : "1-800-555-1212" },
    { type : "home",
      number : "1-800-555-1313",
      DNC : true },
    { type : "home",
      number : "1-800-555-1414",
      DNC : true }
  ]
}
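For reference, a minimal sketch of how this "maps and lists" shape could be built in Java with the driver's org.bson.Document class; this is illustrative only, with the field values taken from the document above:

    import org.bson.Document;
    import java.util.Arrays;

    // Build the nested "map and list" customer shape shown above.
    Document customer = new Document("customer_id", 1)
            .append("first_name", "Mark")
            .append("last_name", "Smith")
            .append("city", "San Francisco")
            .append("phones", Arrays.asList(
                new Document("type", "work").append("number", "1-800-555-1212"),
                new Document("type", "home").append("number", "1-800-555-1313").append("DNC", true),
                new Document("type", "home").append("number", "1-800-555-1414").append("DNC", true)));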
11. An Actual Code Example
Let’s compare and contrast RDBMS/SQL to MongoDB development using Java over the course of a few weeks.
Some ground rules:
1. Observe rules of Software Engineering 101: assume separation of application, Data Access Layer, and database implementation
2. The Data Access Layer must be able to
   a. Expose simple, functional, data-only interfaces to the application
      • No ORM, frameworks, compile-time bindings, special tools
   b. Exploit high-performance features of the database
3. Focus on core data-handling code and avoid distractions that require the same amount of work in both technologies
   a. No exception or error handling
   b. Leave out DB connection and other setup resources
4. Day counts are a proxy for progress, not actual time to complete the indicated task
5. Don’t expect to cut and paste this code
12. The Task: Saving and Fetching Contact Data
Start with this simple, flat shape in the Data Access Layer:
    Map m = new HashMap();
    m.put("name", "Joe D");
    m.put("id", "K1");
And assume we save it in this way:
    id = save(Map m)
And assume we fetch one by primary key in this way:
    Map m = fetch(String id)
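Put another way, the Data Access Layer contract the slides assume could be sketched as a tiny Java interface; this is hypothetical, since the slides only show the two call shapes:

    import java.util.Map;

    // Hypothetical DAL contract implied by the slides.
    interface ContactDAL {
        String save(Map m);      // persist a contact, return its id
        Map fetch(String id);    // fetch one contact by primary key
    }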
Brace yourself…..
13. Day 1: Initial efforts for both technologies

SQL
DDL: create table contact ( … )

init()
{
    contactInsertStmt = connection.prepareStatement
        ("insert into contact ( id, name ) values ( ?,? )");
    fetchStmt = connection.prepareStatement
        ("select id, name from contact where id = ?");
}

save(Map m)
{
    contactInsertStmt.setString(1, m.get("id"));
    contactInsertStmt.setString(2, m.get("name"));
    contactInsertStmt.execute();
}

Map fetch(String id)
{
    Map m = null;
    fetchStmt.setString(1, id);
    rs = fetchStmt.execute();
    if(rs.next()) {
        m = new HashMap();
        m.put("id", rs.getString(1));
        m.put("name", rs.getString(2));
    }
    return m;
}

MongoDB
DDL: none

save( Map m )
{
    collection.insert(new Document( m ));
}

Map fetch(String id)
{
    Map m = null;
    c = collection.find(eq( "id", id ));
    if( c.hasNext()) {
        m = (Map) c.next();
    }
    return m;
}

Resulting document:
{ "name" : "Joe D",
  "id" : "K1" }
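Per ground rule 3b, the slides deliberately leave out DB connection and other setup resources. For orientation only, a minimal sketch of that setup with the modern MongoDB Java driver might look like the following; the connection string, database name, and collection name are placeholder assumptions, not part of the original deck:

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import org.bson.Document;
    import static com.mongodb.client.model.Filters.eq;

    // Obtain the "collection" handle the slide code assumes already exists.
    MongoClient client = MongoClients.create("mongodb://localhost:27017");
    MongoCollection<Document> collection =
            client.getDatabase("test").getCollection("contact");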
14. Day 2: Add simple fields
m.put("name", "Joe D");
m.put("id", "K1");
m.put("title", "Mr.");
m.put("hireDate", new Date(2011, 11, 1));
• Capturing title and hireDate is part of adding a new business feature
• It was pretty easy to add two fields to the structure
• …but now we have to change our persistence code
15. SQL Day 2 (changes in bold)
DDL: alter table contact add title varchar(8);
     alter table contact add hireDate date;

init()
{
    contactInsertStmt = connection.prepareStatement
        ("insert into contact ( id, name, title, hiredate ) values ( ?,?,?,? )");
    fetchStmt = connection.prepareStatement
        ("select id, name, title, hiredate from contact where id = ?");
}

save(Map m)
{
    contactInsertStmt.setString(1, m.get("id"));
    contactInsertStmt.setString(2, m.get("name"));
    contactInsertStmt.setString(3, m.get("title"));
    contactInsertStmt.setDate(4, m.get("hireDate"));
    contactInsertStmt.execute();
}

Map fetch(String id)
{
    Map m = null;
    fetchStmt.setString(1, id);
    rs = fetchStmt.execute();
    if(rs.next()) {
        m = new HashMap();
        m.put("id", rs.getString(1));
        m.put("name", rs.getString(2));
        m.put("title", rs.getString(3));
        m.put("hireDate", rs.getDate(4));
    }
    return m;
}

Consequences:
1. Code release schedule linked to database upgrade (new code cannot run on old schema)
2. Issues with case sensitivity starting to creep in (many RDBMSs are case insensitive for column names, but code is case sensitive)
3. Changes require careful mods in 4 places
4. Beginning of technical debt
16. MongoDB Day 2
save( Map m )
{
    collection.insert(new Document( m ));
}

Map fetch(String id)
{
    Map m = null;
    c = collection.find(eq( "id", id ));
    if( c.hasNext()) {
        m = (Map) c.next();
    }
    return m;
}

✔ NO CHANGE

Advantages:
1. Zero time and money spent on overhead code
2. Code and database not physically linked
3. New material with more fields can be added into existing collections; backfill is optional
4. Names of fields in the database precisely match key names in the code layer, and match directly on name, not indirectly via positional offset
5. No technical debt is created
17. Day 3: Add list of phone numbers
m.put("name", "Joe D");
m.put("id", "K1");
m.put("title", "Mr.");
m.put("hireDate", new Date(2011, 11, 1));

n1.put("type", "work");
n1.put("number", "1-800-555-1212");
list.add(n1);
n2.put("type", "home");
n2.put("number", "1-866-444-3131");
list.add(n2);
m.put("phones", list);

• It was still pretty easy to add this data to the structure
• …but meanwhile, in the persistence code…
REALLY brace yourself…
18. SQL Day 3 changes: Option 1: Assume just 1 work and 1 home phone number
DDL: alter table contact add work_phone varchar(16);
     alter table contact add home_phone varchar(16);

init()
{
    contactInsertStmt = connection.prepareStatement
        ("insert into contact ( id, name, title, hiredate, work_phone, home_phone )
          values ( ?,?,?,?,?,? )");
    fetchStmt = connection.prepareStatement
        ("select id, name, title, hiredate, work_phone, home_phone
          from contact where id = ?");
}

save(Map m)
{
    contactInsertStmt.setString(1, m.get("id"));
    contactInsertStmt.setString(2, m.get("name"));
    contactInsertStmt.setString(3, m.get("title"));
    contactInsertStmt.setDate(4, m.get("hireDate"));
    for(Map onePhone : m.get("phones")) {
        String t = onePhone.get("type");
        String n = onePhone.get("number");
        if(t.equals("work")) {
            contactInsertStmt.setString(5, n);
        } else if(t.equals("home")) {
            contactInsertStmt.setString(6, n);
        }
    }
    contactInsertStmt.execute();
}

Map fetch(String id)
{
    Map m = null;
    fetchStmt.setString(1, id);
    rs = fetchStmt.execute();
    if(rs.next()) {
        m = new HashMap();
        List list = new ArrayList();
        m.put("id", rs.getString(1));
        m.put("name", rs.getString(2));
        m.put("title", rs.getString(3));
        m.put("hireDate", rs.getDate(4));
        Map onePhone;
        onePhone = new HashMap();
        onePhone.put("type", "work");
        onePhone.put("number", rs.getString(5));
        list.add(onePhone);
        onePhone = new HashMap();
        onePhone.put("type", "home");
        onePhone.put("number", rs.getString(6));
        list.add(onePhone);
        m.put("phones", list);
    }
    return m;
}

This is just plain bad….
19. SQL Day 3 changes: Option 2: Proper approach with multiple phone numbers
DDL: create table phones ( … )

init()
{
    contactInsertStmt = connection.prepareStatement
        ("insert into contact ( id, name, title, hiredate ) values ( ?,?,?,? )");
    c2stmt = connection.prepareStatement
        ("insert into phones (id, type, number) values (?, ?, ?)");
    fetchStmt = connection.prepareStatement
        ("select id, name, title, hiredate, type, number from contact, phones
          where phones.id = contact.id and contact.id = ?");
}

save(Map m)
{
    startTrans();
    contactInsertStmt.setString(1, m.get("id"));
    contactInsertStmt.setString(2, m.get("name"));
    contactInsertStmt.setString(3, m.get("title"));
    contactInsertStmt.setDate(4, m.get("hireDate"));
    for(Map onePhone : m.get("phones")) {
        c2stmt.setString(1, m.get("id"));
        c2stmt.setString(2, onePhone.get("type"));
        c2stmt.setString(3, onePhone.get("number"));
        c2stmt.execute();
    }
    contactInsertStmt.execute();
    endTrans();
}

Map fetch(String id)
{
    Map m = null;
    fetchStmt.setString(1, id);
    rs = fetchStmt.execute();
    int i = 0;
    List list = new ArrayList();
    while (rs.next()) {
        if(i == 0) {
            m = new HashMap();
            m.put("id", rs.getString(1));
            m.put("name", rs.getString(2));
            m.put("title", rs.getString(3));
            m.put("hireDate", rs.getDate(4));
            m.put("phones", list);
        }
        Map onePhone = new HashMap();
        onePhone.put("type", rs.getString(5));
        onePhone.put("number", rs.getString(6));
        list.add(onePhone);
        i++;
    }
    return m;
}

This took time and money
20. SQL Day 5: Zero or More Entries
init()
{
    contactInsertStmt = connection.prepareStatement
        ("insert into contact ( id, name, title, hiredate ) values ( ?,?,?,? )");
    c2stmt = connection.prepareStatement
        ("insert into phones (id, type, number) values (?, ?, ?)");
    fetchStmt = connection.prepareStatement
        ("select A.id, A.name, A.title, A.hiredate, B.type, B.number
          from contact A left outer join phones B on (A.id = B.id)
          where A.id = ?");
}

Whoops! And it’s also wrong!
We did not design the query accounting for contacts that have no phone number.
Thus, we have to change the join to an outer join.
But this ALSO means we have to change the unwind logic:

while (rs.next()) {
    if(i == 0) {
        // …
    }
    String s = rs.getString(5);
    if(s != null) {
        Map onePhone = new HashMap();
        onePhone.put("type", s);
        onePhone.put("number", rs.getString(6));
        list.add(onePhone);
    }
}

This took more time and money!
…but at least we have a DAL… right?
21. MongoDB Day 3
save( Map m )
{
    collection.insert(new Document( m ));
}

Map fetch(String id)
{
    Map m = null;
    c = collection.find(eq( "id", id ));
    if( c.hasNext()) {
        m = (Map) c.next();
    }
    return m;
}

✔ NO CHANGE

Advantages:
1. Zero time and money spent on overhead code
2. No need to fear fields that are “naturally occurring” lists containing data specific to the parent structure, and which thus do not benefit from normalization and referential integrity
3. Safe from “Zero or More” entities
22. By Day 14, our structure looks like this:
n4.put("geo", "US-EAST");
n4.put("startupApps", new String[] { "app1", "app2", "app3" } );
list2.add(n4);
n4.put("geo", "EMEA");
n4.put("startupApps", new String[] { "app6" } );
n4.put("useLocalNumberFormats", false);
list2.add(n4);
m.put("preferences", list2);

n6.put("optOut", true);
n6.put("assertDate", someDate);
seclist.add(n6);
m.put("attestations", seclist);

m.put("security", mapOfDataCreatedByExternalSource);
23. SQL Day 14
Error: Could not fit all the code into this space.
But very likely, among other things:
• n4.put("startupApps", new String[]{"app1","app2","app3"});
  was implemented as a single semicolon-delimited string, or we had to create another table and change the DAL
• m.put("security", anotherMapOfData);
  was implemented by flattening it out and storing a subset of fields, or as a blob
24. MongoDB Day 14 – and every other day
save( Map m )
{
    collection.insert(new Document( m ));
}

Map fetch(String id)
{
    Map m = null;
    c = collection.find(eq( "id", id ));
    if( c.hasNext()) {
        m = (Map) c.next();
    }
    return m;
}

✔ NO CHANGE

Advantages:
1. Zero time and money spent on overhead code
2. Persistence is so easy, flexible, and backward compatible that the persistor does not upward-influence the shapes we want to persist, i.e. the tail does not wag the dog
25. But what if we must do a join?
Both RDBMS and MongoDB will have a PhoneTransactions table/collection.

Contact:
{ customer_id : 1,
  first_name : "Mark",
  last_name : "Smith",
  city : "San Francisco",
  phones : [
    { type : "work",
      number : "1-800-555-1212" },
    { type : "home",
      number : "1-800-555-1313",
      DNC : true },
    { type : "home",
      number : "1-800-555-1414",
      DNC : true }
  ]
}

PhoneTransactions:
{ number : "1-800-555-1212",
  target : "1-999-238-3423",
  duration : 20 }
{ number : "1-800-555-1212",
  target : "1-444-785-6611",
  duration : 243 }
{ number : "1-800-555-1414",
  target : "1-645-331-4345",
  duration : 132 }
{ number : "1-800-555-1414",
  target : "1-990-875-2134",
  duration : 71 }
26. SQL Join Attempt #1
select A.id, A.lname, B.type, B.number, C.target, C.duration
from contact A, phones B, phonestx C
where A.id = B.id and B.number = C.number
id | lname | type | number | target | duration
-----+--------------+------+----------------+----------------+----------
g9 | Moschetti | home | 1-900-555-1212 | 1-222-707-7070 | 23
g10 | Kalan | work | 1-999-444-9999 | 1-222-907-7071 | 7
g9 | Moschetti | work | 1-800-989-2231 | 1-987-707-7072 | 9
g9 | Moschetti | home | 1-900-555-1212 | 1-222-707-7071 | 7
g9 | Moschetti | home | 1-777-999-1212 | 1-222-807-7070 | 23
g9 | Moschetti | home | 1-777-999-1212 | 1-222-807-7071 | 7
g10 | Kalan | work | 1-999-444-9999 | 1-222-907-7070 | 23
g9 | Moschetti | home | 1-900-555-1212 | 1-222-707-7072 | 9
g10 | Kalan | work | 1-999-444-9999 | 1-222-907-7072 | 9
g9 | Moschetti | home | 1-777-999-1212 | 1-222-807-7072 | 9
How to turn this into a list of names –
each with a list of numbers, each of those with a list of target
numbers?
27. SQL Unwind Attempt #1
Map idmap = new HashMap();
ResultSet rs = fetchStmt.execute();
while (rs.next()) {
    String id = rs.getString("id");
    String nmbr = rs.getString("number");
    Map snum;
    List tnum;
    if((snum = (Map) idmap.get(id)) == null) {
        snum = new HashMap();
        idmap.put(id, snum);
    }
    if((tnum = (List) snum.get(nmbr)) == null) {
        tnum = new ArrayList();
        snum.put(nmbr, tnum);
    }
    Map info = new HashMap();
    info.put("target", rs.getString("target"));
    info.put("duration", rs.getInt("duration"));
    tnum.add(info);
}
// idmap["g9"]["1-900-555-1212"] = ({target: "1-222-707-7070", duration: 23}, …)
28. SQL Join Attempt #2
select A.id, A.lname, B.type, B.number, C.target, C.duration
from contact A, phones B, phonestx C
where A.id = B.id and B.number = C.number order by A.id, B.number
id | lname | type | number | target | duration
-----+--------------+------+----------------+----------------+----------
g10 | Kalan | work | 1-999-444-9999 | 1-222-907-7072 | 9
g10 | Kalan | work | 1-999-444-9999 | 1-222-907-7070 | 23
g10 | Kalan | work | 1-999-444-9999 | 1-222-907-7071 | 7
g9 | Moschetti | home | 1-777-999-1212 | 1-222-807-7072 | 9
g9 | Moschetti | home | 1-777-999-1212 | 1-222-807-7070 | 23
g9 | Moschetti | home | 1-777-999-1212 | 1-222-807-7071 | 7
g9 | Moschetti | work | 1-800-989-2231 | 1-987-707-7072 | 9
g9 | Moschetti | home | 1-900-555-1212 | 1-222-707-7071 | 7
g9 | Moschetti | home | 1-900-555-1212 | 1-222-707-7072 | 9
g9 | Moschetti | home | 1-900-555-1212 | 1-222-707-7070 | 23
“Early bail out” from cursor is now possible –
but logic to construct list of source and target numbers is similar
29. SQL is about Disassembly
String s = "select A, B, C, D, E, F from T1,T2,T3
    where T1.col = T2.col and T2.col2 = T3.col2 and X = Y
    and X2 != Y2 and G > 10 and G < 100 and TO_DATE(' …";
ResultSet rs = execute(s);
while(rs.next()) {
    if(new column1 value from T1) {
        set up new Object1;
    }
    if(new column2 value from T2) {
        set up new Object2;
    }
    if(new column3 value from T3) {
        set up new Object3;
    }
    populate maps, lists and scalars
}

• Design a Big Query, including business logic, to grab all the data up front
• Throw it at the engine
• Disassemble the Big Rectangle into usable objects, with logic implicit in changes in column values
30. MongoDB is about Assembly
DIY:
Cursor c = coll1.find({"X":"Y"});
while(c.hasNext()) {
    populate maps, lists and scalars;
    Cursor c2 = coll2.find(logic + key from c);
    while(c2.hasNext()) {
        populate maps, lists and scalars;
        Cursor c3 = coll3.find(logic + key from c2);
        while(c3.hasNext()) {
            populate maps, lists and scalars;
        }
    }
}
OR assemble usable objects incrementally with explicit calls to $lookup and $graphLookup
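As a hedged illustration of that second option, a server-side assembly of the contact/PhoneTransactions join from the earlier slides could look roughly like this with the Java driver's aggregation helpers; the collection handle, collection names, and field names are assumptions taken from the examples above:

    import com.mongodb.client.AggregateIterable;
    import com.mongodb.client.model.Aggregates;
    import org.bson.Document;
    import java.util.Arrays;
    import static com.mongodb.client.model.Filters.eq;

    // Let the server join phone transactions onto a contact with $lookup.
    AggregateIterable<Document> result = contact.aggregate(Arrays.asList(
            Aggregates.match(eq("id", "g9")),          // pick one contact
            Aggregates.lookup("phonestx",               // from: the transactions collection
                              "phones.number",          // localField
                              "number",                 // foreignField
                              "phoneTransactions")));   // as: array added to each contact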
32. But what about “real” queries?
• The MongoDB query language is a physical map-of-maps structure, not a String (see the sketch below)
• Operators (e.g. AND, OR, GT, EQ, etc.) and arguments are keys and values in a cascade of Maps
• No grammar to parse, no templates to fill in, no whitespace, no escaping quotes, no parentheses, no punctuation
• The same paradigm used to manipulate data is used to manipulate query expressions
• …which is also, by the way, the same paradigm for working with MongoDB metadata and explain()
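A minimal sketch of that "operators as keys in a cascade of Maps" idea, assuming a MongoCollection<Document> named contact and the SeqNum field used later in the deck:

    import org.bson.Document;
    import com.mongodb.client.FindIterable;

    // The same expression the Filters helpers would produce, built by hand as nested maps.
    Document query = new Document("phones.type", "work")
            .append("SeqNum", new Document("$gt", 10000));   // the operator is just a key
    FindIterable<Document> docs = contact.find(query);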
33. Mongo Shell
JD10Gen:mugalyser jdrumgoole$ mongo
MongoDB shell version: 3.2.7
connecting to: test
MongoDB Enterprise > use MUGS
switched to db MUGS
MongoDB Enterprise > show collections
attendees
audit
groups
members
past_events
upcoming_events
MongoDB Enterprise >
MongoDB Enterprise > db.members.find( { "batchID" : 108, "member.member_name" : "Joe Drumgoole" }).pretty()
{
"_id" : ObjectId("58511bfbb26a8803b6b4d56c"),
"member" : {
"city" : "Dublin",
"events_attended" : 13,
"last_access_time" : ISODate("2016-12-13T15:45:27Z"),
"country" : "Ireland",
"member_id" : 99473492,
"chapters" : [
{
"urlname" : "London-MongoDB-User-Group",
"name" : "London MongoDB User Group",
"id" : 1775736
…
34. MongoDB Query Examples
Find all contacts with at least one work phone.

SQL CLI:
    select * from contact A, phones B
    where A.did = B.did and B.type = 'work';

MongoDB CLI:
    db.contact.find({"phones.type": "work"});

SQL in Java:
    String s = "select * from contact A, phones B where A.did = B.did and B.type = 'work'";
    ResultSet rs = execute(s);

MongoDB via Java driver:
    Cursor c = contact.find(eq("phones.type", "work"));
35. MongoDB Query Examples
Find all contacts with at least one work phone or hired after 2014-02-02.

SQL:
    select A.did, A.lname, A.hiredate, B.type, B.number
    from contact A left outer join phones B on (B.did = A.did)
    where B.type = 'work' or A.hiredate > '2014-02-02'::date

Java:
    db.contacts.find( or( eq( "phones.type", "work" ),
                          gt( "hiredate", new Date( 2014, 2, 2 )) ) );

CLI:
    db.contacts.find(
        { $or : [ { "phones.type" : "work" },
                  { "hiredate" : { $gt : new Date("2014-02-02T00:00:00.000Z") } } ] });
36. …and before you ask…
Yes, MongoDB query expressions support:
1. Sorting
2. Cursor size limit
3. Projection (asking for only parts of the rich shape to be returned)
4. Aggregation (“GROUP BY”) functions
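For example, a hedged sketch of the first three items with the Java driver; the collection handle and field names are assumed from the contact examples earlier in the deck:

    import com.mongodb.client.model.Projections;
    import com.mongodb.client.model.Sorts;
    import static com.mongodb.client.model.Filters.eq;

    // Sorting, cursor size limit, and projection on the contact collection.
    contact.find(eq("phones.type", "work"))
           .sort(Sorts.ascending("name"))                        // 1. sorting
           .limit(10)                                            // 2. cursor size limit
           .projection(Projections.include("name", "phones"));   // 3. projection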
45. Aggregation Pipeline Stages
• $match: Filter documents
• $geoNear: Geospherical query
• $project: Reshape documents
• $lookup: Left-outer equi joins
• $unwind: Expand documents
• $group: Summarize documents
• $sample: Randomly select a subset of documents
• $sort: Order documents
• $skip: Jump over a number of documents
• $limit: Limit number of documents
• $redact: Restrict documents
• $out: Send results to a new collection
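An illustrative pipeline combining a few of these stages; the collection handle and field names are assumptions carried over from the contact examples earlier in the deck:

    import com.mongodb.client.model.Accumulators;
    import com.mongodb.client.model.Aggregates;
    import com.mongodb.client.model.Sorts;
    import java.util.Arrays;
    import static com.mongodb.client.model.Filters.eq;

    // Count phone numbers per type for San Francisco contacts.
    contact.aggregate(Arrays.asList(
            Aggregates.match(eq("city", "San Francisco")),        // $match
            Aggregates.unwind("$phones"),                          // $unwind
            Aggregates.group("$phones.type",                       // $group
                    Accumulators.sum("count", 1)),
            Aggregates.sort(Sorts.descending("count"))));          // $sort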
46. The Fundamental Change with MongoDB
RDBMSs were designed in an era when:
• CPU and disk were slow and expensive
• Memory was VERY expensive
• Network? What network?
• Languages had limited means to dynamically reflect on their types
• Languages had poor support for richly structured types

Thus, the database had to:
• Act as combiner-coordinator of simpler types
• Define a rigid schema
• (Together with the code) optimize at compile time, not run time

In MongoDB, the data is the schema!
47. MongoDB and the Rich Map Ecosystem

Generic comparison of two records:
    Map expr = new HashMap();
    expr.put("myKey", "K1");
    DBObject a = collection.findOne(expr);
    expr.put("myKey", "K2");
    DBObject b = collection.findOne(expr);
    List<MapDiff.Difference> d = MapDiff.diff((Map)a, (Map)b);

Getting default values for a thing on a certain date and then overlaying user preferences (like for a calculation run):
    Map expr = new HashMap();
    expr.put("myKey", "DEFAULT");
    expr.put("createDate", new Date(2013, 11, 1));
    DBObject a = collection.findOne(expr);
    expr.clear();
    expr.put("myKey", "user1");
    DBObject b = otherCollectionPerhaps.findOne(expr);
    MapStack s = new MapStack();
    s.push((Map)a);
    s.push((Map)b);
    Map merged = s.project();

Runtime reflection of Maps and Lists enables generic, powerful utilities (MapDiff, MapStack) to be created once and used for all kinds of shapes, saving time and money.
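MapDiff and MapStack are the presenter's own utilities; the kind of generic, shape-agnostic comparison they enable is easy to sketch, though. The following recursive diff is purely hypothetical and is not the library referenced above:

    import java.util.*;

    // Hypothetical sketch: report keys whose values differ between two maps,
    // descending into nested Maps so it works for any document shape.
    static Map<String, Object[]> diff(Map<String, Object> a, Map<String, Object> b) {
        Map<String, Object[]> changed = new HashMap<>();
        Set<String> keys = new HashSet<>(a.keySet());
        keys.addAll(b.keySet());
        for (String k : keys) {
            Object va = a.get(k), vb = b.get(k);
            if (va instanceof Map && vb instanceof Map) {
                // Recurse into nested sub-documents; record the key only if something differs.
                Map<String, Object[]> sub = diff((Map<String, Object>) va, (Map<String, Object>) vb);
                if (!sub.isEmpty()) changed.put(k, new Object[] { va, vb });
            } else if (!Objects.equals(va, vb)) {
                changed.put(k, new Object[] { va, vb });
            }
        }
        return changed;
    }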
48. Lastly: A CLI with teeth

Try a query and show the diagnostics:
> db.contact.find({"SeqNum": {"$gt": 10000}}).explain();
{
    "cursor" : "BasicCursor",
    "n" : 200000,
    //...
    "millis" : 223
}

Run it 3 times with smaller and smaller chunks and create a vector of timing result pairs (size, time):
> for(v=[],i=0;i<3;i++) {
…   n = i*50000;
…   expr = {"SeqNum": {"$gt": n}};
…   v.push( [n, db.contact.find(expr).explain().millis] ); }

Let’s see that vector:
> v
[ [ 0, 225 ], [ 50000, 222 ], [ 100000, 220 ] ]

Use any other JavaScript you want inside the shell:
> load("jStat.js")
> jStat.stdev(v.map(function(p){return p[1];}))
2.0548046676563256

Party trick: save the explain() output back into a collection!
> for(i=0;i<3;i++) {
…   expr = {"SeqNum": {"$gt": i*1000}};
…   db.foo.insert(db.contact.find(expr).explain()); }
49. And There is More – Compass and Atlas
Compass
Atlas
50. What Does This Add Up To?
MongoDB harnesses the innovations of NoSQL while keeping the foundation of relational databases:
• Relational: Expressive Query Language, Strong Consistency, Secondary Indexes
• NoSQL: Flexibility, Scalability, Performance
Editor's Notes
#8: Relational really helped programmer efficiency by abstracting software away from hardware.
#9: Bad craziness with object oriented databases in the ’90s and XML databases in the early ’00s.
The fight to abstract away from the nuts and bolts.
Only one of these databases is not being sold anymore. But I bet someone out there is still using dBASE. My first paying job was data entry into a dBASE database.
#10: …but for the past 40 years, innovation at the business & application layers has outpaced innovation at the database layer
#13: You can solve any problem in computing with a Hashmap or a combination of Hashmaps. Remember all problems in computing can be solved by adding another layer of indirection.
#14: We don’t need a prepared statement for the query
#23: It was still pretty easy to add this data to the structure
Want to guess what the SQL persistence code looks like?
How about the MongoDB persistence code?
#28: This SCENARIO works ONLY for the whole thing or ONE DID.
It does not work for other access patterns.
#30: CONSEQUENCE: Logic split across the SQL query and the disassembly code.
FURTHER complicated with prepared statements & parameterization because it splits up the Big Query.
#39: A Series of Document Transformations
Executed in stages
Original input is a collection
Output as a cursor or a collection
Rich Library of Functions
Filter, compute, group, and summarize data
Output of one stage sent to input of next
Operations executed in sequential order
#40: Projection should create a new key rather than removing some
#41: Projection should create a new key rather than removing some
#42: Projection should create a new key rather than removing some
#43: Projection should create a new key rather than removing some
#44: Projection should create a new key rather than removing some
#45: Projection should create a new key rather than removing some
#51: MongoDB was built to address the way the world has changed while preserving the core database capabilities required to build functional apps
MongoDB is the only database that harnesses the innovations of NoSQL and maintains the foundation of relational databases