This is the final webinar of a Back to Basics series that will introduce you to the MongoDB database. This webinar will guide you through production deployment.
Back to Basics 2017: Introduction to Sharding (MongoDB)
Sharding is a method for distributing data across multiple machines. MongoDB uses sharding to support deployments with very large data sets and high throughput operations by providing the capability for horizontal scaling.
MongoDB auto sharding allows data to be automatically partitioned and distributed across multiple servers (shards) in a MongoDB cluster. The sharding process distributes data by a shard key, automatically balancing data as the system load changes. Queries are routed to the appropriate shards and can be executed in parallel across shards to improve performance. The config servers store metadata about shards and chunk distribution to enable auto sharding functionality.
Sharding in MongoDB allows for horizontal scaling of data and operations across multiple servers. When determining if sharding is needed, factors like available storage, query throughput, and response latency on a single server are considered. The number of shards can be calculated based on total required storage, working memory size, and input/output operations per second across servers. Different types of sharding include range, tag-aware, and hashed sharding. Choosing a high cardinality shard key that matches query patterns is important for performance. Reasons to shard include scaling to large data volumes and query loads, enabling local writes in a globally distributed deployment, and improving backup and restore times.
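The shard-count arithmetic described above can be sketched in a few lines: each resource dimension (storage, working set vs. RAM, IOPS) implies a minimum number of servers, and the cluster needs the maximum of the three. All capacity figures below are hypothetical assumptions for illustration, not recommendations.

```python
import math

def shards_needed(total_storage_gb, working_set_gb, total_iops,
                  disk_per_server_gb, ram_per_server_gb, iops_per_server):
    """Return the shard count implied by each resource dimension;
    the cluster needs the maximum of the three."""
    by_storage = math.ceil(total_storage_gb / disk_per_server_gb)
    by_ram = math.ceil(working_set_gb / ram_per_server_gb)
    by_iops = math.ceil(total_iops / iops_per_server)
    return max(by_storage, by_ram, by_iops)

# Example: 9 TB of data, a 1.2 TB working set and 30k IOPS, on servers
# with 2 TB disks, 128 GB RAM and 10k IOPS each. Here the working set
# is the binding constraint, so RAM determines the shard count.
print(shards_needed(9000, 1200, 30000, 2000, 128, 10000))  # 10
```

In practice growth projections and headroom would be factored in as well; this only shows the basic calculation.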
This document provides an overview of MongoDB sharding. It discusses how MongoDB addresses the need for horizontal scalability as data and throughput needs exceed the capabilities of a single machine. MongoDB uses sharding to partition data across multiple machines or shards. The key points are:
- MongoDB shards or partitions data by a shard key, distributing data ranges across shards for scalability.
- A configuration server stores metadata about sharding setup and chunk distribution. Mongos instances route queries to appropriate shards.
- MongoDB automatically splits and migrates chunks as data grows to balance load across shards.
- Setting up sharding in MongoDB requires minimal configuration, and a sharded cluster presents the same interface to applications as a single, unsharded database.
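The routing behaviour summarised in these points can be sketched as a toy router: each chunk owns a half-open range of the shard-key space, and a mongos-style router maps a key value to the shard holding the covering chunk. The chunk boundaries and shard names below are invented for illustration.

```python
# Each chunk owns a half-open key range [min, max) and lives on one shard.
chunks = [
    {"min": float("-inf"), "max": 100,          "shard": "shard0"},
    {"min": 100,           "max": 500,          "shard": "shard1"},
    {"min": 500,           "max": float("inf"), "shard": "shard2"},
]

def route(shard_key_value):
    """Return the shard owning the chunk that contains the key."""
    for chunk in chunks:
        if chunk["min"] <= shard_key_value < chunk["max"]:
            return chunk["shard"]
    raise ValueError("no chunk covers this key")

print(route(42))    # shard0
print(route(100))   # shard1
print(route(9999))  # shard2
```

A real router also caches chunk metadata from the config servers and refreshes it after migrations; that bookkeeping is omitted here.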
Understanding and tuning WiredTiger, the new high performance database engine... (Ontico)
MongoDB 3.0 introduced the concept of pluggable storage engines. The new engine, known as WiredTiger, introduces document-level MVCC locking, compression, and a choice between B-tree and LSM indexes. In this talk you will learn about the storage engine architecture, specifically WiredTiger, and how to tune and monitor it for best performance.
This document discusses MongoDB sharding. It explains that sharding allows scaling a MongoDB deployment across multiple servers. Key aspects covered include the sharding architecture with config servers, shards, and mongos routers. It also discusses concepts like shard keys, chunks, and how the balancer migrates chunks for even data distribution. The document provides an example demo of setting up a sharded cluster and checking the sharding configuration and status.
MongoDB was designed for humongous amounts of data, with the ability to scale horizontally via sharding. In this session, we’ll look at MongoDB’s approach to partitioning data, and the architecture of a sharded system. We’ll walk you through configuration of a sharded system, and look at how data is balanced across servers and requests are routed.
A New MongoDB Sharding Architecture for Higher Availability and Better Resour... (leifwalsh)
Most modern databases concern themselves with their ability to scale a workload beyond the power of one machine. But maintaining a database across multiple machines is inherently more complex than running it on a single machine. As soon as scaling out is required, a host of new problems appears, such as index suitability and load balancing.
Write optimized data structures are well-suited to a sharding architecture that delivers higher efficiency than traditional sharding architectures. This talk describes a new sharding architecture for MongoDB applications that can be achieved with write optimized storage like TokuMX's Fractal Tree indexes.
Back to Basics Spanish 4: Introduction to Sharding (MongoDB)
How MongoDB scales write performance and handles large data volumes
How to build a basic sharded cluster
How to choose a shard key
Are you in the process of evaluating or migrating to MongoDB? We will cover key aspects of migrating to MongoDB from a RDBMS, including Schema design, Indexing strategies, Data migration approaches as your implementation reaches various SDLC stages, Achieving operational agility through MongoDB Management Services (MMS).
This document discusses how to monitor and maintain a good shard key in MongoDB after initially setting one up. It provides several methods for counting chunks on shards, tracking chunk migration operations over time, and estimating chunk sizes. The author also introduces a ChunkHunter script that can help automate copying chunk metadata and scanning chunk ranges to generate summary statistics.
1) The document discusses sharding time series sensor data from 16,000 traffic sensors across the US to support a nationwide traffic monitoring application.
2) It models the read, write and storage patterns and determines that a sharded cluster is needed to store over 500GB of yearly data that will grow significantly over time.
3) It recommends using a compound shard key of {linkID, date} to distribute writes evenly while enabling targeted queries, and storing summary data in a separate replica set for performance.
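The effect of the recommended compound key can be sketched with a naive partitioner: the chunk a document lands in is derived from the leading field(s) of the shard key (modulo arithmetic is used here as a crude stand-in for range partitioning). Sharding on date alone sends all simultaneous inserts to one hot chunk; leading with linkID spreads them. The sensor documents and chunk count below are hypothetical.

```python
from collections import Counter

def chunk_by_date(doc, num_chunks=4):
    # Leading field is the timestamp: same date -> same chunk (a hotspot).
    return doc["date"] % num_chunks

def chunk_by_link_then_date(doc, num_chunks=4):
    # Leading field is linkID: different sensors -> different chunks.
    return doc["linkID"] % num_chunks

# 16 sensors all reporting at the same moment.
docs = [{"linkID": i, "date": 20240101} for i in range(16)]

print(Counter(chunk_by_date(d) for d in docs))            # all 16 in one chunk
print(Counter(chunk_by_link_then_date(d) for d in docs))  # spread across chunks
```

Targeted queries still work because linkID is the leading field, so a query for one sensor's readings touches only that sensor's chunks.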
As we increasingly build applications to reach global audiences, the scalability and availability of your database across geographic regions becomes a critical consideration in systems selection and design.
Sharding allows you to distribute load across multiple servers and keep your data balanced across those servers. This session will review MongoDB’s sharding support, including an architectural overview, design principles, and automation.
This document provides an overview of MongoDB sharding, including how it partitions and distributes data across shards, maintains balanced clusters, and routes queries. The key aspects covered are MongoDB's approach to automatic sharding with minimal configuration needed, the sharding architecture involving config servers, mongos routers and shards, and considerations for choosing an appropriate shard key like cardinality and query patterns.
Sharding in MongoDB allows for scaling of data and queries across multiple servers. When determining the number of shards needed, key factors to consider include total storage requirements, latency needs, and throughput requirements. These are used to calculate the necessary disk capacity, disk throughput, and RAM across shards. Different types of sharding include range, tag-aware, and hashed, with range being best for query isolation. Choosing a high cardinality shard key that matches common queries is important for performance and scalability.
Webinar Back to Basics 3 - Introduzione ai Replica SetMongoDB
A replica set in MongoDB is a group of processes that maintain copies of the data on different database servers. Replica sets ensure redundancy and high availability and are the basis of all production MongoDB deployments.
Basic Sharding in MongoDB, presented by Shaun Verch (MongoDB)
This document provides an introduction to MongoDB sharding. It discusses how sharding allows scaling of data and MongoDB's approach to sharding including architecture, configuration, and mechanics. Key points include how sharding partitions data and distributes it across multiple servers, the role of config servers, mongos routers, and shards, and considerations for choosing a shard key to effectively distribute data and queries.
Webinar: Architecting Secure and Compliant Applications with MongoDB (MongoDB)
High-profile security breaches have become embarrassingly common, but ultimately avoidable. Now more than ever, database security is a critical component of any production application. In this talk you'll learn to secure your deployment in accordance with best practices and compliance regulations. We'll explore the MongoDB Enterprise features which ensure HIPAA and PCI compliance, and protect you against attack, data exposure and a damaged reputation.
The document provides an overview of MongoDB sharding, including:
- Sharding allows horizontal scaling of data by partitioning a database across multiple servers or shards.
- The MongoDB sharding architecture consists of shards to hold data, config servers to store metadata, and mongos processes to route requests.
- Data is partitioned into chunks based on a shard key and chunks can move between shards as the data distribution changes.
MongoDB San Francisco 2013: Basic Sharding in MongoDB presented by Brandon Bl... (MongoDB)
Sharding allows you to distribute load across multiple servers and keep your data balanced across those servers. This session will review MongoDB’s sharding support, including an architectural overview, design principles, and automation.
Determining the root cause of performance issues is a critical task for Operations. In this webinar, we'll show you the tools and techniques for diagnosing and tuning the performance of your MongoDB deployment. Whether you're running into problems or just want to optimize your performance, these skills will be useful.
Redundancy and high availability are the basis for all production deployments. With MongoDB this can be achieved by deploying a replica set. In these slides we explore how replication works in MongoDB, why you should use replication, and what its features are, and go over different deployment use cases. At the end we compare some features with MySQL replication and look at the differences between the two.
MongoDB World 2016: From the Polls to the Trolls: Seeing What the World Think... (MongoDB)
YouGov uses MongoDB to store semi-structured survey response data across a globally distributed sharded cluster. They implement tag-aware sharding to partition data by region and leverage migration managers to update schemas across versions. This allows them to provide low-latency reads, scale throughput, and support dynamic surveys worldwide through MongoDB-as-a-Service.
Putting the Go in MongoDB: How We Rebuilt The MongoDB Tools in Go (MongoDB)
With the release of MongoDB 3.0, the tools (mongodump, mongoimport, mongotop, etc.) have been completely re-designed and re-written in Go to improve maintainability and performance. In this session, we'll give an architectural overview of the tools, describe how we used Go's native capabilities to improve their parallelism, and take a deep technical dive into their internals. We'll discuss performance, usability and integration improvements and share advanced techniques for power users. With a better understanding of how the tools work, you should feel comfortable effectively using and contributing to the tools.
In this webinar, we will be covering general best practices for running MongoDB on AWS.
Topics will range from instance selection to storage selection and service distribution to ensure service availability. We will also look at any specific best practices related to using WiredTiger. We will then shift gears and explore recommended strategies for managing your MongoDB instance on AWS.
This session also includes a live Q&A portion during which you are encouraged to ask questions of our team.
Back to Basics Webinar 3: Introduction to Replica Sets (MongoDB)
This document provides an introduction to MongoDB replica sets, which allow for data redundancy and high availability. It discusses how replica sets work, including the replica set life cycle and how applications should handle writes and queries when using a replica set. Specifically, it explains that the MongoDB driver is responsible for server discovery and monitoring, retry logic, and handling topology changes in a replica set to provide a consistent view of the data to applications.
- MongoDB allows for automatic sharding of data across multiple servers to improve write performance. However, scaling write performance is challenging due to the way B-tree indexes handle random inserts.
- To improve write performance, one can partition data by time or use a hash shard key. However, these have limitations as the data grows large. The best approach is to use a low-cardinality hash prefix combined with a sequential part for the shard key.
- Proper choice of shard key is crucial for scaling MongoDB's write performance as data size increases. Linear scalability is difficult to achieve and alternative databases may be better if extremely high write throughput is required.
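The "low-cardinality hash prefix plus sequential part" key suggested above can be sketched directly: a small hash prefix spreads inserts across a fixed number of insertion points, while the sequential suffix keeps each point's inserts append-only for the B-tree. The prefix width of 8 buckets and the use of MD5 here are arbitrary illustrative assumptions.

```python
import hashlib

def shard_key(doc_id, seq, prefix_buckets=8):
    """Compound key: (small hash prefix, monotonically increasing part)."""
    digest = hashlib.md5(str(doc_id).encode()).hexdigest()
    prefix = int(digest, 16) % prefix_buckets
    return (prefix, seq)

# 1000 inserts fan out over all 8 prefix buckets, so writes hit 8
# insertion points instead of one, while each bucket stays sequential.
keys = [shard_key(i, i) for i in range(1000)]
prefixes = {k[0] for k in keys}
print(sorted(prefixes))
```

The trade-off is that a query by doc_id must compute the prefix first, and range scans over seq alone become scatter-gather across all buckets.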
Back to Basics, webinar 4: Indicizzazione avanzata, indici testuali e geospaz... (MongoDB)
This is the fourth webinar of the Back to Basics series introducing the MongoDB database. This webinar looks at full-text index support and geospatial support.
The storage engine is responsible for managing how data is stored, both in memory and on disk. MongoDB supports multiple storage engines, as different engines perform better for specific workloads.
View this presentation to understand:
What a storage engine is
How to pick a storage engine
How to configure a storage engine and a replica set
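As a minimal sketch of such a configuration, a mongod config file can select the storage engine and name a replica set. The `storage.engine` and `replication.replSetName` options are real mongod settings; the path and replica set name below are illustrative assumptions.

```yaml
# mongod.conf: minimal sketch selecting WiredTiger and joining a replica set
storage:
  dbPath: /var/lib/mongodb      # data directory (assumed path)
  engine: wiredTiger            # storage engine selection
replication:
  replSetName: rs0              # replica set name (assumed)
```

Each member of the replica set runs with the same `replSetName`; the set is then initiated once from a shell connected to one member.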
Back to Basics Webinar 3: Schema Design Thinking in Documents (MongoDB)
This is the third webinar of a Back to Basics series that will introduce you to the MongoDB database. This webinar will explain the architecture of document databases.
Back to Basics Webinar 5: Introduction to the Aggregation Framework (MongoDB)
The document provides information about an upcoming webinar on the MongoDB aggregation framework. Key details include:
- The webinar will introduce the aggregation framework and provide an overview of its capabilities for analytics.
- Examples will use a real-world vehicle testing dataset to demonstrate aggregation pipeline stages like $match, $project, and $group.
- Attendees will learn how the aggregation framework provides a simpler way to perform analytics compared to other tools like Spark and Hadoop.
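What the $match and $group stages compute can be sketched in pure Python; the vehicle-test documents, field names, and scores below are invented for illustration and are not the webinar's dataset.

```python
from collections import defaultdict

docs = [
    {"vehicle": "car_a", "test": "brake", "score": 80},
    {"vehicle": "car_a", "test": "brake", "score": 90},
    {"vehicle": "car_b", "test": "brake", "score": 70},
    {"vehicle": "car_b", "test": "noise", "score": 60},
]

# $match: {"test": "brake"} -- filter documents entering the pipeline.
matched = [d for d in docs if d["test"] == "brake"]

# $group: {_id: "$vehicle", avgScore: {"$avg": "$score"}} -- aggregate
# per distinct key value.
groups = defaultdict(list)
for d in matched:
    groups[d["vehicle"]].append(d["score"])
result = {vehicle: sum(s) / len(s) for vehicle, s in groups.items()}

print(result)  # {'car_a': 85.0, 'car_b': 70.0}
```

In a real pipeline the stages stream through the server (and can use indexes when $match comes first); this sketch only shows the semantics.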
Back to Basics Webinar 2: Your First MongoDB Application (MongoDB)
The document provides instructions for installing and using MongoDB to build a simple blogging application. It demonstrates how to install MongoDB, connect to it using the mongo shell, insert and query sample data like users and blog posts, update documents to add comments, and more. The goal is to illustrate how to model and interact with data in MongoDB for a basic blogging use case.
Back to Basics Webinar 1: Introduction to NoSQL (MongoDB)
This is the first webinar of a Back to Basics series that will introduce you to the MongoDB database, what it is, why you would use it, and what you would use it for.
The document is a presentation on MongoDB for developers by Ciro Donato Caiazzo. It introduces MongoDB as a non-relational database for storing JSON documents, with a focus on speed, performance, flexibility and scalability. It explains that MongoDB stores groups of documents in collections, analogous to tables in a SQL database. It also covers how to get started with MongoDB: downloading, installing, and running the mongod process, then using the mongo shell to perform operations like insert, find, update and remove on databases and collections.
Back to Basics Webinar 4: Advanced Indexing, Text and Geospatial Indexes (MongoDB)
This is the fourth webinar of a Back to Basics series that will introduce you to the MongoDB database. This webinar will introduce you to advanced indexing, including text and geospatial indexes.
MongoDB Schema Design (Event: An Evening with MongoDB Houston 3/11/15) (MongoDB)
The document discusses different data modeling approaches for structuring data in MongoDB, including embedding data versus referencing data in collections. It provides examples of modeling one-to-one, one-to-many, and many-to-many relationships between entities using embedding and referencing. The document recommends different approaches depending on the use case and prioritizes flexibility, performance, and optimal data representation.
This document provides guidance on data modeling for MongoDB databases. It discusses two approaches to representing relationships between documents: embedding related data within documents, and linking documents through references. The document recommends considering both the application's usage patterns and data structure when designing a data model. It provides an overview of key data modeling concepts and factors to balance, and directs readers to additional sections that include data modeling examples and reference documentation.
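The embedding-versus-referencing choice discussed above can be shown with plain documents; a blog post and its comments serve as the one-to-many example, and all field names are illustrative.

```python
# Embedding: comments live inside the post document. One read fetches all,
# and the post plus its comments can be updated atomically.
post_embedded = {
    "_id": 1,
    "title": "Intro to Sharding",
    "comments": [
        {"author": "ann", "text": "Great post"},
        {"author": "bob", "text": "Thanks"},
    ],
}

# Referencing: comments are separate documents pointing back at the post.
# The collections stay small per-document and comments can grow unbounded.
post_ref = {"_id": 1, "title": "Intro to Sharding"}
comments = [
    {"post_id": 1, "author": "ann", "text": "Great post"},
    {"post_id": 1, "author": "bob", "text": "Thanks"},
]

# Reassembling the referenced form takes a second lookup (an application-
# side "join"):
joined = {**post_ref, "comments": [c for c in comments if c["post_id"] == 1]}
assert len(joined["comments"]) == len(post_embedded["comments"])
```

As the document says, which form wins depends on usage patterns: embed what is read together and bounded in size, reference what grows without limit or is shared.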
This tutorial will introduce the features of MongoDB by building a simple location-based application using MongoDB. The tutorial will cover the basics of MongoDB’s document model, query language, map-reduce framework and deployment architecture.
The tutorial will be divided into 5 sections:
Data modeling with MongoDB: documents, collections and databases
Querying your data: simple queries, geospatial queries, and text-searching
Writes and updates: using MongoDB’s atomic update modifiers
Trending and analytics: Using mapreduce and MongoDB’s aggregation framework
Deploying the sample application
Besides the knowledge to start building their own applications with MongoDB, attendees will finish the session with a working application they can use to check into locations around Portland from any HTML5-enabled phone!
TUTORIAL PREREQUISITES
Each attendee should have a running version of MongoDB. Preferably the latest unstable release 2.1.x, but any install after 2.0 should be fine. You can download MongoDB at http://www.mongodb.org/downloads.
Instructions for installing MongoDB are at http://docs.mongodb.org/manual/installation/.
Additionally, we will be building an app in Ruby. Ruby 1.9.3+ is required for this; the current latest version of Ruby is 1.9.3-p194.
For Windows, download http://rubyinstaller.org/
For OS X, download http://unfiniti.com/software/mac/jewelrybox/
For Linux, most users should know how to install Ruby for their own distributions.
We will be using the following gems, and they MUST BE installed ahead of time so you can be ahead of the game and safe in the event that the Internet isn't accommodating.
bson (1.6.4)
bson_ext (1.6.4)
haml (3.1.4)
mongo (1.6.4)
rack (1.4.1)
rack-protection (1.2.0)
shotgun (0.9)
sinatra (1.3.2)
tilt (1.3.3)
Prior Ruby experience isn't required for this. We will NOT be using Rails for this app.
Webinar: Getting Started with MongoDB - Back to Basics (MongoDB)
Part one: an introduction to MongoDB. Learn how easy it is to start building applications with MongoDB. This session covers key features and functionality of MongoDB and sets out the course of building an application.
Webinarserie: Einführung in MongoDB: “Back to Basics” - Teil 3 - Interaktion ... (MongoDB)
The document discusses MongoDB basics including:
1) Inserting and querying documents using operators like $lt and $in
2) Returning documents through cursors and using projections to select attributes
3) Updating documents using operators like $push, $inc, and $addToSet along with bucketing and pre-aggregated reports
It also covers durability options like acknowledged writes and waiting for replication.
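The semantics of the query operators mentioned above ($lt, $in) can be shown with a tiny pure-Python evaluator; this is a sketch of what the server's matcher computes, not driver code, and it handles only these two operators plus plain equality.

```python
def matches(doc, query):
    """Evaluate a minimal subset of MongoDB query syntax against a dict."""
    for field, cond in query.items():
        value = doc.get(field)
        if isinstance(cond, dict):
            for op, arg in cond.items():
                if op == "$lt" and not (value is not None and value < arg):
                    return False
                if op == "$in" and value not in arg:
                    return False
        elif value != cond:
            # Bare value means equality match.
            return False
    return True

doc = {"name": "ada", "age": 30}
print(matches(doc, {"age": {"$lt": 40}}))               # True
print(matches(doc, {"name": {"$in": ["ada", "bob"]}}))  # True
print(matches(doc, {"age": {"$lt": 20}}))               # False
```

Update operators like $push, $inc and $addToSet work analogously on the server side, mutating matched documents in place rather than filtering them.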
Webinar: Back to Basics: Thinking in Documents (MongoDB)
New applications, users and inputs demand new types of data: unstructured, semi-structured and polymorphic. Adopting MongoDB means adapting to a new, document-based data model.
While most developers have internalized the rules of thumb for designing schemas for relational databases, these rules don't apply to MongoDB. Documents can represent rich data structures, providing lots of viable alternatives to the standard, normalized, relational model. In addition, MongoDB has several unique features, such as atomic updates and indexed array keys, that greatly influence the kinds of schemas that make sense.
In this session, Buzz Moschetti explores how you can take advantage of MongoDB's document model to build modern applications.
This document summarizes a MongoDB webinar on advanced schema design patterns. It introduces common schema design patterns like attribute, subset, computed, and approximation patterns. It discusses how to use these patterns to address issues like large documents with many fields, working sets that don't fit in RAM, high CPU usage from repeated calculations, and changing schemas over time. The webinar provides examples of each pattern and encourages learning a common vocabulary for designing MongoDB schemas by applying these reusable patterns.
Back to Basics: My First MongoDB Application (MongoDB)
- The document is a slide deck for a webinar on building a basic blogging application using MongoDB.
- It covers MongoDB concepts like documents, collections and indexes. It then demonstrates how to install MongoDB, connect to it using the mongo shell, and insert documents.
- The slide deck proceeds to model a basic blogging application using MongoDB, creating collections for users, articles and comments. It shows how to query, update, and import large amounts of seeded data.
Developing with the Modern App Stack: MEAN and MERN (with Angular2 and ReactJS) (MongoDB)
The document discusses the MEAN and MERN application stacks. MEAN uses MongoDB, Express, AngularJS, and Node.js while MERN uses MongoDB, Express, React, and Node.js. It describes the components of each stack including MongoDB for data storage, Node.js for the application backend, and either AngularJS or React for the frontend. It also discusses how these stacks enable building applications with universal JavaScript and REST APIs for rapid development and scalability.
MongoDB: Advantages of an Open Source NoSQL Database (FITC)
OVERVIEW
The presentation will give an overview of the MongoDB NoSQL database, its history, and its current status as the leading NoSQL database. It will focus on how NoSQL, and in particular MongoDB, benefits developers building big data or web-scale applications. It will also discuss the community around MongoDB, compare it to commercial alternatives, and provide an introduction to installing, configuring and maintaining standalone instances and replica sets.
Presented live at FITC's Spotlight:MEAN Stack on March 28th, 2014.
More info at FITC.ca
Silicon Valley Code Camp 2015 - Advanced MongoDB - The Sequel (Daniel Coupal)
MongoDB presentation from Silicon Valley Code Camp 2015.
A walkthrough of developing, deploying, and operating a MongoDB application while avoiding the most common pitfalls.
Webinar: Operations Series for Your Application - Session 6 - Installar... (MongoDB)
Make 2014 the year you learn something new. Join our 8-part webinar series and discover how easy it is to develop applications with MongoDB. The sessions, led by our Solutions Architects, will teach you the basics from A to Z and share best practices and tricks so you can get started with confidence. The sessions will be entirely in Italian.
At this point we will have built the application; now we need to put it into production. We will illustrate the various architectures for high availability and horizontal scalability.
The series includes the following sessions:
10 June 2014: Operations Series for Your Application - Session 7 - Backup and Disaster Recovery:
This webinar will cover the various backup and restore options. Learn what you should do in the event of a failure and how to perform backup and recovery operations on the data in your applications.
17 June 2014: Operations Series for Your Application - Session 8 - Monitoring and Performance Tuning:
The final webinar of the series will discuss which metrics are important and how to manage and monitor your application to improve performance.
Massimo Brignoli: About the speaker
Massimo is 44 years old and lives in Milan. He has worked in IT for 23 years, for transport companies, web companies, and database companies. In 1998 he joined a small startup as a developer, helping it become the most important Italian web portal, which was sold three years later for 700 million dollars. He then joined MySQL in pre-sales, travelling the world and helping telecom companies adopt MySQL Cluster. In 2012 he joined SkySQL as a product manager, overseeing the integration with MariaDB, and later decided to join MongoDB to pursue new professional challenges. He is currently a Senior Solutions Architect.
This document provides an overview and agenda for a webinar on deploying MongoDB applications in production. It discusses replica sets, including their lifecycle and how they provide high availability and redundancy. It covers topics like write concern, tagging, and read preference modes when using replica sets. The document also discusses scaling MongoDB by sharding the database for large datasets that exceed a single server's resources. It provides examples of how data is partitioned and distributed across shards in a sharded cluster.
The document discusses best practices for deploying MongoDB including sizing hardware with sufficient memory, CPU and I/O; using an appropriate operating system and filesystem; installing and upgrading MongoDB; ensuring durability with replication and backups; implementing security, monitoring performance with tools, and considerations for deploying on Amazon EC2.
The document provides guidance on deploying MongoDB in production environments. It discusses sizing hardware requirements for memory, CPU, and disk I/O. It also covers installing and upgrading MongoDB, considerations for cloud platforms like EC2, security, backups, durability, scaling out, and monitoring. The focus is on performance optimization and ensuring data integrity and high availability.
- MongoDB is an open-source document database that provides high performance, a rich query language, high availability through clustering, and horizontal scalability through sharding. It stores data in BSON format and supports indexes, backups, and replication.
- MongoDB is best for operational applications using unstructured or semi-structured data that require large scalability and multi-datacenter support. It is not recommended for applications with complex calculations, financial data, or those that scan large data subsets.
- The next session will provide a security and replication overview and include demonstrations of installation, document creation, queries, indexes, backups, and replication and sharding if possible.
- The document provides guidance on deploying MongoDB including sizing hardware, installing and upgrading MongoDB, configuration considerations for EC2, security, backups, durability, scaling out, and monitoring. Key aspects discussed are profiling and indexing queries for performance, allocating sufficient memory, CPU and disk I/O, using 64-bit OSes, ext4/XFS filesystems, upgrading to even version numbers, and replicating for high availability and backups.
This document discusses how to tune Linux for optimal MongoDB performance. Key points include setting ulimits to allow for many processes and open files, disabling transparent huge pages, using the deadline IO scheduler, setting the dirty ratio and swappiness low, and ensuring consistent clocks with NTP. Monitoring tools like Percona PMM or Prometheus with Grafana dashboards can help analyze MongoDB and system metrics.
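As a hedged sketch of the tuning steps named above (ulimits, transparent huge pages, the deadline I/O scheduler, dirty ratio, swappiness, and NTP), the commands below show one common way to apply them on a typical Linux host. The device name, the exact values, and the time-sync daemon are assumptions to adapt to your distribution and workload, and most lines require root.

```shell
# Illustrative only; verify paths and values for your distribution.

# Raise open-file and process limits for the mongod user
ulimit -n 64000
ulimit -u 64000

# Disable transparent huge pages (hurts random-access database workloads)
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag

# Prefer the deadline I/O scheduler for the data volume (sdb is an example)
echo deadline > /sys/block/sdb/queue/scheduler

# Flush dirty pages early and swap reluctantly
sysctl -w vm.dirty_ratio=15 vm.dirty_background_ratio=5 vm.swappiness=1

# Keep clocks consistent across the cluster (ntpd or chronyd, per distro)
systemctl enable --now ntpd
```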
As one of our primary data stores, we utilize MongoDB heavily. Early last year our DevOps lead, Chris Merz, submitted some of our use cases to 10gen (http://www.10gen.com/events) as fodder for a presentation at the MongoDB conference in Boulder. The presentation went well enough at the Boulder conference that 10gen asked him to give it again in San Francisco, Seattle, and once more in Boulder.
Hopefully there are some nuggets in this deck that can help you in your quest to dominate MongoDB.
MongoDB 3.0 introduces a new pluggable storage engine API and a new storage engine called WiredTiger. The engineering team behind WiredTiger has a long and distinguished career, having architected and built Berkeley DB, now the world's most widely used embedded database. In this talk we will describe our original design goals for WiredTiger, including considerations we made for heavily threaded hardware, large on-chip caches, and SSD storage. We'll also look at some of the latch-free and non-blocking algorithms we've implemented, as well as other techniques that improve scaling, overall throughput and latency. Finally, we'll take a look at some of the features we hope to incorporate into WiredTiger and MongoDB in the future.
MongoDB World 2015 - A Technical Introduction to WiredTiger (WiredTiger)
This document provides guidance on considerations for deploying MongoDB in production environments. It covers sizing hardware requirements for memory, CPU and I/O; installing and configuring MongoDB; using MongoDB on Amazon EC2; implementing security, backups, and durability; upgrading MongoDB versions; scaling deployments by sharding; and monitoring MongoDB performance.
Learn how to improve the performance of your Cognos environment. We cover hardware and server specifics, architecture setup, dispatcher tuning, report specific tuning including the Interactive Performance Assistant and more. See the recording and download this deck: https://senturus.com/resources/cognos-analytics-performance-tuning/
Senturus offers a full spectrum of services for business analytics. Our Knowledge Center has hundreds of free live and recorded webinars, blog posts, demos and unbiased product reviews available on our website at: https://senturus.com/resources/
Back to Basics: Build Something Big With MongoDB (MongoDB)
1. Replica sets allow for high availability and redundancy by creating copies of data across multiple nodes. The replica set lifecycle involves creation, initialization, handling failures and failovers, and recovery from failures.
2. When developing with replica sets, developers must consider consistency models such as strong consistency, delayed consistency, and write concerns to determine how and when data is written and acknowledged. Tagging and read preferences also allow control over where data is read from and written to.
3. Sharding provides horizontal scalability by partitioning data across multiple machines or replica sets. The data is split into chunks based on a user-defined shard key and distributed across shards. A config server stores metadata about chunk mappings and locations.
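The routing described in point 3 can be modeled in a few lines: the config metadata maps chunk boundaries on the shard key to shards, and the router picks the chunk whose range contains the key. This is a toy Python sketch with invented chunk boundaries and shard names, not MongoDB's actual implementation.

```python
import bisect

# Chunks are [lower, upper) ranges on the shard key, e.g. "user_id".
chunk_bounds = [0, 1000, 5000]           # lower bound of each chunk
chunk_shards = ["shard0", "shard1", "shard2"]

def route(shard_key_value):
    """Return the shard owning the chunk that contains the key."""
    i = bisect.bisect_right(chunk_bounds, shard_key_value) - 1
    return chunk_shards[max(i, 0)]

print(route(42))     # shard0
print(route(1000))   # shard1
print(route(99999))  # shard2
```

Queries that include the shard key can be sent to a single shard this way; queries without it must be scattered to all shards.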
In this webinar, we'll discuss the different ways to back up and restore your MongoDB databases in case of a disaster scenario. We'll review manual approaches as well as premium solutions - using MongoDB Management Service (MMS) for managed backup to our cloud, or using Ops Manager at your own cloud/data centers.
A follow-on from Back to Basics: An Introduction to NoSQL and MongoDB.
Covers more advanced topics:
Storage Engines
• What storage engines are and how to pick them
Aggregation Framework
• How to deploy advanced analytics processing right inside the database
The BI Connector
• How to create visualizations and dashboards from your MongoDB data
Authentication and Authorisation
• How to secure MongoDB, both on-premise and in the cloud
This document discusses MongoDB and scaling strategies when using MongoDB. It begins with an overview of MongoDB's architecture, data model, and operations. It then describes some early performance issues encountered with MongoDB including issues with durability settings, queries locking servers, and updates moving documents. The document recommends strategies for scaling such as adding more RAM, partitioning data through sharding, and monitoring replication delay closely for disaster recovery.
Evolution of MongoDB Replicaset and Its Best Practices (Mydbops)
There are several exciting and long-awaited features released in MongoDB 4.0. This talk focuses on the prime features, the kinds of problems they solve, and best practices for deploying replica sets.
MongoDB SoCal 2020: Migrate Anything* to MongoDB Atlas (MongoDB)
This presentation discusses migrating data from other data stores to MongoDB Atlas. It begins by explaining why MongoDB and Atlas are good choices for data management. Several preparation steps are covered, including sizing the target Atlas cluster, increasing the source oplog, and testing connectivity. Live migration, mongomirror, and dump/restore options are presented for migrating between replica sets or sharded clusters. Post-migration steps like monitoring and backups are also discussed. Finally, migrating from other data stores like AWS DocumentDB, Azure CosmosDB, DynamoDB, and relational databases is briefly covered.
MongoDB SoCal 2020: Go on a Data Safari with MongoDB Charts! (MongoDB)
These days, everyone is expected to be a data analyst. But with so much data available, how can you make sense of it and be sure you're making the best decisions? One great approach is to use data visualizations. In this session, we take a complex dataset and show how the breadth of capabilities in MongoDB Charts can help you turn bits and bytes into insights.
MongoDB SoCal 2020: Using MongoDB Services in Kubernetes: Any Platform, Devel... (MongoDB)
MongoDB Kubernetes operator and MongoDB Open Service Broker are ready for production operations. Learn about how MongoDB can be used with the most popular container orchestration platform, Kubernetes, and bring self-service, persistent storage to your containerized applications. A demo will show you how easy it is to enable MongoDB clusters as an External Service using the Open Service Broker API for MongoDB
MongoDB SoCal 2020: A Complete Methodology of Data Modeling for MongoDB (MongoDB)
Are you new to schema design for MongoDB, or are you looking for a more complete or agile process than what you are following currently? In this talk, we will guide you through the phases of a flexible methodology that you can apply to projects ranging from small to large with very demanding requirements.
MongoDB SoCal 2020: From Pharmacist to Analyst: Leveraging MongoDB for Real-T... (MongoDB)
Humana, like many companies, is tackling the challenge of creating real-time insights from data that is diverse and rapidly changing. This is our journey of how we used MongoDB to combine traditional batch approaches with streaming technologies to provide continuous alerting capabilities from real-time data streams.
MongoDB SoCal 2020: Best Practices for Working with IoT and Time-series Data (MongoDB)
Time series data is increasingly at the heart of modern applications - think IoT, stock trading, clickstreams, social media, and more. With the move from batch to real time systems, the efficient capture and analysis of time series data can enable organizations to better detect and respond to events ahead of their competitors or to improve operational efficiency to reduce cost and risk. Working with time series data is often different from regular application data, and there are best practices you should observe.
This talk covers:
Common components of an IoT solution
The challenges involved with managing time-series data in IoT applications
Different schema designs, and how these affect memory and disk utilization – two critical factors in application performance.
How to query, analyze and present IoT time-series data using MongoDB Compass and MongoDB Charts
At the end of the session, you will have a better understanding of key best practices in managing IoT time-series data with MongoDB.
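One of the schema designs alluded to above is bucketing: instead of one document per sensor reading, group readings into one document per sensor per hour, which reduces index entries and per-document overhead. This is a hedged, in-memory Python sketch of the idea; the field names and hourly granularity are illustrative choices, not the talk's exact schema.

```python
from collections import defaultdict
from datetime import datetime

def bucket_key(sensor_id, ts):
    """Bucket by sensor and by the hour the reading falls in."""
    return (sensor_id, ts.replace(minute=0, second=0, microsecond=0))

buckets = defaultdict(lambda: {"readings": []})

def insert_reading(sensor_id, ts, value):
    doc = buckets[bucket_key(sensor_id, ts)]
    doc["readings"].append({"ts": ts, "value": value})

insert_reading("s1", datetime(2020, 5, 1, 10, 5), 21.0)
insert_reading("s1", datetime(2020, 5, 1, 10, 40), 21.4)
insert_reading("s1", datetime(2020, 5, 1, 11, 2), 20.9)

print(len(buckets))  # 2 hourly buckets instead of 3 documents
```

The trade-off: documents grow as readings arrive, so bucket size must be chosen so documents stay well under the document size limit and fit the working set.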
MongoDB .local San Francisco 2020: Powering the new age data demands [Infosys] (MongoDB)
Our clients have unique use cases and data patterns that mandate the choice of a particular strategy. To implement these strategies, it is mandatory that we unlearn a lot of relational concepts while designing and rapidly developing efficient applications on NoSQL. In this session, we will talk about some of our client use cases, the strategies we have adopted, and the features of MongoDB that assisted in implementing these strategies.
MongoDB .local San Francisco 2020: Using Client Side Encryption in MongoDB 4.2 (MongoDB)
Encryption is not a new concept to MongoDB. Encryption may occur in-transit (with TLS) and at-rest (with the encrypted storage engine). But MongoDB 4.2 introduces support for Client Side Encryption, ensuring the most sensitive data is encrypted before ever leaving the client application. Even full access to your MongoDB servers is not enough to decrypt this data. And better yet, Client Side Encryption can be enabled at the "flick of a switch".
This session covers using Client Side Encryption in your applications. This includes the necessary setup, how to encrypt data without sacrificing queryability, and what trade-offs to expect.
MongoDB .local San Francisco 2020: Using MongoDB Services in Kubernetes: any ... (MongoDB)
The MongoDB Kubernetes operator is ready for prime time. Learn how MongoDB can be used with the most popular orchestration platform, Kubernetes, to bring self-service, persistent storage to your containerized applications.
MongoDB .local San Francisco 2020: Go on a Data Safari with MongoDB Charts! (MongoDB)
These days, everyone is expected to be a data analyst. But with so much data available, how can you make sense of it and be sure you're making the best decisions? One great approach is to use data visualizations. In this session, we take a complex dataset and show how the breadth of capabilities in MongoDB Charts can help you turn bits and bytes into insights.
MongoDB .local San Francisco 2020: From SQL to NoSQL -- Changing Your Mindset (MongoDB)
When you need to model data, is your first instinct to start breaking it down into rows and columns? Mine used to be too. When you want to develop apps in a modern, agile way, NoSQL databases can be the best option. Come to this talk to learn how to take advantage of all that NoSQL databases have to offer and discover the benefits of changing your mindset from the legacy, tabular way of modeling data. We’ll compare and contrast the terms and concepts in SQL databases and MongoDB, explain the benefits of using MongoDB compared to SQL databases, and walk through data modeling basics so you feel confident as you begin using MongoDB.
MongoDB .local San Francisco 2020: MongoDB Atlas Jumpstart (MongoDB)
Join this talk and test session with a MongoDB Developer Advocate where you'll go over the setup, configuration, and deployment of an Atlas environment. Create a service that you can take back in a production-ready state and prepare to unleash your inner genius.
MongoDB .local San Francisco 2020: Tips and Tricks++ for Querying and Indexin... (MongoDB)
The document discusses guidelines for ordering fields in compound indexes to optimize query performance. It recommends the E-S-R approach: placing equality fields first, followed by sort fields, and range fields last. This allows indexes to leverage equality matches, provide non-blocking sorts, and minimize scanning. Examples show how indexes ordered by these guidelines can support queries more efficiently by narrowing the search bounds.
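The E-S-R guideline above can be sketched as a tiny helper: classify each field a query uses as equality, sort, or range, and emit the compound-index key in that order. A hypothetical Python illustration with made-up field names:

```python
# E-S-R: equality fields first, then sort fields, then range fields.

def esr_index(equality, sort, range_):
    """Order compound-index fields per the E-S-R guideline."""
    return [(f, 1) for f in list(equality) + list(sort) + list(range_)]

# Query: {status: "A", qty: {$gt: 10}}, sorted by order_date
idx = esr_index(equality=["status"], sort=["order_date"], range_=["qty"])
print(idx)  # [('status', 1), ('order_date', 1), ('qty', 1)]
```

Equality fields narrow the index bounds the most, the sort fields then deliver results in order without an in-memory sort, and the range field last keeps the scanned region small.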
MongoDB .local San Francisco 2020: Aggregation Pipeline Power++ (MongoDB)
Aggregation pipeline has been able to power your analysis of data since version 2.2. In 4.2 we added more power and now you can use it for more powerful queries, updates, and outputting your data to existing collections. Come hear how you can do everything with the pipeline, including single-view, ETL, data roll-ups and materialized views.
MongoDB .local San Francisco 2020: A Complete Methodology of Data Modeling fo... (MongoDB)
The document describes a methodology for data modeling with MongoDB. It begins by recognizing the differences between document and tabular databases, then outlines a three step methodology: 1) describe the workload by listing queries, 2) identify and model relationships between entities, and 3) apply relevant patterns when modeling for MongoDB. The document uses examples around modeling a coffee shop franchise to illustrate modeling approaches and techniques.
MongoDB .local San Francisco 2020: MongoDB Atlas Data Lake Technical Deep Dive (MongoDB)
MongoDB Atlas Data Lake is a new service offered by MongoDB Atlas. Many organizations store long term, archival data in cost-effective storage like S3, GCP, and Azure Blobs. However, many of them do not have robust systems or tools to effectively utilize large amounts of data to inform decision making. MongoDB Atlas Data Lake is a service allowing organizations to analyze their long-term data to discover a wealth of information about their business.
This session will take a deep dive into the features that are currently available in MongoDB Atlas Data Lake and how they are implemented. In addition, we'll discuss future plans and opportunities and offer ample Q&A time with the engineers on the project.
MongoDB .local San Francisco 2020: Developing Alexa Skills with MongoDB & Golang (MongoDB)
Virtual assistants are becoming the new norm when it comes to daily life, with Amazon’s Alexa being the leader in the space. As a developer, not only do you need to make web and mobile compliant applications, but you need to be able to support virtual assistants like Alexa. However, the process isn’t quite the same between the platforms.
How do you handle requests? Where do you store your data and work with it to create meaningful responses with little delay? How much of your code needs to change between platforms?
In this session we’ll see how to design and develop applications known as Skills for Amazon Alexa powered devices using the Go programming language and MongoDB.
MongoDB .local Paris 2020: Realm: the secret ingredient for better app... (MongoDB)
to Core Data, appreciated by hundreds of thousands of developers. Learn what makes Realm special and how it can be used to build better applications faster.
MongoDB .local Paris 2020: Upply @MongoDB: When Machine Learning... (MongoDB)
It has never been easier to order online and get delivery in under 48 hours, very often for free. This ease of use hides a complex market worth more than $8,000 billion.
Data is well known in the supply chain world (routes, information about goods, customs, ...), but the value of this operational data remains largely untapped. By combining industry expertise and data science, Upply is redefining the fundamentals of the supply chain, enabling every player in the market to overcome its volatility and inefficiency.
Andrew Marnell: Transforming Business Strategy Through Data-Driven Insights (Andrew Marnell)
With expertise in data architecture, performance tracking, and revenue forecasting, Andrew Marnell plays a vital role in aligning business strategies with data insights. Andrew Marnell’s ability to lead cross-functional teams ensures businesses achieve sustainable growth and operational excellence.
Big Data Analytics Quick Research Guide by Arthur Morgan (Arthur Morgan)
This is a Quick Research Guide (QRG).
QRGs include the following:
- A brief, high-level overview of the QRG topic.
- A milestone timeline for the QRG topic.
- Links to various free online resource materials to provide a deeper dive into the QRG topic.
- Conclusion and a recommendation for at least two books available in the SJPL system on the QRG topic.
QRGs planned for the series:
- Artificial Intelligence QRG
- Quantum Computing QRG
- Big Data Analytics QRG
- Spacecraft Guidance, Navigation & Control QRG (coming 2026)
- UK Home Computing & The Birth of ARM QRG (coming 2027)
Any questions or comments?
- Please contact Arthur Morgan at [email protected].
100% human made.
Linux Support for SMARC: How Toradex Empowers Embedded Developers (Toradex)
Toradex brings robust Linux support to SMARC (Smart Mobility Architecture), ensuring high performance and long-term reliability for embedded applications. Here’s how:
• Optimized Torizon OS & Yocto Support – Toradex provides Torizon OS, a Debian-based easy-to-use platform, and Yocto BSPs for customized Linux images on SMARC modules.
• Seamless Integration with i.MX 8M Plus and i.MX 95 – Toradex SMARC solutions leverage NXP’s i.MX 8M Plus and i.MX 95 SoCs, delivering power efficiency and AI-ready performance.
• Secure and Reliable – With Secure Boot, over-the-air (OTA) updates, and LTS kernel support, Toradex ensures industrial-grade security and longevity.
• Containerized Workflows for AI & IoT – Support for Docker, ROS, and real-time Linux enables scalable AI, ML, and IoT applications.
• Strong Ecosystem & Developer Support – Toradex offers comprehensive documentation, developer tools, and dedicated support, accelerating time-to-market.
With Toradex’s Linux support for SMARC, developers get a scalable, secure, and high-performance solution for industrial, medical, and AI-driven applications.
Do you have a specific project or application in mind where you're considering SMARC? We can help with a free compatibility check and a quick time-to-market.
For more information: https://www.toradex.com/computer-on-modules/smarc-arm-family
Spark is a powerhouse for large datasets, but when it comes to smaller data workloads, its overhead can sometimes slow things down. What if you could achieve high performance and efficiency without the need for Spark?
At S&P Global Commodity Insights, having a complete view of global energy and commodities markets enables customers to make data-driven decisions with confidence and create long-term, sustainable value. 🌍
Explore delta-rs + CDC and how these open-source innovations power lightweight, high-performance data applications beyond Spark! 🚀
DevOpsDays Atlanta 2025 - Building 10x Development Organizations.pptx (Justin Reock)
Building 10x Organizations with Modern Productivity Metrics
10x developers may be a myth, but 10x organizations are very real, as proven by the influential study performed in the 1980s, ‘The Coding War Games.’
Right now, here in early 2025, we seem to be experiencing YAPP (Yet Another Productivity Philosophy), and that philosophy is converging on developer experience. It seems that with every new method we invent for the delivery of products, whether physical or virtual, we reinvent productivity philosophies to go alongside them.
But which of these approaches actually work? DORA? SPACE? DevEx? What should we invest in and create urgency behind today, so that we don’t find ourselves having the same discussion again in a decade?
Noah Loul Shares 5 Steps to Implement AI Agents for Maximum Business Efficien... (Noah Loul)
Artificial intelligence is changing how businesses operate. Companies are using AI agents to automate tasks, reduce time spent on repetitive work, and focus more on high-value activities. Noah Loul, an AI strategist and entrepreneur, has helped dozens of companies streamline their operations using smart automation. He believes AI agents aren't just tools—they're workers that take on repeatable tasks so your human team can focus on what matters. If you want to reduce time waste and increase output, AI agents are the next move.
The Evolution of Meme Coins A New Era for Digital Currency ppt.pdf (Abi john)
Analyze the growth of meme coins from mere online jokes to potential assets in the digital economy. Explore the community, culture, and utility that elevate them to a new era in cryptocurrency.
UiPath Community Berlin: Orchestrator API, Swagger, and Test Manager API (UiPathCommunity)
Join this UiPath Community Berlin meetup to explore the Orchestrator API, Swagger interface, and the Test Manager API. Learn how to leverage these tools to streamline automation, enhance testing, and integrate more efficiently with UiPath. Perfect for developers, testers, and automation enthusiasts!
📕 Agenda
Welcome & Introductions
Orchestrator API Overview
Exploring the Swagger Interface
Test Manager API Highlights
Streamlining Automation & Testing with APIs (Demo)
Q&A and Open Discussion
👉 Join our UiPath Community Berlin chapter: https://community.uipath.com/berlin/
This session streamed live on April 29, 2025, 18:00 CET.
Check out all our upcoming UiPath Community sessions at https://community.uipath.com/events/.
What is Model Context Protocol(MCP) - The new technology for communication bw... (Vishnu Singh Chundawat)
The MCP (Model Context Protocol) is a framework designed to manage context and interaction within complex systems. This SlideShare presentation will provide a detailed overview of the MCP Model, its applications, and how it plays a crucial role in improving communication and decision-making in distributed systems. We will explore the key concepts behind the protocol, including the importance of context, data management, and how this model enhances system adaptability and responsiveness. Ideal for software developers, system architects, and IT professionals, this presentation will offer valuable insights into how the MCP Model can streamline workflows, improve efficiency, and create more intuitive systems for a wide range of use cases.
AI and Data Privacy in 2025: Global Trends (InData Labs)
In this infographic, we explore how businesses can implement effective governance frameworks to address AI data privacy. Understanding AI data privacy is crucial for developing effective strategies that ensure compliance, safeguard customer trust, and leverage AI responsibly. Equip yourself with insights that can drive informed decision-making and position your organization for success in the future of data privacy.
This infographic contains:
- AI and data privacy: key findings
- Statistics on AI data privacy in today's world
- Tips on how to overcome data privacy challenges
- Benefits of AI data security investments
Keep up-to-date on how AI is reshaping privacy standards and what this entails for both individuals and organizations.
HCL Nomad Web – Best Practices and Managing Multiuser Environments (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-nomad-web-best-practices-and-managing-multiuser-environments/
HCL Nomad Web is heralded as the next generation of the HCL Notes client, offering numerous advantages such as eliminating the need for packaging, distribution, and installation. Nomad Web client upgrades will be installed "automatically" in the background, which significantly reduces the administrative footprint compared to traditional HCL Notes clients. However, troubleshooting issues in Nomad Web presents unique challenges compared to the Notes client.
Join Christoph and Marc as they demonstrate how to simplify the troubleshooting process in HCL Nomad Web, ensuring a smoother and more efficient user experience.
In this webinar, we will explore effective strategies for diagnosing and resolving common problems in HCL Nomad Web, including
- Accessing the console
- Locating and interpreting log files
- Accessing the data folder within the browser’s cache (using OPFS)
- Understand the difference between single- and multi-user scenarios
- Utilizing Client Clocking
HCL Nomad Web – Best Practices und Verwaltung von Multiuser-Umgebungen (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-nomad-web-best-practices-und-verwaltung-von-multiuser-umgebungen/
HCL Nomad Web is heralded as the next generation of the HCL Notes client, offering numerous advantages such as eliminating the need for packaging, distribution, and installation. Nomad Web client updates are installed "automatically" in the background, which significantly reduces the administrative overhead compared to traditional HCL Notes clients. However, troubleshooting in Nomad Web presents unique challenges compared to the Notes client.
Join Christoph and Marc as they demonstrate how the troubleshooting process in HCL Nomad Web can be simplified to ensure a smooth and efficient user experience.
In this webinar we will explore effective strategies for diagnosing and resolving common problems in HCL Nomad Web, including
- Accessing the console
- Locating and interpreting log files
- Accessing the data folder within the browser's cache (using OPFS)
- Understanding the differences between single- and multi-user scenarios
- Using the Client Clocking feature
Dev Dives: Automate and orchestrate your processes with UiPath Maestro (UiPathCommunity)
This session is designed to equip developers with the skills needed to build mission-critical, end-to-end processes that seamlessly orchestrate agents, people, and robots.
📕 Here's what you can expect:
- Modeling: Build end-to-end processes using BPMN.
- Implementing: Integrate agentic tasks, RPA, APIs, and advanced decisioning into processes.
- Operating: Control process instances with rewind, replay, pause, and stop functions.
- Monitoring: Use dashboards and embedded analytics for real-time insights into process instances.
This webinar is a must-attend for developers looking to enhance their agentic automation skills and orchestrate robust, mission-critical processes.
👨🏫 Speaker:
Andrei Vintila, Principal Product Manager @UiPath
This session streamed live on April 29, 2025, 16:00 CET.
Check out all our upcoming Dev Dives sessions at https://community.uipath.com/dev-dives-automation-developer-2025/.
Semantic Cultivators : The Critical Future Role to Enable AIartmondano
By 2026, AI agents will consume 10x more enterprise data than humans, but with none of the contextual understanding that prevents catastrophic misinterpretations.
Artificial Intelligence is providing benefits in many areas of work within the heritage sector, from image analysis, to ideas generation, and new research tools. However, it is more critical than ever for people, with analogue intelligence, to ensure the integrity and ethical use of AI. Including real people can improve the use of AI by identifying potential biases, cross-checking results, refining workflows, and providing contextual relevance to AI-driven results.
News about the impact of AI often paints a rosy picture. In practice, there are many potential pitfalls. This presentation discusses these issues and looks at the role of analogue intelligence and analogue interfaces in providing the best results to our audiences. How do we deal with factually incorrect results? How do we get content generated that better reflects the diversity of our communities? What roles are there for physical, in-person experiences in the digital world?
Complete Guide to Advanced Logistics Management Software in Riyadh.pdfSoftware Company
Explore the benefits and features of advanced logistics management software for businesses in Riyadh. This guide delves into the latest technologies, from real-time tracking and route optimization to warehouse management and inventory control, helping businesses streamline their logistics operations and reduce costs. Learn how implementing the right software solution can enhance efficiency, improve customer satisfaction, and provide a competitive edge in the growing logistics sector of Riyadh.
Quantum Computing Quick Research Guide by Arthur MorganArthur Morgan
This is a Quick Research Guide (QRG).
QRGs include the following:
- A brief, high-level overview of the QRG topic.
- A milestone timeline for the QRG topic.
- Links to various free online resource materials to provide a deeper dive into the QRG topic.
- Conclusion and a recommendation for at least two books available in the SJPL system on the QRG topic.
QRGs planned for the series:
- Artificial Intelligence QRG
- Quantum Computing QRG
- Big Data Analytics QRG
- Spacecraft Guidance, Navigation & Control QRG (coming 2026)
- UK Home Computing & The Birth of ARM QRG (coming 2027)
Any questions or comments?
- Please contact Arthur Morgan at [email protected].
100% human made.
2. Back to Basics 2016 : Webinar 6
Production Deployment
Joe Drumgoole
Director of Developer Advocacy, EMEA
@jdrumgoole
V1.1
3. 3
Recap
• Webinar 1 – Introduction to NoSQL
– NoSQL Types, MongoDB is a document database
• Webinar 2 – My First Application
– Creating databases and collections, CRUD, Indexes and Explain
• Webinar 3 – Schema Design
– Dynamic schema, Embedding approaches
• Webinar 4 – GeoSpatial and Text Indexes
• Webinar 5 – Introduction to the Aggregation Framework
4. 4
Production Deployment
• The process of taking a development project and making it available to end
users
• From a database perspective, the key concerns are:
– Durability
– Scalability
– Data security
– General production notes
14. 14
Write Concern
• Controls the level of acknowledgement a write operation will receive
• w:0 : Once the packet leaves the driver we are done
• w:1 : Return errors generated by the server
• w:majority : Ensure write persisted to a majority of the replica set members
• Journaling
– j:false : writes do not need to persist to the journal
– j:true : writes must persist to the journal
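The options above combine into the writeConcern sub-document a driver attaches to each write. A minimal sketch in Python using plain dicts (no driver required); the field names `w`, `j` and `wtimeout` are the standard MongoDB options, but the helper function itself is illustrative, not a driver API:

```python
# Build the writeConcern sub-document attached to MongoDB write commands.
# Field names ("w", "j", "wtimeout") are the standard MongoDB options;
# the write_concern() helper is an illustrative sketch, not a driver API.

def write_concern(w, j=False, wtimeout_ms=None):
    wc = {"w": w, "j": j}
    if wtimeout_ms is not None:
        wc["wtimeout"] = wtimeout_ms  # stop waiting after this many ms
    return wc

fire_and_forget = write_concern(0)                   # w:0 - no acknowledgement
acknowledged    = write_concern(1)                   # w:1 - primary reports errors
durable         = write_concern("majority", j=True)  # replicated and journaled
```

Note that w:majority combined with j:true is the usual choice when a lost write is unacceptable, at the cost of higher write latency.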
23. 23
Setting Up Replica Sets And Sharded Clusters
• Simplest – our management, our servers
– MongoDB Atlas
• Slightly More Complicated – cloud management – your servers
– MongoDB Cloud Manager
• More Complicated Again – on-premise management – your servers
– MongoDB Ops Manager
• Hardest
– Do it all yourself with scripts and manual steps – see docs for details
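For the do-it-yourself route, the core manual step is starting each mongod with the same --replSet name and then passing a config document to rs.initiate() on one member. A sketch of that config document, built in Python (the hostnames are hypothetical placeholders):

```python
# Sketch of the config document passed to rs.initiate() when bootstrapping
# a replica set by hand. The hostnames below are hypothetical placeholders.

hosts = [
    "db0.example.net:27017",
    "db1.example.net:27017",
    "db2.example.net:27017",
]

rs_config = {
    "_id": "rs0",  # must match the --replSet name each mongod was started with
    "members": [{"_id": i, "host": h} for i, h in enumerate(hosts)],
}
```

Three members is the usual minimum, so the set can still elect a primary when one member is down.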
24. 24
Data Security – Defense in Depth
• Use SSL for all connections
• Set up users, passwords and access roles for all databases
• Use encryption on disk for credit card and/or personally identifiable information
• Turn on auditing if you need to keep track of who is changing what
• Make sure monitoring is enabled on all nodes so you can identify unusual activity
• Don't make your databases globally accessible
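Setting up users and roles comes down to a createUser command per database. A hedged sketch of that command document: the user name, password and database name below are placeholders, while "readWrite" is one of MongoDB's built-in roles:

```python
# Sketch of a createUser command document as sent to a MongoDB database.
# "appUser", "change-me" and "inventory" are placeholder values;
# "readWrite" is a built-in MongoDB role granting read and write access
# to a single database.

create_user = {
    "createUser": "appUser",
    "pwd": "change-me",
    "roles": [{"role": "readWrite", "db": "inventory"}],
}
```

Granting a narrowly scoped role per application, rather than a cluster-wide administrative role, is the "defense in depth" point of this slide.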
25. 25
General Production Notes
• Use the most recent release of MongoDB and keep it patched (64-bit builds only in production)
• Use the WiredTiger storage engine (it's our default and will ensure great performance)
• Configure the appropriate Write Concern
• Configure the connection pool for the server to match the load
• Make sure you have sufficient RAM and CPU for the workload you are expecting
• Prefer SSDs
• Disable NUMA ( Non Uniform Memory Access)
• Allocate swap space (otherwise on Linux the OOM killer may terminate the mongod process)
• Review WiredTiger compression options – none, snappy, zlib (snappy is the default)
• If you are using RAID, RAID10 is the recommended configuration
• Use the XFS file system if possible for better performance
• Use a noop scheduler for virtual devices when using VMs
• Synchronise time on all cluster members with NTP
• Turn off the "atime" option for the file-system storing database files
• Increase the file descriptor limit
• Avoid NFS or other file systems that provide underlying replication (SANs)
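Several of these checks can be scripted. As one example, the file descriptor limit can be read from Python's standard resource module (Linux/macOS only); MongoDB's production notes recommend raising it well above the common default of 1024, e.g. to 64000:

```python
# Check the per-process open-file limit (Linux/macOS). A mongod uses one
# descriptor per client connection and per data file, so the soft limit
# should be raised well above the common default of 1024.

import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open-file limits: soft={soft} hard={hard}")
if soft < 64000:
    print("soft limit is low for a production mongod; raise it with ulimit -n")
```

Similar one-off checks (NUMA settings, swap, mount options) are worth folding into a pre-deployment script rather than verifying by hand.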
26. 26
Before You Go Live
• Talk to us
• Get a health check
• Use our automated tools
Most production problems happen because
the customer tried to go it alone