Loggly's own Manoj Chaudhary gave this presentation at AWS DevOps Week 2017 on scaling architecture and DevOps practices as they relate to a real-life Big Data application - Loggly!
Loggly - Tools and Techniques For Logging Microservices – SolarWinds Loggly
The microservice architecture is taking the tech world by storm. A growing number of businesses are turning towards microservices as a way of handling large workloads in a distributed and scalable way. In these slides, we’ll look at methods for logging microservices and the tools that make them possible.
Loggly compared performance and reliability among three popular debug logging libraries as well as two popular Express request logging libraries. Centralize your Node.js logs with Loggly!
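As a rough illustration of that kind of setup, here is a minimal sketch of centralized Node.js logging, assuming the winston and morgan packages (two commonly used options, not necessarily the exact libraries compared) plus the winston-loggly-bulk transport; the token and subdomain are placeholders.

```typescript
// Minimal sketch: centralizing Node.js logs with winston and the
// winston-loggly-bulk transport, plus morgan for Express request logs.
// The Loggly token and subdomain below are placeholders, not real credentials.
import express from "express";
import morgan from "morgan";
import winston from "winston";
import { Loggly } from "winston-loggly-bulk";

// Application/debug logging: send structured JSON events to Loggly.
winston.add(
  new Loggly({
    token: "YOUR-LOGGLY-CUSTOMER-TOKEN", // placeholder
    subdomain: "your-subdomain",         // placeholder
    tags: ["nodejs", "example"],
    json: true,
  })
);

const app = express();

// Request logging: morgan writes one line per HTTP request; here we pipe
// those lines into winston so they reach the same Loggly destination.
app.use(
  morgan("combined", {
    stream: { write: (line: string) => winston.info(line.trim()) },
  })
);

app.get("/health", (_req, res) => {
  winston.info("health check served");
  res.send("ok");
});

app.listen(3000, () => winston.info("listening on :3000"));
```

With this shape, application events and per-request lines end up in the same searchable stream.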
MongoDB .local London 2019: Migrating a Monolith to MongoDB Atlas – Auto Trad... – MongoDB
Over the last 12 months at Auto Trader, we have been focusing our energy on moving our on-premises workloads to Google Cloud Platform, and that includes our database architecture.
Join me as we explore how we have migrated from on-premises MongoDB clusters to a microservice-aligned database architecture on MongoDB Atlas using Infrastructure as Code, and how we are integrating MongoDB into the Auto Trader Delivery Platform.
MongoDB .local Chicago 2019: Modern Data Backup and Recovery from On-premises... – MongoDB
Whether you are running MongoDB on-premise, self-managing in the cloud, or using MongoDB Atlas, it's critical that you have dependable backups of your data for when things go sideways. This takes infrastructure, storage, and coordination, which can be complex and costly. In MongoDB 4.2, we are changing how backup is architected, helping you reduce the required storage footprint and remove architectural complexities to increase performance and decrease costs. Come to this session to see how we're accomplishing this.
This document discusses blockchain and distributed ledgers using Hyperledger and Apache Brooklyn. It provides an overview of blockchain concepts and the Hyperledger Project, including available distributions from Hyperledger like Hyperledger Fabric, Burrow and Sawtooth. It demonstrates how to deploy a Hyperledger Fabric network on Apache Brooklyn and Kubernetes and shows a sample asset management application built on Hyperledger Fabric using simple Brooklyn YAML definitions.
MongoDB .local London 2019: New Encryption Capabilities in MongoDB 4.2: A Dee... – MongoDB
Many applications with high-sensitivity workloads require enhanced technical options to control and limit access to confidential and regulated data. In some cases, system requirements or compliance obligations dictate a separation of duties for staff operating the database and those who maintain the application layer. In cloud-hosted environments, certain data are sometimes deemed too sensitive to store on third-party infrastructure. This is a common pain for system architects in the healthcare, finance, and consumer tech sectors — the benefits of managed, easily expanded compute and storage have been considered unavailable because of data confidentiality and privacy concerns.
This session will take a deep dive into new security capabilities in MongoDB 4.2 that address these scenarios, by enabling native client-side field-level encryption, using customer-managed keys. We will review how confidential data can be securely stored and easily accessed by applications running on MongoDB. Common query design patterns will be presented, with example code demonstrating strong end-to-end encryption in Atlas or on-premise. Implications for developers and others designing systems in regulated environments will be discussed, followed by a Q&A with senior MongoDB security engineers.
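As a rough sketch of what client-side field-level encryption looks like from application code, the snippet below uses the MongoDB Node.js driver. The URI, namespace, schema, and local key material are illustrative placeholders (a real deployment would use a customer-managed key in a KMS such as AWS KMS or Azure Key Vault), and the package that exports ClientEncryption varies by driver version.

```typescript
// Rough sketch of client-side field-level encryption with the MongoDB Node.js
// driver (auto-encryption also requires the mongodb-client-encryption package;
// in older driver versions ClientEncryption is imported from that package
// instead of "mongodb"). All names and key material here are placeholders.
import { MongoClient, ClientEncryption } from "mongodb";
import { randomBytes } from "crypto";

const uri = "mongodb+srv://cluster.example.net"; // placeholder URI
const keyVaultNamespace = "encryption.__keyVault";
// Local key material for demo purposes only; production would use a
// customer-managed key held in a KMS rather than bytes generated in process.
const kmsProviders = { local: { key: randomBytes(96) } };

async function main() {
  // 1. Create a data encryption key, stored (encrypted) in the key vault collection.
  const keyVaultClient = await new MongoClient(uri).connect();
  const clientEncryption = new ClientEncryption(keyVaultClient, {
    keyVaultNamespace,
    kmsProviders,
  });
  const dataKeyId = await clientEncryption.createDataKey("local");

  // 2. Open an auto-encrypting client: fields in the schema map are encrypted
  //    by the driver before leaving the application, so the server and any
  //    cloud operator only ever see ciphertext.
  const secureClient = new MongoClient(uri, {
    autoEncryption: {
      keyVaultNamespace,
      kmsProviders,
      schemaMap: {
        "hr.employees": {
          bsonType: "object",
          properties: {
            ssn: {
              encrypt: {
                keyId: [dataKeyId],
                bsonType: "string",
                // Deterministic encryption keeps equality queries possible.
                algorithm: "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic",
              },
            },
          },
        },
      },
    },
  });
  await secureClient.connect();

  const employees = secureClient.db("hr").collection("employees");
  await employees.insertOne({ name: "Ada", ssn: "123-45-6789" });
  console.log(await employees.findOne({ ssn: "123-45-6789" }));

  await secureClient.close();
  await keyVaultClient.close();
}

main().catch(console.error);
```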
How Thermo Fisher is Reducing Data Analysis Times from Days to Minutes with M... – MongoDB
Speaker: Joseph Fluckiger, Senior Software Architect, ThermoFisher Scientific
Level: 200 (Intermediate)
Track: Atlas
Mass spectrometry is the gold standard for determining chemical compositions, with spectrometers often measuring the mass of a compound down to a single electron. This level of granularity produces an enormous amount of hierarchical data that doesn't fit well into rows and columns. In this talk, learn how Thermo Fisher is using MongoDB Atlas on AWS to allow their users to get near real-time insights from mass spectrometry experiments – a process that used to take days. We also share how the underlying database service used by Thermo Fisher was built on AWS.
What You Will Learn:
- How we modeled mass spectrometry data to enable us to write and read an enormous amount of experimental data efficiently (a hypothetical document shape is sketched after this list).
- Learn about the best MongoDB tools and patterns for .NET applications.
- Live demo of scaling a MongoDB Atlas cluster with zero downtime and visualizing live data from a million-dollar mass spectrometer stored in MongoDB.
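To make the data-modeling point above concrete, here is a purely hypothetical document shape for hierarchical scan data using the Node.js driver; it is not Thermo Fisher's actual schema, just an illustration of keeping the hierarchy inside one document instead of spreading it across rows.

```typescript
// Purely illustrative document shape for hierarchical mass spectrometry
// results: one document per scan, with nested peak arrays. Names are invented.
import { MongoClient } from "mongodb";

interface Peak {
  mz: number;        // mass-to-charge ratio
  intensity: number;
}

interface ScanDocument {
  experimentId: string;
  instrument: string;
  scanNumber: number;
  retentionTimeSec: number;
  peaks: Peak[];     // the hierarchy lives inside the document
}

async function insertScan(uri: string) {
  const client = await new MongoClient(uri).connect();
  const scans = client.db("massspec").collection<ScanDocument>("scans");

  await scans.insertOne({
    experimentId: "exp-0042",
    instrument: "orbitrap-01",
    scanNumber: 1,
    retentionTimeSec: 12.7,
    peaks: [
      { mz: 445.12, intensity: 9831 },
      { mz: 446.12, intensity: 2210 },
    ],
  });

  // An index on (experimentId, scanNumber) keeps reads for one experiment fast.
  await scans.createIndex({ experimentId: 1, scanNumber: 1 });
  await client.close();
}
```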
Standardizing Microservice Management With a Service Mesh – Aspen Mesh
This document discusses how a service mesh can be used to standardize microservice management. It describes some of the key capabilities that a service mesh provides, including traffic management, security, and policy and telemetry features. It also provides an overview of the Istio service mesh platform and demonstrates Aspen Mesh, a service mesh solution.
Webinar: Serverless Architectures with AWS Lambda and MongoDB Atlas – MongoDB
It’s easier than ever to power serverless architectures with our managed MongoDB as a service, MongoDB Atlas. In this session, we will explore the rise of serverless architectures and how they’ve rapidly integrated into public and private cloud offerings.
Speaker: Raphael Londner, Developer Advocate, MongoDB
Speaker: Paul Sears, Partner Solutions Architect, Amazon Web Services
Level: 200 (Intermediate)
Track: Atlas
In this session, AWS Solutions Architect Paul Sears will provide an overview of AWS Lambda functions, including some key integration use cases with MongoDB Atlas. Developer Advocate Raphael Londner will walk you through how to code a Lambda function connected to MongoDB Atlas, with a specific focus on performance optimization. Raphael will then demonstrate how to orchestrate multiple Lambda functions inside a state machine built on top of AWS Step Functions.
What You Will Learn:
- Common use cases for which MongoDB Atlas + AWS Lambda help you boost developer productivity and minimize operational costs.
- How to write a performance-optimized Lambda function that reuses MongoDB Atlas database connections across multiple calls in order to speed up queries (see the sketch after this list).
- How AWS Step Functions can help you easily build application workflows to coordinate your Lambda functions.
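A minimal sketch of the connection-reuse pattern referenced in the list above, assuming a Node.js Lambda runtime and an Atlas URI supplied via an environment variable:

```typescript
// Minimal sketch of connection reuse for AWS Lambda + MongoDB Atlas: the
// client is created outside the handler, so warm invocations of the same
// container reuse it instead of paying the connection cost on every call.
// The URI comes from an environment variable and is a placeholder here.
import { MongoClient, Db } from "mongodb";
import type { Context } from "aws-lambda"; // types from @types/aws-lambda

let cachedDb: Db | null = null;

async function connectToDatabase(): Promise<Db> {
  if (cachedDb) return cachedDb; // reuse across warm invocations
  const client = await new MongoClient(process.env.MONGODB_ATLAS_URI as string).connect();
  cachedDb = client.db("app");
  return cachedDb;
}

export const handler = async (event: { userId: string }, context: Context) => {
  // Let Lambda freeze the container without waiting for the open socket to
  // close, so the cached connection survives until the next invocation.
  context.callbackWaitsForEmptyEventLoop = false;

  const db = await connectToDatabase();
  const user = await db.collection("users").findOne({ userId: event.userId });
  return { statusCode: user ? 200 : 404, body: JSON.stringify(user) };
};
```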
MongoDB .local London 2019: Nationwide Building Society: Building Mobile Appl... – MongoDB
Nationwide Building Society has invested £4.1 billion in technology and is creating 750 new digital roles. They need a "Speed Layer" to support increased mobile and digital activation, open banking, and enhanced customer propositions. The Speed Layer uses Kafka as an event hub, MongoDB as an operational data store, and stream processing to aggregate and enrich data. It provides pre-populated caches and introduces an event-based architecture. To ensure high resilience across two data centers, Nationwide uses independent Kafka, stream processing and MongoDB clusters in each rather than a stretched MongoDB cluster. Nationwide loaded 15 billion transactions into MongoDB by bucketing documents by account and month to improve performance for reads. They conducted proof-of-concepts to
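The account-and-month bucketing mentioned above can be sketched roughly as follows with the Node.js driver; collection and field names here are illustrative, not Nationwide's actual schema.

```typescript
// Illustrative sketch of the bucket pattern: transactions are grouped into
// one document per account per month, so reading a month of history is a
// single document fetch. Names are hypothetical.
import { MongoClient } from "mongodb";

interface Txn {
  ts: Date;
  amount: number;
  description: string;
}

async function appendTransaction(uri: string, accountId: string, txn: Txn) {
  const client = await new MongoClient(uri).connect();
  const buckets = client.db("speedlayer").collection("transactionBuckets");

  const month = txn.ts.toISOString().slice(0, 7); // e.g. "2019-11"

  // Upsert the bucket for this account+month and push the new transaction into it.
  await buckets.updateOne(
    { accountId, month },
    { $push: { transactions: txn }, $inc: { count: 1 } },
    { upsert: true }
  );

  await client.close();
}
```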
This document discusses microservices and how to build them using Go. It describes the benefits of microservices over monolithic architectures, such as improved scalability, resilience, and ease of deployment. Some key aspects of building microservices with Go that are covered include making services autonomous and focused, using a domain-driven design, implementing service discovery, API gateways, and messaging between services using events. The document also provides guidance on important operational concerns like security, monitoring, and testing when building microservices applications.
Speaker: Drew DiPalma, Product Manager, Cloud, MongoDB
Level: 100 (Beginner)
Track: Developer
Come learn more about MongoDB Stitch – Our new Backend as a Service (BaaS) that makes it easy for developers to create and launch applications across mobile and web platforms. Stitch provides a REST API on top of MongoDB with read, write, and validation rules built-in and full integration with the services you love. This talk will cover the what, why, and how of MongoDB Stitch. We’ll discuss everything from features to the architecture. You’ll walk away knowing how Stitch can kickstart your new project or take your existing application to the next level.
What You Will Learn:
- The basics of MongoDB Stitch and how to use it to kickstart new projects and implement new features in existing projects.
- How to integrate your favorite services with your MongoDB application without writing any code.
MongoDB .local Toronto 2019: MongoDB Atlas Jumpstart – MongoDB
Join this talk and test session with MongoDB Support where you'll go over the configuration and deployment of an Atlas environment. Set up a service that you can take back in a production-ready state and prepare to unleash your inner genius.
A Free New World: Atlas Free Tier and How It Was Born – MongoDB
A Free New World: Atlas Free Tier and How It Was Born
Speaker: Louisa Berger, Senior Software Engineer
Speaker: Vincent Do, Fullstack Engineer, MongoDB
Level: 200 (Intermediate)
Track: How We Build MongoDB
Last year, MongoDB released Atlas – a new Database as a Service product that handles running, monitoring, and maintaining your MongoDB deployment in the cloud. This winter, we added a new Free Tier option to the product, which allows users to try out Atlas with their own real data for free. Lead Automation Engineer Louisa Berger and Atlas engineer Vincent Do will talk about how it works behind the scenes, and why you might want to try out Atlas. This talk is intended for developers and will take you through the technical details of the architecture, showing you the techniques and challenges in building a multi-tenant MongoDB service.
What You Will Learn:
- Insights on how/why you should use the Atlas free tier
- How the Atlas free tier was designed and implemented
- Best practices for building a multi-tenant MongoDB application
Powering Microservices with MongoDB, Docker, Kubernetes & Kafka – MongoDB Eur... – Andrew Morgan
Organisations are building their applications around microservice architectures because of the flexibility, speed of delivery, and maintainability they deliver.
Want to try out MongoDB on your laptop? Execute a single command and you have a lightweight, self-contained sandbox; another command removes all trace when you're done. Need an identical copy of your application stack in multiple environments? Build your own container image and then your entire development, test, operations, and support teams can launch an identical clone environment.
Containers are revolutionizing the entire software lifecycle: from the earliest technical experiments and proofs of concept through development, test, deployment, and support. Orchestration tools manage how multiple containers are created, upgraded and made highly available. Orchestration also controls how containers are connected to build sophisticated applications from multiple, microservice containers.
This presentation introduces you to technologies such as Docker, Kubernetes & Kafka which are driving the microservices revolution. Learn about containers and orchestration – and most importantly how to exploit them for stateful services such as MongoDB.
MongoDB .local San Francisco 2020: MongoDB Atlas Jumpstart – MongoDB
Join this talk and test session with a MongoDB Developer Advocate where you'll go over the setup, configuration, and deployment of an Atlas environment. Create a service that you can take back in a production-ready state and prepare to unleash your inner genius.
MongoDB .local Chicago 2019: MongoDB Atlas Data Lake Technical Deep Dive – MongoDB
MongoDB Atlas Data Lake is a new service offered by MongoDB Atlas. Many organizations store long-term archival data in cost-effective object storage such as Amazon S3, Google Cloud Storage, and Azure Blob Storage. However, many of them do not have robust systems or tools to effectively utilize large amounts of data to inform decision making. MongoDB Atlas Data Lake is a service allowing organizations to analyze their long-term data to discover a wealth of information about their business.
This session will take a deep dive into the features that are currently available in MongoDB Atlas Data Lake and how they are implemented. In addition, we'll discuss future plans and opportunities and offer ample Q&A time with the engineers on the project.
Accelerating a Path to Digital with a Cloud Data Strategy – MongoDB
1) The document discusses accelerating a path to digital transformation with a cloud data strategy. It covers topics like the seismic shifts in organizations and application architectures, and the need to rethink underlying data layers.
2) The presentation discusses building an enterprise data fabric at Royal Bank of Scotland using MongoDB to provide data storage, query, and distribution as a service. This simplified development, reduced costs, and improved velocity.
3) MongoDB was presented as the foundation for cloud data strategies, providing the freedom to run applications anywhere while leveraging the benefits of multiple clouds.
Jay Runkel presented a methodology for sizing MongoDB clusters to meet the requirements of an application. The key steps are: 1) Analyze data size and index size, 2) Estimate the working set based on frequently accessed data, 3) Use a simplified model to estimate IOPS and adjust for real-world factors, 4) Calculate the number of shards needed based on storage, memory and IOPS requirements. He demonstrated this process for an application that collects mobile events, requiring a cluster that can store over 200 billion documents with 50,000 IOPS.
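The arithmetic behind that methodology can be sketched as a small helper; the per-shard capacity figures below are invented for illustration, not sizing recommendations.

```typescript
// Toy version of the sizing arithmetic described above: the shard count is
// driven by whichever resource (storage, working-set RAM, or IOPS) requires
// the most shards. All capacity figures below are made-up examples.
interface Requirements {
  dataPlusIndexGB: number;   // total data + index size
  workingSetGB: number;      // frequently accessed data + indexes
  peakIOPS: number;
}

interface ShardCapacity {
  storageGB: number;
  ramGB: number;
  iops: number;
}

function shardsNeeded(req: Requirements, perShard: ShardCapacity): number {
  const byStorage = Math.ceil(req.dataPlusIndexGB / perShard.storageGB);
  const byMemory = Math.ceil(req.workingSetGB / perShard.ramGB);
  const byIOPS = Math.ceil(req.peakIOPS / perShard.iops);
  return Math.max(byStorage, byMemory, byIOPS);
}

// Example: a large event-collection workload (figures are illustrative only).
console.log(
  shardsNeeded(
    { dataPlusIndexGB: 40_000, workingSetGB: 2_000, peakIOPS: 50_000 },
    { storageGB: 4_000, ramGB: 256, iops: 10_000 }
  )
); // -> 10, driven here by the storage requirement
```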
Managing Cloud Security Design and Implementation in a Ransomware World – MongoDB
1) The document discusses security design and implementation considerations for managing cloud security in a ransomware world.
2) It provides examples of security design reviews that can be conducted, including checking for authentication, authorization, port listening, and firewall configurations.
3) The document also gives examples of how to implement authentication and authorization securely in MongoDB, such as binding to localhost by default and using IP whitelisting.
Speaker: Jay Runkel, Principal Solution Architect, MongoDB
Speaker: Jayson Hurd, Comcast
Level: 200 (Intermediate)
Track: Operations
Comcast is pioneering private-cloud initiatives to bring velocity, elasticity, and self-service to its internal customers. For databases, this means providing the infrastructure and tooling to support a DevOps model enabling application teams to request/provision, monitor, backup, upgrade, and tune their own environments. Using this approach, an extremely small operations team can manage a large number of applications and servers. We will discuss the business goals of velocity, elasticity and self-service, outlining the hidden benefits of this approach. The technical and process architectures will then be explored in detail, demonstrating how a recipe of IaaS, web, Ansible, and MongoDB Ops Manager are used to provide an automated self-service DBaaS platform.
What You Will Learn:
- How to leverage Ops Manager to support a self-service DevOps model.
- Establishing requirements for your own MongoDB as a Service platform.
- Best practices for building a DBaaS for MongoDB.
MongoDB .local Toronto 2019: Keep your Business Safe and Scaling Holistically... – MongoDB
Learn how MongoDB on LinuxONE and IBM Cloud Hyper Protect Services can be used to manage highly sensitive and confidential data – pervasively encrypting and securing your environments, consolidating thousands of database instances while serving hundreds of billions of queries a day. At the end of this session you will better understand how managing and scaling large amounts of critical business data can be achieved easily with automatic pervasive encryption of code and data in-flight and at-rest.
If you're a Developer, Architect, DBA or a Business Stakeholder, and your organization is using or planning to use MongoDB on-premise or in the cloud, this session will help you to gain insights into the best way to run MongoDB to keep your business safe and scaling holistically.
Hyperledger Fabric provides a technical foundation for transactional applications across business networks. Fabric Composer is a framework that accelerates the development of applications built on Fabric by allowing developers to model network assets, participants, and transactions from a business perspective. Fabric Composer provides complete development tools including a modeling language, transaction processors, access control lists, and client libraries to integrate Fabric with existing systems and quickly build solutions focused on business needs.
MongoDB .local Bengaluru 2019: The Journey of Migration from Oracle to MongoD... – MongoDB
Find out more about our journey of migrating to MongoDB after using Oracle for our hotel search database for over ten years.
- How did we solve the synchronization problem with the Master Database?
- How to get fast search results (even with massive write operations)?
- How other issues were solved
MongoDB 3.4: Deep Dive on Views, Zones, and MongoDB Compass – MongoDB
Thomas Boyd, Principal Solutions Architect, MongoDB
MongoDB Evenings San Francisco
March 21, 2017
MongoDB 3.4 was released in November 2016 and contains a wealth of new features that allow developers, DBAs, architects, and data scientists to tackle a wide variety of use cases. After an overview of 3.4, Thomas will provide a deep dive on using MongoDB views to encapsulate complex aggregation logic and to enhance MongoDB security, using zones to create a cross-continent, multi-master MongoDB cluster, and using MongoDB Compass to browse and interact with the data stored in your cluster.
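As a small sketch of the views feature mentioned above (using the Node.js driver; names are illustrative), a view wraps an aggregation pipeline and can hide sensitive fields from its consumers:

```typescript
// Sketch of a MongoDB 3.4+ read-only view that encapsulates aggregation logic
// and hides a sensitive field. Database and collection names are illustrative.
import { MongoClient } from "mongodb";

async function createRedactedView(uri: string) {
  const client = await new MongoClient(uri).connect();
  const db = client.db("hr");

  // A view is created with the standard "create" command plus viewOn/pipeline.
  await db.command({
    create: "employeesPublic",
    viewOn: "employees",
    pipeline: [
      { $match: { active: true } },
      { $project: { name: 1, department: 1, _id: 0 } }, // salary/ssn never exposed
    ],
  });

  // Consumers query the view like a normal collection; the pipeline runs on read.
  const docs = await db.collection("employeesPublic").find().toArray();
  console.log(docs);

  await client.close();
}
```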
.NET Fest 2017. Andrey Antilikatorov. Designing and Developing Applications ... – NETFest
The talk covers the principles and best practices for building flexible, scalable applications on Microsoft .NET Core together with Microsoft Azure services. It reviews a number of useful approaches, tools, and libraries that greatly simplify developing, configuring, and deploying applications, and also looks at some of the "pitfalls" that anyone using .NET Core may run into.
In this webinar you'll learn about the best practices for Google BigQuery—and how Matillion ETL makes loading your data faster and easier. Find out from our experts how to leverage one of the largest, fastest, and most capable cloud data warehouses to improve your business and save money.
In this webinar:
- Discover how to work fast and efficiently with Google BigQuery
- Find out the best ways to monitor and control costs
- Learn to leverage Matillion ETL and optimize Google BigQuery
- Get tips and tricks for better performance
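For the cost-control point above, one common approach from application code is shown below, assuming the @google-cloud/bigquery Node.js client; this is generic BigQuery usage rather than a Matillion feature, and the query, dataset, and limits are placeholders.

```typescript
// Sketch of two BigQuery cost controls: a dry run to see how many bytes a
// query would scan, and maximumBytesBilled to fail queries that exceed a
// budget. Query and table names are placeholders.
import { BigQuery } from "@google-cloud/bigquery";

const bigquery = new BigQuery();
const query = `
  SELECT channel, SUM(revenue) AS revenue
  FROM \`my_project.analytics.orders\`
  WHERE order_date >= '2020-01-01'
  GROUP BY channel
`;

async function run() {
  // 1. Dry run: estimate scanned bytes without actually running the query.
  const [dryJob] = await bigquery.createQueryJob({ query, dryRun: true });
  console.log("would scan bytes:", dryJob.metadata.statistics.totalBytesProcessed);

  // 2. Real run with a hard cap: the job errors out instead of billing more
  //    than ~1 GB of scanned data.
  const [job] = await bigquery.createQueryJob({
    query,
    maximumBytesBilled: String(1024 * 1024 * 1024),
  });
  const [rows] = await job.getQueryResults();
  console.log(rows);
}

run().catch(console.error);
```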
Google BigQuery is one of the largest, fastest, and most capable cloud data warehouses on the market. In this webinar, we review BigQuery best practices and show you how Matillion ETL can help you get the most out of the platform to gain a competitive edge.
In this webinar:
- Discover how to work quickly and efficiently with Google BigQuery
- Find out the best ways to monitor and control costs
- Hear tips and tricks for loading and transforming massive amounts of data in BigQuery with Matillion ETL
- Get expert advice on improving your performance in BigQuery for quicker data analysis
- Learn how to optimize BigQuery for your marketing analytics needs
Analyzing application activities with KSQL and Elasticsearch – Katherine Golovinova
IEVGENII VLASYUK, Delivery manager @EPAM
Capturing application events in a distributed system is becoming a more and more common task. The modern Kafka ecosystem can help solve this task in an easy and elegant way. A wide range of ready-made sources and sinks, SQL syntax, and easy stream joins will save a lot of time and reduce complexity. However, capturing data is only half the battle. We will also explore how to make use of Elasticsearch to provide advanced analysis of user activities.
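As a sketch of the capture side of such a pipeline (not KSQL itself), the snippet below publishes structured application events to a Kafka topic with the kafkajs client; KSQL streams and an Elasticsearch sink connector would then consume that topic. Broker address and topic name are placeholders.

```typescript
// Sketch of the "capture" side: publishing structured application events to a
// Kafka topic with kafkajs. Downstream, KSQL streams and an Elasticsearch sink
// connector would consume this topic. Broker and topic names are placeholders.
import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "orders-service", brokers: ["localhost:9092"] });
const producer = kafka.producer();

export async function emitEvent(type: string, payload: Record<string, unknown>) {
  await producer.connect();
  await producer.send({
    topic: "app-events",
    messages: [
      {
        key: type, // keeps events of one type ordered within a partition
        value: JSON.stringify({ type, ts: Date.now(), ...payload }),
      },
    ],
  });
}

// Example usage:
// await emitEvent("order_placed", { orderId: "o-123", userId: "u-42", total: 99.5 });
```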
Zenko @Cloud Native Foundation London Meetup March 6th 2018 – Laure Vergeron
Zenko is an open source multi-cloud data controller that provides a single API and dashboard to manage data across multiple cloud storage providers. It includes features like native storage formats, policy-based data management, and metadata search. The enterprise edition adds multi-tenancy, scale-out capabilities, and file services for legacy applications. Zenko was created by Scality as an "inner startup" project to help reinvigorate innovation at the company and has since grown a developer community through meetups and hackathons.
[DataCon.TW 2017] Data Lake: centralize in on-prem vs. decentralize on cloud – Jeff Hung
Trend Micro has been running big data in its on-premises data centers for many years. With Hadoop and its mature ecosystem, we have been able to build a centralized Data Lake that serves and fulfills massive data processing loads while managing and encouraging new uses of data.
In recent years, we have been shifting our focus to AWS. Due to the decentralized nature of the cloud, the design and thinking behind building a Data Lake are different. We must identify what remains important whether on-prem or in the cloud, and what could be done differently to embrace the cloud model.
In this talk, we will elaborate on Trend Micro's considerations and best practices for building a Data Lake on-prem and in the cloud, and share our experience managing petabyte-scale data through many years of evolution.
18. Madhur Hemnani - Result Orientated Innovation with Oracle HR Analytics – Cedar Consulting
The document discusses Oracle's analytics cloud strategy and Oracle Analytics Cloud (OAC) platform. It covers OAC's features such as self-service report creation, data visualization capabilities, and integration with other Oracle products. The document also summarizes how customers can migrate existing on-premise analytics solutions like OBIEE, BICS, and DVCS to OAC. Finally, it provides an overview of Oracle Analytic Cloud - Essbase for flexible analytic applications and management reporting in the cloud.
Scale up - How to build adaptive data systems in the age of virality – Johannes Brandstetter
In this talk we share details about glomex's award-winning data management infrastructure. We'll show you how a serverless approach can scale automatically to the demands of a highly unpredictable industry as video clips go viral arbitrarily. What is the best architecture for real-time data processing? How does a batch-driven BI workflow fit in? What are the key benefits of going to the Cloud? Which AWS services should you use?
This document provides an overview of GrayMatter's Pentaho competency center and services. It describes GrayMatter's decade-long experience partnering with Pentaho to deliver over 100 business analytics solutions to customers worldwide. The document showcases some of GrayMatter's top projects and outlines the services it provides, including business analytics, data integration, Pentaho big data analytics, Pentaho migration services, and embedding Pentaho capabilities. It also includes a case study describing GrayMatter's implementation of a scalable data warehouse for a large customer.
This document discusses building a data-driven log analysis application using LucidWorks SILK. It begins with an introduction to LucidWorks and discusses the continuum of search capabilities from enterprise search to big data search. It then describes how SILK can enable big data search across structured and unstructured data at massive scale. The solution components involve collecting log data from various sources using connectors, ingesting it into Solr, and building visualizations for analysis. It concludes with a demo and contact information.
This document contains a presentation about MongoDB given by Kim Greene. The presentation provides an overview of MongoDB, including what it is, how it compares to relational databases and IBM Domino, and examples of how companies like ThermoFisher and CoreLogic use MongoDB. Specifically, the presentation defines MongoDB as an open-source document database that uses JSON-like documents, provides details on how it supports features like replication, indexing, and security, and highlights how MongoDB enables faster development and better performance at scale compared to relational databases.
This is a slide dump of a talk I gave at the 2017 Chicago Coder Conference (CCC) on June 26th, 2017.
http://www.chicagocoderconference.com/sessions/serverless-scheduled-job-processing/
Next-Generation Spring Data and MongoDB – VMware Tanzu
MongoDB 4.0, scheduled for release in Summer 2018, will add support for multi-document ACID transactions. Through snapshot isolation, transactions will provide a consistent view of data, and enforce all-or-nothing execution to maintain data integrity. Transactions in MongoDB will feel just like transactions developers are familiar with from relational databases, and will be easy to add to any application that needs them.
The addition of multi-document transactions will make it easier than ever for developers to address a complete range of use cases with MongoDB, although for many, simply knowing that they are available will provide critical peace of mind. The latest MongoDB 3.6 server release already ships with the main building block for those, client sessions.
The Spring Data team has implemented synchronous and reactive transaction support in preparation for the MongoDB 4.0 release, built on top of MongoDB sessions. Learn more about Spring Data MongoDB, and many new capabilities in the forthcoming Spring Data Lovelace release!
Presenters : Christoph Strobl, Pivotal and Mat Keep, MongoDB
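For a feel of the underlying building block (client sessions and multi-document transactions), here is a minimal sketch with the MongoDB Node.js driver rather than Spring Data's API; it assumes a replica set (MongoDB 4.0+) and illustrative collection names.

```typescript
// Sketch of the client-session building block behind multi-document
// transactions, shown with the MongoDB Node.js driver rather than Spring Data.
// Requires a replica set; names and amounts are illustrative.
import { MongoClient } from "mongodb";

interface Account {
  _id: string;
  balance: number;
}

async function transferFunds(uri: string, from: string, to: string, amount: number) {
  const client = await new MongoClient(uri).connect();
  const accounts = client.db("bank").collection<Account>("accounts");

  const session = client.startSession();
  try {
    // withTransaction retries on transient errors and commits on success;
    // either both updates become visible or neither does.
    await session.withTransaction(async () => {
      await accounts.updateOne({ _id: from }, { $inc: { balance: -amount } }, { session });
      await accounts.updateOne({ _id: to }, { $inc: { balance: amount } }, { session });
    });
  } finally {
    await session.endSession();
    await client.close();
  }
}
```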
Columbia Migrates from Legacy Data Warehouse to an Open Data Platform with De... – Databricks
Columbia is a data-driven enterprise, integrating data from all line-of-business-systems to manage its wholesale and retail businesses. This includes integrating real-time and batch data to better manage purchase orders and generate accurate consumer demand forecasts.
Webinar: Faster Big Data Analytics with MongoDB – MongoDB
Learn how to leverage MongoDB and Big Data technologies to derive rich business insight and build high performance business intelligence platforms. This presentation includes:
- Uncovering Opportunities with Big Data analytics
- Challenges of real-time data processing
- Best practices for performance optimization
- Real world case study
This presentation was given in partnership with CIGNEX Datamatics.
Big Data LDN 2017: The New Dominant Companies Are Running on Data – Matt Stubbs
The document discusses solutions for deriving value from data through data integration and analytics. It describes three approaches companies have taken: 1) Building a custom machine learning platform like Uber's Michelangelo. 2) Developing custom integrations for a large multinational corporation with many technologies. 3) Implementing a cloud-first enterprise data stack for a 360-degree view of customers. The cloud-first approach provides benefits like scalability, collaboration, and reduced maintenance costs.
The new dominant companies are running on data – SnapLogic
The cost of Digital Transformation is dropping rapidly. The technologies and methodologies are evolving to open up new opportunities for new and established corporations to drive business. We will examine specific examples of how and why a combination of robust infrastructure, cloud first and machine learning can take your company to the next level of value and efficiency.
Rich Dill, SnapLogic's enterprise solutions architect, at Big Data LDN 2017.
Insights into Real World Data Management Challenges – DataWorks Summit
Data is your most valuable business asset, and it's also your biggest challenge. This challenge and opportunity means we continually face significant roadblocks on the way to becoming a data-driven organisation. From the management of data, to the bubbling landscape of open source frameworks, to limited industry skills and mounting time and cost pressures, our challenge in data is big.
We all want and need a "fit for purpose" approach to the management of data, especially Big Data, and overcoming the ongoing challenges around the '3Vs' means we get to focus on the most important V: 'Value'. Come along and join the discussion on how Oracle Big Data Cloud provides value in the management of data and supports your move toward becoming a data-driven organisation.
Speaker
Noble Raveendran, Principal Consultant, Oracle
As part 1 of our TESCHGlobal Data Management Movement, a thought-provoking series for those interested in learning about cutting-edge techniques for managing data more effectively, we dive into what a Data Vault system is and how its methodology can provide full agility and accessibility to your data all in one place.
Insights into Real-world Data Management Challenges – DataWorks Summit
Oracle began with the belief that the foundation of IT was managing information. The Oracle Cloud Platform for Big Data is a natural extension of our belief in the power of data. Oracle's Integrated Cloud is one cloud for the entire business, meeting everyone's needs. It's about connecting people to information through tools which help you combine and aggregate data from any source.
This session will explore how organizations can transition to the cloud by delivering fully managed and elastic Hadoop and real-time streaming cloud services to build robust offerings that provide measurable value to the business. We will explore key data management trends and dive deeper into pain points we are hearing about from our customer base.
The document compares the performance of five popular .NET logging libraries: Log4net, NLog, ELMAH, Microsoft Enterprise Library, and NSpring. It finds that NLog has the fastest performance, logging 100,000 events in 9.33 seconds on average. Log4net has similar ease of setup but slower performance. ELMAH and NSpring are also easy to set up but have slower performance than NLog and Log4net. Microsoft Enterprise Library has the second fastest performance but was more difficult to set up.
Loggly - Case Study - Loggly and Docker Deliver Powerful Monitoring for XAPPm... – SolarWinds Loggly
With Loggly, XAPPmedia:
• Standardizes its diagnostic approach with the combination of Docker and Loggly
• Accelerates resolution of software and non-software issues
• Employs proactive alerting to prevent issues from affecting the business
Loggly - Case Study - Stanley Black & Decker Transforms Work with Support fro... – SolarWinds Loggly
With Loggly, Stanley Black & Decker:
• Provides team with troubleshooting capabilities for mobile and IoT applications running on traditional and serverless architectures
• Supports performance monitoring, security, and PCI compliance needs
• Enables quick scalability as new innovations are launched
Loggly - Case Study - Loggly and Kubernetes Give Molecule Easy Access to the ... – SolarWinds Loggly
With Loggly, Molecule Software:
• Cuts expenses by replacing ELK stack running on EC2 with Loggly
• Accelerates troubleshooting with Loggly Dynamic Field Explorer™
• Reduces the customer impact from issues with proactive alerting
Loggly - Case Study - Datami Keeps Developer Productivity High with Loggly – SolarWinds Loggly
With Loggly, Datami:
• Saves time and increases productivity with easier troubleshooting
• Keeps developers focused on coding
• Improves quality assurance and performance in commercial trials
• Effectively monitors new deployments
Loggly - Case Study - BEMOBI - Bemobi Monitors the Experience of 500 Million ... – SolarWinds Loggly
Bemobi uses Loggly to:
• Gain visibility and troubleshooting capabilities for applications running on autoscaling Amazon EC2 environments and serverless AWS Lambda compute services
• Accelerate response times with proactive alerting
• Maintain QoS agreements critical to customer billing with log-based reporting
Why @Loggly Loves Apache Kafka, and How We Use Its Unbreakable Messaging for ... – SolarWinds Loggly
Agenda for this Presentation
• The challenges of Log Management at scale
• Overview of Loggly’s processing pipeline
• Alternative technologies considered
• Why we love Apache Kafka
• How Kafka has added flexibility to our pipeline
The Challenges of Log Management at Scale
• Big data
– >750 billion events logged to date
– Sustained bursts of 100,000+ events per second
– Data space measured in petabytes
• Need for high fault tolerance
• Near real-time indexing requirements
• Time-series index management
My most important lesson from working on 7+ cloud-based products:
“Failing to prepare for failure is costly... but failing to prepare for success can be even worse”.
Learn from my experience - read this deck to learn the top 6 SaaS mistakes you should avoid.
Rumble Entertainment GDC 2014: Maximizing Revenue Through Logging – SolarWinds Loggly
How do you balance the dynamic needs of millions of concurrent users, support hundreds of developers, optimize a distributed cloud environment, push code releases daily, and stay sane? It starts with visibility into all your data and real-time insight from your log files.
In this session at the 2014 Game Developers Conference, Albert Ho provided a step-by-step outline of how Rumble Entertainment maximizes revenue in online games by:
- Instrumenting its games and platform to facilitate global troubleshooting
- Gathering insight from log files to identify, quantify, and solve operational issues
- Conducting transactional tracing, starting from player feedback on open social networks, to isolate the source, server, and all users affected by an issue for remediation
- Baking Loggly into the DNA of its games to serve as a single source of truth, and extending it to how Rumble works with partners
- Correlating real-time visibility directly to revenue to speed decision making and planning
AWS re:Invent presentation: Unmeltable Infrastructure at Scale by Loggly – SolarWinds Loggly
This document summarizes Loggly's transition from their first generation log management infrastructure to their second generation infrastructure built on Apache Kafka, Twitter Storm, and ElasticSearch on AWS. The first generation faced challenges around tightly coupling event ingestion and indexing. The new system uses Kafka as a persistent queue, Storm for real-time event processing, and ElasticSearch for search and storage. This architecture leverages AWS services like auto-scaling and provisioned IOPS for high availability and scale. The new system provides improved elasticity, multi-tenancy, and a pre-production staging environment.
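As a toy illustration of that pipeline shape (not the actual Storm-based implementation), the sketch below consumes events from Kafka with kafkajs and indexes them into daily Elasticsearch indices with the @elastic/elasticsearch v8-style client; broker, topic, and index names are placeholders.

```typescript
// Toy illustration of the pipeline shape described above: Kafka as the
// persistent queue, events processed and then indexed into Elasticsearch.
// Loggly's real pipeline used Storm for processing; this only shows the
// consume-transform-index flow. All names are placeholders.
import { Kafka } from "kafkajs";
import { Client } from "@elastic/elasticsearch";

const kafka = new Kafka({ clientId: "indexer", brokers: ["localhost:9092"] });
const es = new Client({ node: "http://localhost:9200" });

async function run() {
  const consumer = kafka.consumer({ groupId: "log-indexers" });
  await consumer.connect();
  await consumer.subscribe({ topic: "raw-log-events", fromBeginning: false });

  await consumer.run({
    eachMessage: async ({ message }) => {
      const event = JSON.parse(message.value?.toString() ?? "{}");

      // Minimal "processing" step: tag the event before indexing.
      event.indexedAt = new Date().toISOString();

      // Time-series index management: one index per day.
      const index = `events-${event.indexedAt.slice(0, 10)}`;
      await es.index({ index, document: event });
    },
  });
}

run().catch(console.error);
```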
Massive Power Outage Hits Spain, Portugal, and France: Causes, Impact, and On... – Aqusag Technologies
In late April 2025, a significant portion of Europe, particularly Spain, Portugal, and parts of southern France, experienced widespread, rolling power outages that continue to affect millions of residents, businesses, and infrastructure systems.
What is Model Context Protocol (MCP) - The new technology for communication bw... – Vishnu Singh Chundawat
The MCP (Model Context Protocol) is a framework designed to manage context and interaction within complex systems. This SlideShare presentation will provide a detailed overview of the MCP Model, its applications, and how it plays a crucial role in improving communication and decision-making in distributed systems. We will explore the key concepts behind the protocol, including the importance of context, data management, and how this model enhances system adaptability and responsiveness. Ideal for software developers, system architects, and IT professionals, this presentation will offer valuable insights into how the MCP Model can streamline workflows, improve efficiency, and create more intuitive systems for a wide range of use cases.
The Evolution of Meme Coins: A New Era for Digital Currency ppt.pdf – Abi john
Analyze the growth of meme coins from mere online jokes to potential assets in the digital economy. Explore the community, culture, and utility as they elevate themselves to a new era in cryptocurrency.
Complete Guide to Advanced Logistics Management Software in Riyadh.pdf – Software Company
Explore the benefits and features of advanced logistics management software for businesses in Riyadh. This guide delves into the latest technologies, from real-time tracking and route optimization to warehouse management and inventory control, helping businesses streamline their logistics operations and reduce costs. Learn how implementing the right software solution can enhance efficiency, improve customer satisfaction, and provide a competitive edge in the growing logistics sector of Riyadh.
Semantic Cultivators: The Critical Future Role to Enable AI – artmondano
By 2026, AI agents will consume 10x more enterprise data than humans, but with none of the contextual understanding that prevents catastrophic misinterpretations.
Procurement Insights Cost To Value Guide.pptx – Jon Hansen
Procurement Insights' integrated Historic Procurement Industry Archives serve as a powerful complement — not a competitor — to other procurement industry firms. They fill critical gaps in depth, agility, and contextual insight that most traditional analyst and association models overlook.
Learn more about this value-driven proprietary service offering here.
Book industry standards are evolving rapidly. In the first part of this session, we’ll share an overview of key developments from 2024 and the early months of 2025. Then, BookNet’s resident standards expert, Tom Richardson, and CEO, Lauren Stewart, have a forward-looking conversation about what’s next.
Link to recording, presentation slides, and accompanying resource: https://bnctechforum.ca/sessions/standardsgoals-for-2025-standards-certification-roundup/
Presented by BookNet Canada on May 6, 2025 with support from the Department of Canadian Heritage.
This is the keynote of the Into the Box conference, highlighting the release of the BoxLang JVM language, its key enhancements, and its vision for the future.
Special Meetup Edition - TDX Bengaluru Meetup #52.pptx – shyamraj55
We’re bringing the TDX energy to our community with 2 power-packed sessions:
🛠️ Workshop: MuleSoft for Agentforce
Explore the new version of our hands-on workshop featuring the latest Topic Center and API Catalog updates.
📄 Talk: Power Up Document Processing
Dive into smart automation with MuleSoft IDP, NLP, and Einstein AI for intelligent document workflows.
HCL Nomad Web – Best Practices and Administration of Multiuser Environments – panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-nomad-web-best-practices-und-verwaltung-von-multiuser-umgebungen/
HCL Nomad Web is celebrated as the next generation of the HCL Notes client and offers numerous advantages, such as eliminating the need for packaging, distribution, and installation. Nomad Web client updates are installed "automatically" in the background, which significantly reduces administrative effort compared to traditional HCL Notes clients. However, troubleshooting in Nomad Web presents unique challenges compared to the Notes client.
Join Christoph and Marc as they demonstrate how the troubleshooting process in HCL Nomad Web can be simplified to ensure a smooth and efficient user experience.
In this webinar, we will explore effective strategies for diagnosing and resolving common issues in HCL Nomad Web, including:
- Accessing the console
- Locating and interpreting log files
- Accessing the data folder in the browser's cache (using OPFS)
- Understanding the differences between single-user and multi-user scenarios
- Using the Client Clocking feature
#StandardsGoals for 2025: Standards & certification roundup - Tech Forum 2025 – BookNet Canada
Book industry standards are evolving rapidly. In the first part of this session, we’ll share an overview of key developments from 2024 and the early months of 2025. Then, BookNet’s resident standards expert, Tom Richardson, and CEO, Lauren Stewart, have a forward-looking conversation about what’s next.
Link to recording, transcript, and accompanying resource: https://bnctechforum.ca/sessions/standardsgoals-for-2025-standards-certification-roundup/
Presented by BookNet Canada on May 6, 2025 with support from the Department of Canadian Heritage.
Noah Loul Shares 5 Steps to Implement AI Agents for Maximum Business Efficien... – Noah Loul
Artificial intelligence is changing how businesses operate. Companies are using AI agents to automate tasks, reduce time spent on repetitive work, and focus more on high-value activities. Noah Loul, an AI strategist and entrepreneur, has helped dozens of companies streamline their operations using smart automation. He believes AI agents aren't just tools—they're workers that take on repeatable tasks so your human team can focus on what matters. If you want to reduce time waste and increase output, AI agents are the next move.
Technology Trends in 2025: AI and Big Data Analytics – InData Labs
At InData Labs, we have been keeping an ear to the ground, looking out for AI-enabled digital transformation trends coming our way in 2025. Our report will provide a look into the technology landscape of the future, including:
- Artificial Intelligence Market Overview
- Strategies for AI Adoption in 2025
- Anticipated drivers of AI adoption and transformative technologies
- Benefits of AI and Big Data for your business
- Tips on how to prepare your business for innovation
- AI and data privacy: Strategies for securing data privacy in AI models, etc.
Download your free copy now and implement the key findings to improve your business.
AI and Data Privacy in 2025: Global Trends – InData Labs
In this infographic, we explore how businesses can implement effective governance frameworks to address AI data privacy. Understanding it is crucial for developing effective strategies that ensure compliance, safeguard customer trust, and leverage AI responsibly. Equip yourself with insights that can drive informed decision-making and position your organization for success in the future of data privacy.
This infographic contains:
- AI and data privacy: Key findings
- Statistics on AI data privacy in today's world
- Tips on how to overcome data privacy challenges
- Benefits of AI data security investments.
Keep up-to-date on how AI is reshaping privacy standards and what this entails for both individuals and organizations.
Andrew Marnell: Transforming Business Strategy Through Data-Driven Insights – Andrew Marnell
With expertise in data architecture, performance tracking, and revenue forecasting, Andrew Marnell plays a vital role in aligning business strategies with data insights. Andrew Marnell’s ability to lead cross-functional teams ensures businesses achieve sustainable growth and operational excellence.