MongoDB for Time Series Data
Principal Technologist and Technical Director
Chris Biow
@chris_biow
#MongoDBTimeSeries
What is Time Series Data?
Time Series
A time series is a sequence of data points, typically
consisting of successive measurements made over a
time interval.
– Wikipedia j.mp/1yLbf1s
(chart: sample data points plotted against a time axis)
Time Series Data is Everywhere
• Financial markets pricing (stock ticks)
• Sensors (temperature, pressure, proximity)
• Industrial fleets (location, velocity, operational)
• Social networks (status updates)
• Mobile devices (calls, texts)
• Systems (server logs, application logs)
Time Series Data is Everywhere
Example: MMS Monitoring
• Tool for managing & monitoring MongoDB systems
– 100+ system metrics visualized and alerted
• 35,000+ MongoDB systems submitting data every 60 seconds
• 90% updates, 10% reads
• ~30,000 updates/second
• ~3.2B operations/day
• 8 x86-64 servers
MMS Monitoring Dashboard
Time Series Data at a Higher Level
• Widely applicable data model
• Applies to several different "data use cases"
• Various schema and modeling options
• Application requirements drive schema design
Time Series Data Considerations
• Arrival rate & ingest performance
• Resolution of raw events
• Resolution needed to support
– Applications
– Analysis
– Reporting
• Data retention policies
Data Retention
• How long is data required?
• Strategies for purging data
– TTL collections (index sketch below)
– Capped collections
– Batch remove({query})
– Drop collection
• Performance
– Can effectively double write load
– Fragmentation and Record Reuse
– Index updates
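A minimal sketch of the TTL strategy (collection and field names are hypothetical); a background thread in mongod removes documents once the indexed date is older than the threshold:
> db.events.createIndex( { date: 1 }, { expireAfterSeconds: 7776000 } )   // ~90 days; use ensureIndex() on older shells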
Application Requirements
Event Resolution
Analysis
– Dashboards
– Analytics
– Reporting
Data Retention Policies
Event and Query Volumes
These requirements drive, in turn: Schema Design, Aggregation Queries, and Cluster Architecture
Our Mission Today
MongoDB for Time Series Data
Develop a nationwide traffic monitoring system
What we want from our data
• Charting and Trending
• Historical & Predictive Analysis
• Real Time Traffic Dashboard
Traffic sensors to monitor interstate conditions
• 16,000 sensors
• Measure
– Speed
– Travel time
– Weather, pavement, and traffic conditions
• Frequency: average one sample per minute
• Support desktop, mobile, and car navigation systems
Other requirements
• Need to keep a 3-year history
• Three data centers
– VA, Chicago, LA
• Need to support 5M simultaneous users
• Peak volume (rush hour)
– Every minute, each user requests the 10-minute average speed for 50 sensors
Master Agenda
• Design a MongoDB application for scale
• Use case: traffic data
• Presentation Components
1. Schema Design
2. Aggregation
3. Cluster Architecture
Schema Design
Considerations
Schema Design Goals
• Store raw event data
• Support analytical queries
• Find best compromise of:
– Memory utilization
– Write performance
– Read/analytical query performance
• Accomplish with realistic amount of hardware
Designing For Reading, Writing, …
• Document per …
– event
– minute (average)
– minute (seconds)
– hour
Document Per Event
{
segId: "I495_mile23",
date: ISODate("2013-10-16T22:07:38.000-0500"),
speed: 63
}
• Familiar pattern from relational databases
• Insert-driven workload
• Aggregations computed at application-level
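For reference, a minimal sketch of the corresponding insert (the collection name is illustrative):
> db.events.insert({
    segId: "I495_mile23",
    date: ISODate("2013-10-16T22:07:38.000-0500"),
    speed: 63
  })   // one document per raw event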
Document Per Minute (Average)
{
segId: "I495_mile23",
date: ISODate("2013-10-16T22:07:00.000-0500"),
speed_count: 18,
speed_sum: 1134,
}
• Pre-aggregate to compute average per minute more easily
• Update-driven workload
• Resolution at the minute-level
• Note: averaging speeds may not be valid for some purposes (average
of averages); used here for simplicity of example.
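A minimal sketch of the per-minute pre-aggregation; the speaker notes suggest findAndModify with $inc, and a plain upsert (shown here with an illustrative collection name) follows the same pattern:
> db.minuteAvg.update(
    { segId: "I495_mile23", date: ISODate("2013-10-16T22:07:00.000-0500") },
    { $inc: { speed_count: 1, speed_sum: 63 } },   // fold this sample into the running count and sum
    { upsert: true }                               // create the minute bucket on its first sample
  )
// average speed for the minute = speed_sum / speed_count, computed at read time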
Document Per Minute (By Second)
{
segId: "I495_mile23",
date: ISODate("2013-10-16T22:07:00.000-0500"),
speed: { 0: 63, 1: 58, …, 58: 66, 59: 64 }
}
• Store per-second data at the minute level
• Update-driven workload
• Pre-allocate structure to avoid document moves
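A sketch of the write path under this schema, assuming the minute bucket is pre-allocated when the minute starts (collection name is illustrative):
// pre-allocate once per minute so later updates never grow (and move) the document
> db.minuteBySecond.insert({
    segId: "I495_mile23",
    date: ISODate("2013-10-16T22:07:00.000-0500"),
    speed: { 0: null, 1: null, /* … */ 59: null }
  })
// set the sample that arrives at second 38 in place
> db.minuteBySecond.update(
    { segId: "I495_mile23", date: ISODate("2013-10-16T22:07:00.000-0500") },
    { $set: { "speed.38": 63 } }
  )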
Document Per Hour (By Second)
{
segId: "I495_mile23",
date: ISODate("2013-10-16T22:00:00.000-0500"),
speed: { 0: 63, 1: 58, …, 3598: 45, 3599: 55 }
}
• Store per-second data at the hourly level
• Update-driven workload
• Pre-allocate structure to avoid document moves
• Updating last second requires 3599 steps
Document Per Hour (By Second)
{
segId: "I495_mile23",
date: ISODate("2013-10-16T22:00:00.000-0500"),
speed: {
0: {0: 47, …, 59: 45},
….
59: {0: 65, …, 59: 66} }
}
• Store per-second data at the hourly level with nesting
• Update-driven workload
• Pre-allocate structure to avoid document moves
• Updating last second requires 59+59 steps
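With the nested layout, dot notation addresses a sample by minute and second, which is why reaching the last second walks at most 59+59 entries (a sketch; collection name is illustrative):
> db.hourBySecond.update(
    { segId: "I495_mile23", date: ISODate("2013-10-16T22:00:00.000-0500") },
    { $set: { "speed.7.38": 63 } }   // minute 7, second 38 of the hour
  )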
Characterizing Write Differences
• Example: data generated every second
• For 1 minute:
– Document Per Event: 60 writes
– Document Per Minute: 1 write, 59 updates
• Transition from insert-driven to update-driven
– Individual writes are smaller
– Performance and concurrency benefits
Characterizing Read Differences
• Example: data generated every second
• Reading data for a single hour requires:
– Document Per Event: 3600 reads
– Document Per Minute: 60 reads
• Read performance is greatly improved
– Optimal with tuned block sizes and read ahead
– Fewer disk seeks
Characterizing Memory Differences
• _id index for 1 billion events:
– Document Per Event: ~32 GB
– Document Per Minute: ~0.5 GB
• _id index plus segId and date index:
– Document Per Event: ~100 GB
– Document Per Minute: ~2 GB
• Memory requirements significantly reduced
– Fewer shards
– Lower-capacity servers
Traffic Monitoring System
Schema
Quick Analysis
Writes
– 16,000 sensors, 1 insert/update per minute
– 16,000 / 60 = 267 inserts/updates per second
Reads
– 5M simultaneous users
– Each requests 10 minute average for 50 sensors every
minute
Tailor your schema to your
application workload
Reads: Impact of Alternative Schemas
Query: find the average speed over the last ten minutes

10 minute average query (documents read per query)
Schema             1 sensor   50 sensors
1 doc per event    10         500
1 doc per 10 min   1.9        95
1 doc per hour     1.3        65

10 minute average query with 5M users
Schema             ops/sec
1 doc per event    42M
1 doc per 10 min   8M
1 doc per hour     5.4M
Writes: Impact of Alternative Schemas

1 Sensor – 1 Hour
Schema       Inserts   Updates
doc/event    60        0
doc/10 min   6         54
doc/hour     1         59

16,000 Sensors – 1 Day
Schema       Inserts   Updates
doc/event    23M       0
doc/10 min   2.3M      21M
doc/hour     0.38M     22.7M
Sample Document Structure
Compound, unique index identifies the individual document
{ _id: ObjectId("5382ccdd58db8b81730344e2"),
segId: "900006",
date: ISODate("2014-03-12T17:00:00Z"),
data: [
{ speed: NaN, time: NaN },
{ speed: NaN, time: NaN },
{ speed: NaN, time: NaN },
...
],
conditions: {
status: "Snow / Ice Conditions",
pavement: "Icy Spots",
weather: "Light Snow"
}
}
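Per the speaker notes, the identifying index is a compound unique index on segId and date; a sketch, using the collection name from the later aggregation examples:
> db.linkData.createIndex( { segId: 1, date: 1 }, { unique: true } )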
Memory: Impact of Alternative Schemas

1 Sensor – 1 Hour
Schema       # of Documents   Index Size (bytes)
doc/event    60               4200
doc/10 min   6                420
doc/hour     1                70

16,000 Sensors – 1 Day
Schema       # of Documents   Index Size
doc/event    23M              1.3 GB
doc/10 min   2.3M             131 MB
doc/hour     0.38M            1.4 MB
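Per the speaker notes, numbers like these come from db.collection.stats(), for example:
> db.linkData.stats().totalIndexSize   // total size of all indexes, in bytes
> db.linkData.stats().indexSizes       // per-index sizes, keyed by index name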
Sample Document Structure
Saves an extra index
{ _id: "900006:14031217",
data: [
{ speed: NaN, time: NaN },
{ speed: NaN, time: NaN },
{ speed: NaN, time: NaN },
...
],
conditions: {
status: "Snow / Ice Conditions",
pavement: "Icy Spots",
weather: "Light Snow"
}
}
{ _id: "900006:14031217",
data: [
{ speed: NaN, time: NaN },
{ speed: NaN, time: NaN },
{ speed: NaN, time: NaN },
...
],
conditions: {
status: "Snow / Ice Conditions",
pavement: "Icy Spots",
weather: "Light Snow"
}
}
Sample Document Structure
Range queries against the same document: /^900006:1403/
The regex must be left-anchored and case-sensitive so it can use the _id index
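A sketch of such a range query against the composite _id (all documents for sensor 900006 in March 2014):
> db.linkData.find( { _id: /^900006:1403/ } )   // left-anchored, case-sensitive regex can use the _id index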
{ _id: "900006:140312",
data: [
{ speed: NaN, time: NaN },
{ speed: NaN, time: NaN },
{ speed: NaN, time: NaN },
...
],
conditions: {
status: "Snow / Ice Conditions",
pavement: "Icy Spots",
weather: "Light Snow"
}
}
Sample Document Structure
Pre-allocated,
60 element array of
per-minute data
Analysis with the Aggregation Framework
Pipelining operations
Piping command line operations: grep | sort | uniq
Piping aggregation operations: $match | $group | $sort
A stream of documents flows through the stages; the final stage emits the result documents
What is the average speed for a given road segment?
> db.linkData.aggregate(
    { $match: { "_id" : /^20484097:/ } },          // select documents on the target segment
    { $project: { "data.speed": 1, segId: 1 } },   // keep only the fields we really need
    { $unwind: "$data" },                          // loop over the array of data points
    { $group: { _id: "$segId", ave: { $avg: "$data.speed" } } }   // use the handy $avg operator
  );
{ "_id" : 20484097, "ave" : 47.067650676506766 }
More Sophisticated Pipelines:
average speed with variance
{ "$project" : {
mean: "$meanSpd",
spdDiffSqrd : {
"$map" : {
"input": {
"$map" : {
"input" : "$speeds",
"as" : "samp",
"in" : { "$subtract" : [ "$$samp", "$meanSpd" ] }
}
},
as: "df", in: { $multiply: [ "$$df", "$$df" ] }
} } } },
{ $unwind: "$spdDiffSqrd" },
{ $group: { _id: mean: "$mean", variance: { $avg: "$spdDiffSqrd" } } }
High Volume Data Feed (HVDF)
High Volume Data Feed (HVDF)
• Framework for time series data
• Validate, store, aggregate, query, purge
• Simple REST API
• Batch ingest
• Tasks
– Indexing
– Data retention
High Volume Data Feed (HVDF)
• Customized via plugins
– Time slicing into collections, purging
– Storage granularity of raw events
– _id generation
– Interceptors
• Open source
– https://github.com/10gen-labs/hvdf
Summary
• Tailor your schema to your application workload
• Bucketing/aggregating events will
– Improve write performance: inserts → updates
– Improve analytics performance: fewer document reads
– Reduce index size → reduce memory requirements
• Aggregation framework for analytic queries
Questions?
Editor's Notes
  • #4: Data produced at regular intervals, ordered in time. Want to capture this data and build an application.
  • #7: Need to clarify the new flavors of MMS?
  • #11: A special index type supports the implementation of TTL collections. TTL relies on a background thread in mongod that reads the date-typed values in the index and removes expired documents from the collection.
  • #17: Wind speed and direction sensor; antenna for communications; traffic speed and traffic count sensor; pan-tilt-zoom color camera; precipitation and visibility sensor; air temperature and relative humidity sensor; road surface temperature sensor and sub-surface temperature sensor below the pavement
  • #18: 511ny.org; many states have 511 systems, with data provided by dialing 511 and/or via web app
  • #23: Assumptions/requirements for what we're going to spec out for this imaginary time series application
  • #24: Should I axe the 3 data centers bullet since we don't go into replication?
  • #30: Use findAndModify with the $inc operator 63 mph average
  • #34: *** clarify 2nd to last bullet
  • #36: How did we get these numbers? db.collection.stats(): totalIndexSize, indexSizes[]
  • #38: Point out 1 doc per minute granularity, not per second 5M users performing 10 minute average
  • #40: Need to practice this
  • #42: Compound unique index on segId & date update field used to identify new documents for aggregation
  • #43: Need to redo these index sizes based on different data types for segId?