Sizing Your HBase Cluster
Lars George | @larsgeorge 
EMEA Chief Architect @ Cloudera
2 
Agenda 
• Introduction 
• Technical Background/Primer 
• Best Practices 
• Summary 
©2014 Cloudera, Inc. All rights reserved.
3 
Who I am… 
Lars George [EMEA Chief Architect] 
• Clouderan since October 2010 
• Hadooper since mid 2007 
• HBase/Whirr Committer (of Hearts) 
• github.com/larsgeorge 
©2014 Cloudera, Inc. All rights reserved.
4 
Bruce Lee: ”As you think, so shall you become.” 
©2014 Cloudera, Inc. All rights reserved.
5 
Introduction 
©2014 Cloudera, Inc. All rights reserved.
6 
HBase Sizing Is... 
• Making the most out of the cluster you have by... 
– Understanding how HBase uses low-level resources 
– Helping HBase understand your use-case by configuring it appropriately - and/or - 
– Designing the use-case to help HBase along 
• Being able to gauge how many servers are needed for a given use-case
7 
Technical Background 
“To understand your fear is the beginning of 
really seeing…” 
— Bruce Lee 
©2014 Cloudera, Inc. All rights reserved.
8 
HBase Dilemma 
Although HBase can host many applications, they may require completely opposite 
features 
Events Entities 
Time Series Message Store
9 
Competing Resources 
• Reads and Writes compete for the same low-level resources 
– Disk (HDFS) and Network I/O 
– RPC Handlers and Threads 
– Memory (Java Heap) 
• Otherwise they exercise completely separate code paths
10 
Memory Sharing 
• By default every region server divides its memory (i.e. the given maximum heap) into 
– 40% for in-memory stores (write ops) 
– 20% (40%) for block caching (read ops) 
– The remaining space (here 40% or 20%) goes towards usual Java heap usage 
• Objects etc. 
• Region information (HFile metadata) 
• The memory shares need to be tweaked to match the use-case (see the sketch below)
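A minimal sketch of the heap split described above, assuming a 10GB heap and the stock defaults; the actual shares are governed by the hbase.regionserver.global.memstore settings and hfile.block.cache.size:

```java
// Sketch: how a region server's heap is carved up with default settings.
public class HeapSplit {
    public static void main(String[] args) {
        double heapGb = 10.0;            // region server -Xmx
        double memstoreShare = 0.40;     // write path: all memstores combined
        double blockCacheShare = 0.20;   // read path: block cache
        double memstoreGb = heapGb * memstoreShare;
        double blockCacheGb = heapGb * blockCacheShare;
        double restGb = heapGb - memstoreGb - blockCacheGb;
        System.out.printf("Memstores:   %.1f GB%n", memstoreGb);
        System.out.printf("Block cache: %.1f GB%n", blockCacheGb);
        System.out.printf("Other heap:  %.1f GB (objects, HFile metadata, ...)%n", restGb);
    }
}
```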
11 
Writes 
• The cluster size is often determined by the write performance 
– Simple schema design implies writing to all regions (entities) or only one region (events) 
• HBase behaves like a log-structured merge tree (sketched below): 
– Store mutations in the in-memory store and the write-ahead log 
– Flush out aggregated, sorted maps at a specified threshold - or - when under pressure 
– Discard logs with no pending edits 
– Perform regular compactions of store files
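The following toy sketch illustrates the LSM write path just listed (a sorted in-memory map plus an append-only log, flushed as an already-sorted file once a threshold is hit); it is an illustration of the principle, not the HBase implementation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

// Toy LSM write path: append to a log, buffer sorted in memory, flush at a threshold.
public class ToyLsmStore {
    private final TreeMap<String, String> memstore = new TreeMap<>();
    private final List<String> writeAheadLog = new ArrayList<>();
    private final List<TreeMap<String, String>> flushedFiles = new ArrayList<>();
    private final int flushThreshold;

    ToyLsmStore(int flushThreshold) { this.flushThreshold = flushThreshold; }

    void put(String key, String value) {
        writeAheadLog.add(key + "=" + value);      // durability first
        memstore.put(key, value);                  // then the sorted in-memory store
        if (memstore.size() >= flushThreshold) flush();
    }

    private void flush() {
        flushedFiles.add(new TreeMap<>(memstore)); // write out an already-sorted "store file"
        memstore.clear();
        writeAheadLog.clear();                     // edits are persisted, the log can be discarded
    }

    public static void main(String[] args) {
        ToyLsmStore store = new ToyLsmStore(3);
        store.put("row2", "b");
        store.put("row1", "a");
        store.put("row3", "c");
        System.out.println("Flushed files: " + store.flushedFiles);
    }
}
```

Real store files are further merged by background compactions, which is where the sizing concerns on the following slides come from.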
12 
Writes: Flushes and Compactions 
[Chart: store file size (MB, 0-1000) over time, older to newer, showing repeated flushes and compactions]
13 
Flushes 
• Every mutation call (put, delete etc.) causes a check for a flush 
• If the threshold is met, flush the memstore to a file on disk and schedule a compaction 
– Try to compact newly flushed files quickly 
• The compaction returns - if necessary - the point at which a region should be split
14 
Compaction Storms 
• Premature flushing because of # of logs or memory pressure 
– Files will be smaller than the configured flush size 
• The background compactions are hard at work merging small flush files into the 
existing, larger store files 
– Rewrite hundreds of MB over and over
15 
Dependencies 
• Flushes happen across all stores/column families, even if just one triggers it 
• The flush size is compared to the size of all stores combined 
– Many column families dilute the size 
– Example: 55MB + 5MB + 4MB
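A small sketch of this check, using a 64MB flush size (hbase.hregion.memstore.flush.size) to match the 55MB + 5MB + 4MB example; note that all families are flushed together, so the small ones produce tiny store files:

```java
// Sketch: the per-region flush decision looks at the combined size of all
// column family memstores, not at each family individually.
public class FlushCheck {
    public static void main(String[] args) {
        long flushSizeMb = 64;               // hbase.hregion.memstore.flush.size
        long[] storeSizesMb = {55, 5, 4};    // one entry per column family
        long combined = 0;
        for (long size : storeSizesMb) combined += size;
        boolean flush = combined >= flushSizeMb;
        System.out.println("Combined memstore size: " + combined
                + " MB, flush triggered? " + flush);
        // All three families are flushed at once, producing two very small
        // store files alongside the 55MB one - extra compaction work later.
    }
}
```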
16 
Write-Ahead Log 
• Currently only one per region server 
– Shared across all stores (i.e. column families) 
– Synchronized on file append calls 
• Work is being done on mitigating this 
– WAL Compression 
– Multithreaded WAL with Ring Buffer 
– Multiple WALs per region server ➜ Start more than one region server per node?
17 
Write-Ahead Log (cont.) 
• Size set to 95% of default block size 
– 64MB or 128MB, but check config! 
• Keep number low to reduce recovery time 
– Limit set to 32, but can be increased 
• Increase size of logs - and/or - increase the number of logs before blocking 
• Compute number based on fill distribution and flush frequencies
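A rough sketch of that computation, assuming a 128MB block size, the usual 95% roll factor, and the default limit of 32 logs (hbase.regionserver.hlog.blocksize, hbase.regionserver.logroll.multiplier, and hbase.regionserver.maxlogs in the configuration):

```java
// Sketch: how much outstanding edit data the WALs can hold before HBase
// starts force-flushing regions to free up old logs.
public class WalCapacity {
    public static void main(String[] args) {
        double blockSizeMb = 128;      // WAL block size (defaults to the HDFS block size)
        double rollMultiplier = 0.95;  // the log is rolled at 95% of the block size
        int maxLogs = 32;              // number of logs kept before blocking/flushing
        double walSizeMb = blockSizeMb * rollMultiplier;   // ~121.6 MB per log
        double totalWalMb = walSizeMb * maxLogs;           // ~3.8 GB of outstanding edits
        System.out.printf("WAL size: %.1f MB, capacity before forced flushes: %.1f MB%n",
                walSizeMb, totalWalMb);
        // If the memstore capacity (heap x memstore share) exceeds this number,
        // the logs become the limiting factor and premature flushes follow.
    }
}
```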
18 
Write-Ahead Log (cont.) 
• Writes are synchronized across all stores 
– A large cell in one family can stop all writes of another 
– In this case the RPC handlers behave in a binary fashion, i.e. they all work or they all block 
• Can be bypassed on writes, but means no real durability and no replication 
– Maybe use coprocessor to restore dependent data sets (preWALRestore)
19 
Some Numbers 
• Typical write performance of HDFS is 35-50MB/s 
Cell Size OPS 
0.5MB 70-100 
100KB 350-500 
10KB 3500-5000 ?? 
1KB 35000-50000 ???? 
This is way too high in practice - contention!
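These figures are simply the assumed HDFS throughput divided by the cell size, as this quick sketch shows:

```java
// Sketch: theoretical ops/s = raw HDFS write throughput / cell size.
public class TheoreticalOps {
    public static void main(String[] args) {
        double throughputMbPerSec = 35;           // lower bound from the slide (35-50 MB/s)
        double[] cellSizesKb = {512, 100, 10, 1};
        for (double kb : cellSizesKb) {
            double ops = throughputMbPerSec * 1024 / kb;
            System.out.printf("%6.0f KB cells -> ~%.0f ops/s%n", kb, ops);
        }
        // As the slide warns, the small-cell numbers are far too optimistic:
        // contention and per-call overhead dominate long before raw disk speed does.
    }
}
```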
20 
Some More Numbers 
• Under real world conditions the rate is less, more like 15MB/s or less 
– Thread contention and serialization overhead cause a massive slowdown 
Cell Size OPS 
0.5MB 10 
100KB 100 
10KB 800 
1KB 6000
21 
Write Performance 
• There are many factors to the overall write performance of a cluster 
– Key Distribution ➜ Avoid region hotspot 
– Handlers ➜ Do not pile up too early 
– Write-ahead log ➜ Bottleneck #1 
– Compactions ➜ Badly tuned, they can cause ever-increasing background noise
22 
Cheat Sheet 
• Ensure you have enough or large enough write-ahead logs 
• Ensure you do not oversubscribe available memstore space 
• Ensure the flush size is set large enough, but not too large 
• Check write-ahead log usage carefully 
• Enable compression to store more data per node 
• Tweak compaction algorithm to peg background I/O at some level 
• Consider putting uneven column families in separate tables 
• Check metrics carefully for block cache, memstore, and all queues
23 
Example: Write to All Regions 
• Java Xmx heap at 10GB 
• Memstore share at 40% (default) 
– 10GB Heap x 0.4 = 4GB 
• Desired flush size at 128MB 
– 4GB / 128MB = 32 regions max! 
• For a WAL size of 128MB x 0.95 
– 4GB / (128MB x 0.95) = ~33 partially uncommitted logs to keep around 
• Region size at 20GB 
– 20GB x 32 regions = 640GB raw storage used
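The same numbers as a compact calculation (0.4 and 0.95 being the default memstore share and WAL roll factor used above):

```java
// Sketch: reproducing the "write to all regions" sizing example.
public class WriteAllRegionsExample {
    public static void main(String[] args) {
        double heapGb = 10;
        double memstoreShare = 0.4;
        double flushSizeMb = 128;
        double walSizeMb = 128 * 0.95;
        double regionSizeGb = 20;

        double memstoreGb = heapGb * memstoreShare;                  // 4 GB
        double maxActiveRegions = memstoreGb * 1024 / flushSizeMb;   // 32 regions
        double logsToKeep = memstoreGb * 1024 / walSizeMb;           // ~33 logs
        double rawStorageGb = maxActiveRegions * regionSizeGb;       // 640 GB

        System.out.printf("Memstore space:   %.0f GB%n", memstoreGb);
        System.out.printf("Active regions:   %.0f%n", maxActiveRegions);
        System.out.printf("WALs to keep:     ~%.1f%n", logsToKeep);
        System.out.printf("Raw storage used: %.0f GB%n", rawStorageGb);
    }
}
```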
24 
Notes 
• Compute memstore sizes based on number of written-to regions x flush size 
• Compute number of logs to keep based on fill and flush rate 
• Ultimately the capacity is driven by 
– Java Heap 
– Region Count and Size 
– Key Distribution
25 
Reads 
• Locate and route request to appropriate region server 
– Client caches information for faster lookups 
• Eliminate store files if possible using time ranges or Bloom filter 
• Try block cache, if block is missing then load from disk
26 
Seeking with Bloom Filters
27 
Writes: Where’s the Data at? 
[Chart: store file size (MB, 0-1000) over time, older to newer, comparing existing row mutations with unique row inserts]
28 
Block Cache 
• Use exported metrics to see effectiveness of block cache 
– Check fill and eviction rate, as well as hit ratios ➜ random reads are not ideal 
• Tweak up or down as needed, but watch overall heap usage 
• You absolutely need the block cache 
– Set to 10% at least for short term benefits
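When adjusting the cache share, keep the combined memstore and block cache fractions well below the total heap; HBase rejects configurations where the two shares together exceed roughly 80% of the heap. A hedged sketch of that sanity check (the 0.30 cache share is a hypothetical value for a read-heavier load):

```java
// Sketch: sanity-check a read-heavier memory split before rolling it out.
// The shares correspond to the hbase.regionserver.global.memstore settings
// and hfile.block.cache.size.
public class CacheTuningCheck {
    public static void main(String[] args) {
        double memstoreShare = 0.40;
        double blockCacheShare = 0.30;
        double combined = memstoreShare + blockCacheShare;
        if (combined > 0.8) {
            System.out.println("Too aggressive: leave headroom for the rest of the heap");
        } else {
            System.out.printf("OK: %.0f%% of the heap left for everything else%n",
                    (1 - combined) * 100);
        }
    }
}
```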
29 
Testing: Scans 
HBase scan performance 
• Use available tools to test 
• Determine raw and KeyValue read performance 
– Raw is just bytes, while KeyValue means block parsing 
• Insert data using YCSB, then compact table 
– Single region enforced 
• Two test cases 
– Small data: 1 column with 1 byte value 
– Large(r) data: 1 column with 1KB value 
• About same size for both in total: 15GB 
©2014 Cloudera, Inc. All rights reserved.
30 
Testing: Scans 
©2014 Cloudera, Inc. All rights reserved.
31 
Scan Row Range 
• Set start and end key to limit the scan size (see the example below)
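A minimal sketch of such a range-limited scan, assuming an HBase 1.x client (setStartRow/setStopRow; later clients use withStartRow/withStopRow). The table name 'events' and the key prefix are made up for illustration:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch: limit a scan to a row range so only the relevant region(s) are touched.
public class RangeScan {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("events"))) {
            Scan scan = new Scan();
            scan.setStartRow(Bytes.toBytes("evt00042"));  // inclusive
            scan.setStopRow(Bytes.toBytes("evt00043"));   // exclusive
            scan.setCaching(500);                         // fewer RPCs for larger ranges
            try (ResultScanner scanner = table.getScanner(scan)) {
                for (Result result : scanner) {
                    System.out.println(Bytes.toString(result.getRow()));
                }
            }
        }
    }
}
```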
32 
Best Practices 
“If you spend too much time thinking about a thing, you'll never get it done.” 
— Bruce Lee 
©2014 Cloudera, Inc. All rights reserved.
33 
How to Plan 
Advice on 
• Number of nodes 
• Number of disks and total disk capacity 
• RAM capacity 
• Region sizes and count 
• Compaction tuning 
©2014 Cloudera, Inc. All rights reserved.
34 
Advice on Nodes 
• Use previous example to compute effective storage based on heap size, region 
count and size 
– 10GB heap x 0.4 / 128MB x 20GB = 640GB, if all regions are active 
– Address more storage with read-from-only regions 
• Typical advice is to use more nodes with fewer, smaller disks (6 x 1TB SATA or 
600GB SAS, or SSDs) 
• CPU is not an issue, I/O is (even with compression) 
©2014 Cloudera, Inc. All rights reserved.
35 
Advice on Nodes 
• Memory is not an issue; heap sizes stay small because of Java garbage collection limitations 
– Up to 20GB has been used 
– Newer versions of Java should help 
– Use off-heap cache 
• Current servers typically have 48GB+ memory 
©2014 Cloudera, Inc. All rights reserved.
36 
Advice on Tuning 
• Trade off throughput against size of single data points 
– This might cause schema redesign 
• Trade off read performance against write amplification 
– Advise users to understand read/write performance and background write amplification 
Ø This drives the number of nodes needed! 
©2014 Cloudera, Inc. All rights reserved.
37 
Advice on Cluster Sizing 
• Compute the number of nodes needed based on 
– Total storage needed 
– Throughput required for both reads and writes 
• Assume ≈15MB/s minimum for each read and write 
– Increasing the KeyValue sizes improves this 
©2014 Cloudera, Inc. All rights reserved.
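A rough sketch of that estimate; the input values here are hypothetical, the 15MB/s per node comes from the earlier throughput slides, and the 640GB per node from the write-to-all-regions example:

```java
// Sketch: size the cluster from storage need and required write throughput,
// then plan for whichever demands more nodes.
public class ClusterSizeEstimate {
    public static void main(String[] args) {
        double rawDataTb = 20;               // hypothetical: data to keep, before replication
        double replicationFactor = 3;        // HDFS default
        double usableStoragePerNodeGb = 640; // from the write-to-all-regions example

        double requiredWriteMbPerSec = 200;  // hypothetical ingest rate
        double writeMbPerSecPerNode = 15;    // conservative per-node figure from the slides

        double nodesForStorage =
                Math.ceil(rawDataTb * 1024 * replicationFactor / usableStoragePerNodeGb);
        double nodesForThroughput =
                Math.ceil(requiredWriteMbPerSec / writeMbPerSecPerNode);

        System.out.println("Nodes for storage:    " + (int) nodesForStorage);
        System.out.println("Nodes for throughput: " + (int) nodesForThroughput);
        System.out.println("Plan for:             "
                + (int) Math.max(nodesForStorage, nodesForThroughput));
    }
}
```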
38 
Example: Twitter Firehose 
©2014 Cloudera, Inc. All rights reserved.
39 
Example: Consume Data 
©2014 Cloudera, Inc. All rights reserved.
40 
HBase Heap Usage 
• The overall addressable amount of data is driven by heap size 
– Only read-from regions need space for indexes and filters 
– Written-to regions also need MemStore space 
• Java heap space is still limited, as garbage collections will cause pauses 
– Typically up to 20GB heap 
– Or invest in pause-less GC
41 
Summary 
“All fixed set patterns are incapable of 
adaptability or pliability. The truth is 
outside of all fixed patterns.” 
— Bruce Lee 
©2014 Cloudera, Inc. All rights reserved.
42 
WHHAT BRUCE? IT DEPENDS? ☹ 
©2014 Cloudera, Inc. All rights reserved.
43 
Checklist 
To plan for the size of an HBase cluster you have to: 
• Know the use-case 
– Read/write mix 
– Expected throughput 
– Retention policy 
• Optimize the schema and compaction strategy 
– Devise a schema that allows for only some regions being written to 
• Take “known” numbers to compute cluster size 
©2014 Cloudera, Inc. All rights reserved.
Thank you 
@larsgeorge