Adam Kawa
Data Engineer @ Spotify
Hadoop Operations Powered By … Hadoop
1. How many times has Coldplay been streamed this
month?
2. How many times was “Get Lucky” streamed during
first 24h?
3. Who was the most popular artist in NYC last week?
Labels, Advertisers, Partners
1. What song to recommend Jay-Z when he wakes
up?
2. Is Adam Kawa bored with Coldplay today?
3. How to get Arun to subscribe to Spotify
Premium?
Data Scientists
(Big) Data At Spotify
■ Data generated by +24M monthly active users
and for users!
- 2.2 TB of compressed data from users per day
- 64 TB of data generated in Hadoop each day
(triplicated)
Data Infrastructure At Spotify
■ Apache Hadoop YARN
■ Many other systems including
- Kafka, Cassandra, Storm, Luigi in production
- Giraph, Tez, Spark in evaluation mode
■ Probably the largest commercial Hadoop cluster in
Europe!
- 694 heterogeneous nodes
- 14.25 PB of data consumed
- ~12,000 jobs each day
Apache Hadoop
March 2013
Tricky questions were asked!
1. How many servers do you need to buy to survive
one year?
2. What will you do to use them efficiently?
3. If we agree, don’t come back to us this year! OK?
Finance Department
■ One of the Data Engineers responsible for answering
these questions!
Adam Kawa
■ Examples of how to analyze various metrics, logs
and files
- generated by Hadoop
- using Hadoop
- to understand Hadoop
- to avoid guesstimates!
The Topic Of This Talk
■ This knowledge can be useful to
- measure how fast HDFS is growing
- define an empirical retention policy
- measure the performance of jobs
- optimize the scheduler
- and more
What To Use It For
1. Analyzing HDFS
2. Analyzing MapReduce and YARN
Agenda
HDFS
Garbage Collection On
The NameNode
“ We don’t have any full GC pauses on the NN.
Our GC stops the NN for less than 100 msec,
on average!
:) ”
Adam Kawa @ Hadoop User Mailing List
December 16th, 2013
“ Today, between 12:05 and 13:00
we had 5 full GC pauses on the NN.
They stopped the NN for 34min47sec in total!
:( ”
Adam Kawa @ Spotify office, Stockholm
January 13th, 2014
What happened
between 12:05 and 13:00?
The NameNode was receiving the block reports from
all the DataNodes
Quick Answer!
1. We started the NN when the DNs were running
2. 502 DNs immediately registered to the NN
■ Within 1.2 sec (based on logs from the DNs)
3. 502 DNs started sending the block reports
■ dfs.blockreport.initialDelay = 30 minutes
■ 17 block reports per minute (on average)
■ +831K blocks in each block report (on average)
4. This generated a high memory pressure on the NN
■ The NN ran into Full GC !!!
Detailed Answer
Hadoop told us everything!
■ Enable GC logging for the NameNode
■ Visualize it, e.g. with GCViewer
■ Analyze memory usage patterns, GC pauses,
misconfiguration
Collecting The GC Stats
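For illustration, a minimal sketch of both steps (not from the talk): the JVM flags and log path are only examples, and the parsing assumes a HotSpot-style GC log, whose exact format depends on your JVM version and flags. GCViewer gives you the same numbers interactively; a script like this is handy for alerting.

```python
import re
import sys

# Enable GC logging first, e.g. in hadoop-env.sh (flags and path are illustrative):
#   HADOOP_NAMENODE_OPTS="$HADOOP_NAMENODE_OPTS -verbose:gc -XX:+PrintGCDetails \
#       -XX:+PrintGCTimeStamps -Xloggc:/var/log/hadoop/namenode-gc.log"

PAUSE_RE = re.compile(r"([\d.]+) secs\]")

def full_gc_pauses(gc_log_path):
    """Yield the pause length (in seconds) of every 'Full GC' event in the log."""
    with open(gc_log_path) as gc_log:
        for line in gc_log:
            if "Full GC" not in line:
                continue
            pauses = PAUSE_RE.findall(line)
            if pauses:
                # The last 'N.NNN secs]' on the line is the total stop-the-world pause.
                yield float(pauses[-1])

if __name__ == "__main__":
    pauses = list(full_gc_pauses(sys.argv[1]))
    print("%d full GC pauses, %.1f sec in total" % (len(pauses), sum(pauses)))
```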
Chart: the blue line shows the heap used by the NN over time.
Loading FsImage, then replaying edit logs, then the block reports arrive:
after the first block report, 25 and then 131 block reports are processed,
followed by 5 min 39 sec of Full GC, 40 more block reports, and further Full GC pauses !!!
The CMS collector starts at 98.5% of heap…
We fixed that!
What happened in HDFS
between mid-December 2013
and mid-January 2014?
HDFS
HDFS Metadata
■ A persistent checkpoint of HDFS metadata
■ It contains information about files + directories
■ A binary file
HDFS FsImage File
■ Converts the content of FsImage to text formats
- e.g. a tab-separated file or XML
■ Output is easily analyzed by any tools
- e.g. Pig, Hive
HDFS Offline Image Viewer
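At Spotify the dump is analyzed with Pig/Hive; purely for illustration, here is a hedged local sketch. The hdfs oiv invocation and the column layout of the Delimited output are assumptions to verify against your Hadoop version (depending on the release, the right tool may be hdfs oiv or hdfs oiv_legacy).

```python
import csv
from collections import defaultdict

# Dump the FsImage first, e.g.:
#   hdfs oiv -i fsimage_00000000000123456789 -o fsimage.tsv -p Delimited
#
# Assumed column order of the Delimited output (verify on your version):
#   path, replication, mtime, atime, block size, blocks, file size,
#   ns quota, ds quota, permissions, user, group
MTIME, FILE_SIZE = 2, 6

def bytes_added_per_month(tsv_path):
    """Sum file sizes by modification month ('YYYY-MM'): a rough HDFS growth curve."""
    added = defaultdict(int)
    with open(tsv_path) as dump:
        for row in csv.reader(dump, delimiter="\t"):
            try:
                size = int(row[FILE_SIZE])
            except (IndexError, ValueError):
                continue                       # header, directory entry or malformed row
            added[row[MTIME][:7]] += size      # mtime assumed to start with 'YYYY-MM'
    return added

if __name__ == "__main__":
    for month, size in sorted(bytes_added_per_month("fsimage.tsv").items()):
        print("%s\t%.2f TB" % (month, size / 1024.0 ** 4))
```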
50% of the data was created during the last 3 months
Anything interesting?
1. NO data added that day
2. Many more files added after the migration to YARN
Where did the small files come from?
■ An interactive visualization of data in HDFS
Twitter's HDFS-DU
/app-logs: avg. file size = 253 KB, no. of dirs = 595K, no. of files = 60.6M
■ Statistics broken down by user/group name
■ Candidates for duplicate datasets
■ Inefficient MapReduce jobs
- Small files
- Skewed files
More Uses Of FsImage File
■ You can analyze FsImage to learn how fast HDFS
grows
■ You can combine it with “external” datasets
- number of daily/monthly active users
- total size of logs generated by users
- number of queries / day run by data analysts
Advanced HDFS Capacity Planning
■ You can also use the “trend” button in Ganglia
Simplified HDFS Capacity Planning
If we do NOTHING, we might fill the cluster in September…
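A back-of-the-envelope version of that trend line. All numbers below are made up for illustration; in practice the daily samples would come from Ganglia or from the daily FsImage dumps.

```python
import numpy as np

# Daily HDFS usage samples in TB (made-up numbers for illustration).
days = np.arange(90)                          # day 0 = 90 days ago
used_tb = 9000 + 25 * days + np.random.normal(0, 40, size=days.size)
capacity_tb = 14500                           # assumed usable capacity of the cluster

slope, intercept = np.polyfit(days, used_tb, 1)   # used ~= slope * day + intercept
day_full = (capacity_tb - intercept) / slope      # day index at which the fit hits capacity
print("Growing by ~%.0f TB/day; cluster full in ~%.0f days"
      % (slope, day_full - days[-1]))
```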
What will we do to survive longer than September?
HDFS
Retention
Question
How many days after creation is a dataset no longer accessed?
Retention Policy
Question
How many days after creation is a dataset no longer accessed?
Possible Solution
■ You can use modification_time and access_time
from FsImage
Empirical Retention Policy
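A hedged sketch of that idea, reusing the Delimited FsImage dump from before. The column positions and timestamp format are assumptions to verify; note also that access times are only useful if dfs.namenode.accesstime.precision is not set to 0 (the default precision is one hour).

```python
import csv
from collections import defaultdict
from datetime import datetime

# Assumed columns and timestamp format of the Delimited FsImage dump.
PATH, MTIME, ATIME = 0, 2, 3
TIME_FMT = "%Y-%m-%d %H:%M"

def days_until_last_access(tsv_path, depth=2):
    """Per dataset prefix (e.g. /metadata/artist), the largest observed gap in days
    between creation (modification time as a proxy) and last access."""
    gap = defaultdict(int)
    with open(tsv_path) as dump:
        for row in csv.reader(dump, delimiter="\t"):
            try:
                mtime = datetime.strptime(row[MTIME], TIME_FMT)
                atime = datetime.strptime(row[ATIME], TIME_FMT)
            except (IndexError, ValueError):
                continue                      # header, directory entry or malformed row
            prefix = "/".join(row[PATH].split("/")[:depth + 1])
            gap[prefix] = max(gap[prefix], (atime - mtime).days)
    return gap

if __name__ == "__main__":
    for prefix, days in sorted(days_until_last_access("fsimage.tsv").items()):
        print("%s\tstill read %d days after creation" % (prefix, days))
```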
■ Logs and core datasets are accessed even many
years after creation
■ Many reports are not accessed even an hour after
creation
■ Most intermediate datasets are needed for less than a
week
■ 10% of data has not been accessed for a year
Our Retention Facts
HDFS
Hot Datasets
■ Some files/directories will be accessed more often
than others e.g.:
- fresh logs, core datasets, dictionary files
Idea
■ To process it faster, increase
its replication factor while it’s “hot”
■ To save disk space, decrease
its replication factor when it becomes “cold”
Hot Dataset
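A minimal sketch of that idea using the standard setrep command; the path, the replication factors and the trigger are illustrative, and in practice this would be driven by a workflow or by the audit-log analysis on the next slides.

```python
import subprocess

def set_replication(path, replication):
    """Change the replication factor of all files under path; -w waits until done."""
    subprocess.check_call(["hdfs", "dfs", "-setrep", "-w", str(replication), path])

# While the dataset is "hot": more replicas mean more data-local map tasks.
set_replication("/metadata/artist/2013-11-27", 5)

# ... and when it goes "cold": back to the default of 3, or lower, to reclaim space.
set_replication("/metadata/artist/2013-11-27", 2)
```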
How to find them?
■ Logs all filesystem access requests sent to the NN
■ Easy to parse and aggregate
- a tab-separated line for each request
HDFS Audit Log
2014-01-18 15:16:12,023 INFO FSNamesystem.audit: allowed=true ugi=kawaa (auth:SIMPLE) ip=/10.254.28.4 cmd=open src=/metadata/artist/2013-11-27/part-00061.avro dst=null perm=null
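For illustration, a minimal sketch of that aggregation as a local script; at this log volume you would run it as a Pig/Hive/MapReduce job over the collected audit logs instead.

```python
import re
from collections import Counter

FIELD_RE = re.compile(r"(\w+)=(\S+)")   # key=value fields of an audit log line

def hottest_paths(audit_log_path, depth=2, top=20):
    """Count cmd=open requests per path prefix, e.g. /metadata/artist."""
    opens = Counter()
    with open(audit_log_path) as log:
        for line in log:
            fields = dict(FIELD_RE.findall(line))
            if fields.get("cmd") == "open" and "src" in fields:
                prefix = "/".join(fields["src"].split("/")[:depth + 1])
                opens[prefix] += 1
    return opens.most_common(top)

for path, count in hottest_paths("hdfs-audit.log"):
    print("%8d  %s" % (count, path))
```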
■ JAR files stored in HDFS and used by Pig scripts
■ A dictionary file with metadata about log messages
■ Core datasets: playlists, users, top tracks
Our Hot Datasets
YARN
MapReduce Jobs
Autotuning
■ There are jobs that we schedule regularly
- e.g. top lists for each country
Idea
■ Before submitting a job the next time, use statistics
from its previous executions
- To learn about its historical performance
- To tweak its configuration settings
Recurring MapReduce Jobs
We implemented
■ A pre-execution hook that automatically sets
- Maximum size of an input split
- Number of Reduce tasks
■ More settings can be tweaked
- Memory
- Combiner
Jobs Autotuning
■ Here, the goal is that a task runs approx. 10 min, on
average
- Inspired by LinkedIn at Hadoop Summit 2013
- Helpful in extreme cases (short/long running tasks)
A Small PoC ;)
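A hedged sketch of such a hook. The historical numbers would come from a job-history store (e.g. the Avro files described later); the field names and example values are made up, while the two configuration properties and the ~10-minute target follow the slides.

```python
TARGET_TASK_SECONDS = 10 * 60   # the ~10-minute target mentioned above

def tuned_settings(hist, input_bytes):
    """Derive split size and reducer count from the previous run of the same job.

    hist: totals from the last execution (made-up field names): map input bytes,
    map task seconds, reduce input bytes, reduce task seconds.
    """
    # Size splits so that one map task takes ~10 minutes at the observed throughput.
    map_rate = hist["map_input_bytes"] / float(hist["map_task_seconds"])
    split_size = int(map_rate * TARGET_TASK_SECONDS)

    # Scale the previous reduce input by the input growth, then aim reducers
    # at the same target duration.
    growth = input_bytes / float(hist["map_input_bytes"])
    reduce_rate = hist["reduce_input_bytes"] / float(hist["reduce_task_seconds"])
    reducers = max(1, int(growth * hist["reduce_input_bytes"]
                          / (reduce_rate * TARGET_TASK_SECONDS)))

    return {"mapreduce.input.fileinputformat.split.maxsize": str(split_size),
            "mapreduce.job.reduces": str(reducers)}

previous = {"map_input_bytes": 1.0e12, "map_task_seconds": 2.0e5,
            "reduce_input_bytes": 2.0e11, "reduce_task_seconds": 2.0e4}
print(tuned_settings(previous, input_bytes=1.3e12))
```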
Another Example - Job Optimized Over Time
Even perfect manual settings
may become outdated
when an input dataset grows!
YARN
MapReduce Statistics
■ Extracts the statistics from historical MapReduce jobs
- Supports MRv1 and YARN
■ Stores them as Avro files
- Enables easy analysis using e.g. Pig and Hive
■ Similar projects
- Replephant, hRaven
Zlatanitor = Zlatan + Monitor
Zlatanitor
A Slow Node
- 40% lower throughput than the average
NIC negotiated 100MbE
instead of 1GbE
According to Facebook
■ ”Small percentage of machines are responsible for
large percentage of failures”
- Worse performance
- More alerts
- More manual intervention
Repeat Offenders
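A hedged sketch of flagging such repeat offenders; the per-host throughput numbers are assumed to come from task counters in the job history (e.g. HDFS bytes read divided by task time), and the hostnames are made up.

```python
def flag_slow_hosts(host_throughput, threshold=0.6):
    """Hosts whose average task throughput is more than 40% below the cluster average."""
    cluster_avg = sum(host_throughput.values()) / float(len(host_throughput))
    return sorted(host for host, rate in host_throughput.items()
                  if rate < threshold * cluster_avg)

# Made-up example: worker003 looks like the 100MbE node from the earlier slide.
stats = {"worker001": 95e6, "worker002": 88e6, "worker003": 11e6, "worker004": 91e6}
print(flag_slow_hosts(stats))   # -> ['worker003']
```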
Adding nodes to the cluster
increases performance.
Sometimes, removing (crappy) nodes
does too !
Fixing slow and failing tasks as well!
YARN
Application Logs
■ With YARN, application logs can be aggregated to HDFS
- They are stored as TFiles … :(
- Small and many of them!
Location Of Application Logs
■ Frequent exceptions and bugs
- Just looking at the last line of stderr shows a lot!
■ Possible optimizations
- Memory and size of map input buffer
What Might Be Checked
a) AttributeError: 'int' object has no attribute 'iteritems'
b) ValueError: invalid literal for int() with base 10: 'spotify'
c) ValueError: Expecting , delimiter: line 1 column 3257 (char 3257)
d) ImportError: No module named db_statistics
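A minimal sketch of the "last line of stderr" trick, assuming the aggregated logs have already been dumped to local files (e.g. with yarn logs -applicationId <app id>); the glob pattern is illustrative.

```python
import glob
from collections import Counter

def top_stderr_last_lines(pattern, top=10):
    """Histogram of the last non-empty line of each stderr dump: a cheap failure report."""
    last_lines = Counter()
    for path in glob.glob(pattern):
        with open(path) as stderr_file:
            lines = [l.strip() for l in stderr_file if l.strip()]
        if lines:
            last_lines[lines[-1]] += 1
    return last_lines.most_common(top)

for line, count in top_stderr_last_lines("app-logs/*/stderr"):
    print("%6d  %s" % (count, line))
```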
YARN
The Capacity Scheduler
■ We specified capacities and elasticity based on a
combination of
- “some” data
- intuition
- desire to shape future usage (!)
Our Initial Capacities
■ Basic information available on the Scheduler Web UI
■ Take screenshots!
- Otherwise, you will lose the history of what you saw :(
Overutilization And Underutilization
■ Capacity Scheduler exposes these metrics via JMX
■ Ganglia does NOT display the metrics related to
utilization of queues (by default)
Visualizing Utilization Of Queue
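For illustration, a hedged sketch that pulls those metrics straight from the ResourceManager's /jmx servlet; the host, port, bean query and attribute names are assumptions to check against your own /jmx output. In production, jmxtrans (next slide) does this job properly.

```python
import json

try:                                   # Python 3
    from urllib.request import urlopen
except ImportError:                    # Python 2
    from urllib2 import urlopen

# Assumed RM address and QueueMetrics bean query (verify against your /jmx page).
RM_JMX = ("http://resourcemanager.example.com:8088/jmx"
          "?qry=Hadoop:service=ResourceManager,name=QueueMetrics,*")

def queue_metrics():
    """Yield a few utilization numbers per queue bean exposed by the RM."""
    beans = json.loads(urlopen(RM_JMX).read().decode("utf-8"))["beans"]
    for bean in beans:
        # Attribute names such as AppsPending / AllocatedMB / PendingMB are assumed
        # from the QueueMetrics beans; check them in your /jmx output.
        yield (bean["name"], bean.get("AppsPending"),
               bean.get("AllocatedMB"), bean.get("PendingMB"))

for name, apps_pending, allocated_mb, pending_mb in queue_metrics():
    print("%s apps_pending=%s allocated_mb=%s pending_mb=%s"
          % (name, apps_pending, allocated_mb, pending_mb))
```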
■ It collects JMX metrics from Java processes
■ It can send metrics to multiple destinations
- Graphite, cacti/rrdtool, Ganglia
- tab-separated text file
- STDOUT
- and more
Jmxtrans
■ Our Production queue often borrows resources
- Usually from the Queue3 and Queue4 queues
Overutilization And Underutilization
The Best Time For The Downtime?
Three Crowns
Three Crowns = Sweden
BONUS
Some Cool Stuff
From The Community
■ Aggregates and visualizes Hadoop cluster
utilization across users
LinkedIn's White Elephant
■ Collects run-time statistics from MR jobs
- Stores them in HBase
■ Does not provide built-in visualization layer
- The picture below comes from Twitter's blog
Twitter's hRaven
That’s all!
■ Analyzing Hadoop is also a “business” problem
- Save money
- Iterate faster
- Avoid downtimes
Summary
Thank you!
■ To my awesome colleagues for great technical
review:
Piotr Krewski, Josh Baer, Rafal Wojdyla,
Anna Dackiewicz, Magnus Runesson, Gustav Landén,
Guido Urdaneta, Uldis Barbans
More Thanks
Questions?
Check out spotify.com/jobs or
@Spotifyjobs for more information
kawaa@spotify.com
Check out my blog: HakunaMapData.com
Want to join the band?
Backup
■ Tricky question!
■ Use production jobs that represent your workload
■ Use a metric that is independent from size of data
that you process
■ Optimize one setting at a time
Benchmarking