Top 5 Mistakes when writing Spark applications
Mark Grover | @mark_grover | Software Engineer
Ted Malaska | @TedMalaska | Principal Solutions Architect
tiny.cloudera.com/spark-mistakes
About the book
• @hadooparchbook
• hadooparchitecturebook.com
• github.com/hadooparchitecturebook
• slideshare.com/hadooparchbook
Mistakes people make
when using Spark
Mistakes we made when using Spark
Mistake # 1
# Executors, cores, memory !?!
• 6 Nodes
• 16 cores each
• 64 GB of RAM each
Decisions, decisions, decisions
• Number of executors (--num-executors)
• Cores for each executor (--executor-cores)
• Memory for each executor (--executor-memory)
• 6 nodes
• 16 cores each
• 64 GB of RAM
Spark Architecture recap
Answer #1 – Most granular
• Have smallest sized executors as possible
• 1 core each
• Total of 16 x 6 = 96 cores
• 96 executors
• 64/16 = 4 GB per executor (per node)
Why?
• Doesn't use the benefits of running multiple tasks in the same JVM
Answer #2 – Least granular
• 6 executors
• 64 GB memory each
• 16 cores each
Why?
• Need to leave some memory overhead for
OS/Hadoop daemons
Answer #3 – with overhead
• 6 executors
• 63 GB memory each
• 15 cores each
Spark on YARN – Memory usage
• --executor-memory controls the heap size
• Need some overhead (controlled by spark.yarn.executor.memoryOverhead) for off-heap memory
• Default is max(384 MB, 0.07 * spark.executor.memory)
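A small sketch of how that default overhead works out for a 19 GB executor heap (the numbers only illustrate the formula above; they are not new guidance):
// Sketch only: how the default off-heap overhead relates to the heap setting
val executorMemoryMb = 19 * 1024                                // e.g. --executor-memory 19g
val overheadMb = math.max(384, (0.07 * executorMemoryMb).toInt) // ~1362 MB here
// To override it explicitly (Spark 1.x property name):
//   --conf spark.yarn.executor.memoryOverhead=2048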
YARN AM needs a core: Client mode
YARN AM needs a core: Cluster mode
HDFS Throughput
• 15 cores per executor can lead to bad HDFS I/O throughput
• Best to keep it to 5 or fewer cores per executor
Calculations
• 5 cores per executor
  – For max HDFS throughput
• Cluster has 6 x 15 = 90 cores in total (after taking out cores for Hadoop/YARN daemons)
• 90 cores / 5 cores per executor = 18 executors
• 1 executor reserved for the YARN AM => 17 executors
• Each node has 3 executors
• 63 GB / 3 = 21 GB per executor; 21 x (1 - 0.07) ≈ 19 GB heap (leaving room for the off-heap overhead)
Correct answer
• 17 executors
• 19 GB memory each
• 5 cores each
* Not etched in stone
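For illustration only, the same configuration expressed as Spark properties rather than spark-submit flags (the app name is hypothetical):
// Sketch: the "correct answer" above as programmatic configuration
val conf = new org.apache.spark.SparkConf()
  .setAppName("tuned-app")                   // hypothetical app name
  .set("spark.executor.instances", "17")     // --num-executors 17
  .set("spark.executor.cores", "5")          // --executor-cores 5
  .set("spark.executor.memory", "19g")       // --executor-memory 19g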
Read more
• From a great blog post on this topic by Sandy Ryza:
  http://blog.cloudera.com/blog/2015/03/how-to-tune-your-apache-spark-jobs-part-2/
Mistake # 2
Application failure
15/04/16 14:13:03 WARN scheduler.TaskSetManager: Lost task 19.0 in stage 6.0 (TID 120, 10.215.149.47): java.lang.IllegalArgumentException: Size exceeds Integer.MAX_VALUE
  at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:828)
  at org.apache.spark.storage.DiskStore.getBytes(DiskStore.scala:123)
  at org.apache.spark.storage.DiskStore.getBytes(DiskStore.scala:132)
  at org.apache.spark.storage.BlockManager.doGetLocal(BlockManager.scala:517)
  at org.apache.spark.storage.BlockManager.getLocal(BlockManager.scala:432)
  at org.apache.spark.storage.BlockManager.get(BlockManager.scala:618)
  at org.apache.spark.CacheManager.putInBlockManager(CacheManager.scala:146)
  at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:70)
Why?
• No Spark shuffle block can be greater than
2 GB
Ok, what’s a shuffle block again?
• In MapReduce terminology: for each mapper-reducer pair, the file on local disk that the reducer reads
In other words
• Each arrow from a map task to a reduce task in the shuffle diagram (not reproduced here) represents a shuffle block
Wait! What!?! This is Big Data stuff, no?
• Yeah! Nope!
• Spark uses ByteBuffer as the abstraction for storing blocks
val buf = ByteBuffer.allocate(length.toInt)
• ByteBuffer is limited by Integer.MAX_VALUE (2 GB)!
Once again
• No Spark shuffle block can be greater than
2 GB
Spark SQL
• Especially problematic for Spark SQL
• Default number of partitions to use when doing shuffles is 200
  – This low number of partitions can lead to very large shuffle blocks
Umm, ok, so what can I do?
1. Increase the number of partitions
– Thereby, reducing the average partition size
2. Get rid of skew in your data
– More on that later
Umm, how exactly?
• In Spark SQL, increase the value of
spark.sql.shuffle.partitions
• In regular Spark applications, use rdd.repartition() or rdd.coalesce() (sketch below)
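A minimal sketch of both approaches (the partition counts and the sqlContext/rdd names are illustrative):
// Spark SQL (Spark 1.x-style SQLContext): raise the shuffle partition count
sqlContext.setConf("spark.sql.shuffle.partitions", "800")

// Core API: change the partition count of an RDD before a wide operation
val morePartitions  = rdd.repartition(800)   // full shuffle
val fewerPartitions = rdd.coalesce(200)      // avoids a full shuffle when shrinking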
But, how many partitions should I have?
• Rule of thumb is around 128 MB per partition
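A back-of-envelope sketch of that rule of thumb (the data size is made up):
// Sketch: pick a partition count from total shuffle data size
val totalBytes      = 512L * 1024 * 1024 * 1024       // e.g. ~512 GB of data (illustrative)
val targetPartBytes = 128L * 1024 * 1024              // ~128 MB per partition
val numPartitions   = (totalBytes / targetPartBytes).toInt   // = 4096 here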
But!
• Spark uses a different data structure for shuffle bookkeeping when the number of partitions is less than 2000 vs. more than 2000
Don’t believe me?
• In MapStatus.scala
def apply(loc: BlockManagerId, uncompressedSizes: Array[Long]): MapStatus = {
  if (uncompressedSizes.length > 2000) {
    HighlyCompressedMapStatus(loc, uncompressedSizes)
  } else {
    new CompressedMapStatus(loc, uncompressedSizes)
  }
}
Ok, so what are you saying?
• If your number of partitions is less than
2000, but close enough to it, bump that
number up to be slightly higher than 2000.
Can you summarize, please?
• Don’t have too big partitions
– Your job will fail due to 2 GB limit
• Don’t have too few partitions
– Your job will be slow, not making use of parallelism
• Rule of thumb: ~128 MB per partition
• If #partitions < 2000, but close, bump to just > 2000
Mistake # 3
Slow jobs on Join/Shuffle
• Your dataset takes 20 seconds to run over with a map job, but takes 4 hours when joined or shuffled. What's wrong?
Skew and Cartesian
Mistake - Skew
• Diagram: one single-threaded task vs. the same work spread evenly across many single-threaded tasks ("normal distributed"): the holy grail of distributed systems
Mistake - Skew
• Diagram: with skew, one single-threaded task ends up with most of the data while the rest of the "normal distributed" tasks finish early. What about skew? Because that is a thing.
Mistake – Skew : Answers
• Salting
• Isolation Salting
• Isolation Map Joins
Mistake – Skew : Salting
• Normal Key: “Foo”
• Salted Key: “Foo” + random.nextInt(saltFactor)
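A minimal salting sketch in Scala (rdd and saltFactor are illustrative, not from the original deck):
import scala.util.Random

val saltFactor = 100
// rdd is a hypothetical RDD of (key, value) pairs; prepend a random salt to spread hot keys
val salted = rdd.map { case (key, value) =>
  (Random.nextInt(saltFactor) + "_" + key, value)
}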
Managing Parallelism
Mistake – Skew : Salting
• Two Stage Aggregation
– Stage one to do operations on the salted keys
– Stage two to do operations on the unsalted key results (flow and sketch below)
Data Source → Map (convert to salted key & value tuple) → Reduce by salted key → Map (convert results to key & value tuple) → Reduce by key → Results
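A hedged sketch of that two-stage flow for a per-key sum (rdd is a hypothetical RDD[(String, Long)]; the salt is stripped before the second reduce):
import scala.util.Random

val saltFactor = 100
val counts = rdd                                                       // hypothetical RDD[(String, Long)]
  .map { case (k, v) => (Random.nextInt(saltFactor) + "_" + k, v) }    // stage 1: salt the key
  .reduceByKey(_ + _)                                                  // reduce by salted key
  .map { case (saltedK, v) => (saltedK.split("_", 2)(1), v) }          // strip the salt
  .reduceByKey(_ + _)                                                  // stage 2: reduce by the real key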
Mistake – Skew : Isolated Salting
• Second stage only required for isolated keys
Data Source → Map (convert to key & value; isolate the skewed keys and convert those to salted key & value tuples) → Reduce by key & salted key → Filter isolated keys from salted keys → Map (convert results to key & value tuple) → Reduce by key → Union to results
Mistake – Skew : Isolated Map Join
• Filter out the isolated keys and use a map join/aggregate on those
• And a normal reduce on the rest of the data
• This can remove a large amount of data being shuffled (sketch after the flow below)
Data Source → Filter normal keys from isolated keys → (normal keys) Reduce by normal key, (isolated keys) Map join for isolated keys → Union to results
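A rough sketch of the isolated map join idea, assuming pair RDDs left and right, a SparkContext sc, and a small, known (or sampled) set of skewed keys; all names are illustrative:
// Hypothetical: skewedKeys is a small set of known-hot keys
val skewedKeys   = Set("hotKey1", "hotKey2")
val skewedKeysBc = sc.broadcast(skewedKeys)

val isolated = left.filter { case (k, _) => skewedKeysBc.value.contains(k) }
val normal   = left.filter { case (k, _) => !skewedKeysBc.value.contains(k) }

// Map-side join for the isolated keys, broadcasting only the (small) matching right side
val rightForSkewedBc = sc.broadcast(
  right.filter { case (k, _) => skewedKeysBc.value.contains(k) }.collectAsMap())
val joinedIsolated = isolated.flatMap { case (k, v) =>
  rightForSkewedBc.value.get(k).map(rv => (k, (v, rv)))
}

// Normal shuffle join for everything else, then union the results
val joinedNormal = normal.join(right)
val result = joinedNormal.union(joinedIsolated)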
Managing Parallelism
Cartesian Join
• Diagram: every map task writes a shuffle temp file for each reduce task; with a Cartesian join the amount of data shuffled and reduced blows up 10x, 100x, 1000x, 10000x, 100000x, 1000000x or more
Managing Parallelism
• To fight Cartesian joins:
  – Nested structures
  – Windowing (see the sketch below)
  – Skip steps
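For instance, a hedged sketch of the windowing idea using Spark SQL window functions: a hypothetical DataFrame events(key, ts, value) where the latest row per key is kept with a window instead of a self-join that can degenerate into a near-Cartesian shuffle.
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, row_number}

// Keep the latest row per key with a window, instead of joining the table
// against its own max(ts)-per-key aggregate
val w = Window.partitionBy("key").orderBy(col("ts").desc)
val latest = events
  .withColumn("rn", row_number().over(w))
  .filter(col("rn") === 1)
  .drop("rn")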
Mistake # 4
Out of luck?
• Do you ever run out of memory?
• Do you ever have more than 20 stages?
• Is your driver doing a lot of work?
Mistake – DAG Management
• Shuffles are to be avoided
• ReduceByKey over GroupByKey
• TreeReduce over Reduce
• Use Complex Types
Mistake – DAG Management: Shuffles
• Map-side reducing, if possible
• Think about partitioning/bucketing ahead of time
• Do as much as possible with a single shuffle
• Only send what you have to send
• Avoid skew and Cartesians
ReduceByKey over GroupByKey
• ReduceByKey can do almost anything that GroupByKey can do
  – Aggregations
  – Windowing
  – Use memory
• But you have more control
• ReduceByKey has a fixed limit on memory requirements
• GroupByKey is unbounded and depends on the data (sketch below)
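A classic sketch of the difference, assuming rdd is a hypothetical RDD[(String, Int)]:
// Prefer this: values are combined map-side before the shuffle, so memory use per key is bounded
val summed = rdd.reduceByKey(_ + _)

// Avoid this for aggregations: every value is shipped across the network and
// all values for a key are materialized in memory before summing
val summedTheHardWay = rdd.groupByKey().mapValues(_.sum)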
TreeReduce over Reduce
• TreeReduce and Reduce both return a result to the driver
• TreeReduce does more work on the executors
• Whereas Reduce brings everything back to the driver
• Diagram: with Reduce, all partitions send their partial results straight to the driver (100% of the merge happens on the driver); with TreeReduce, partials are combined on the executors first, so the driver merges only a few results (e.g. 4 results at 25% each)
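A minimal sketch, assuming nums is a hypothetical RDD[Long]:
// reduce: every partition's partial result is merged on the driver
val total1 = nums.reduce(_ + _)

// treeReduce: partial results are combined in `depth` rounds on the executors first,
// which keeps the driver from becoming the bottleneck when there are many partitions
val total2 = nums.treeReduce(_ + _, depth = 2)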
Complex Types
• Top N List
• Multiple types of Aggregations
• Windowing operations
• All in one pass
Complex Types
• Think outside of the box: use objects to reduce by
• (Make something simple); see the sketch below
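A hedged sketch of the idea: a top-3 list per key computed in a single reduceByKey pass, where the "complex type" is just a sorted list used as the aggregation value (rdd, the key type, and topN are illustrative):
// rdd is a hypothetical RDD[(String, Int)]: (key, score)
val topN = 3
val top3PerKey = rdd
  .mapValues(v => List(v))
  .reduceByKey((a, b) => (a ++ b).sorted(Ordering[Int].reverse).take(topN))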
Mistake # 5
Ever seen this?
Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.hash.HashFunction.hashInt(I)Lcom/google/common/hash/HashCode;
  at org.apache.spark.util.collection.OpenHashSet.org$apache$spark$util$collection$OpenHashSet$$hashcode(OpenHashSet.scala:261)
  at org.apache.spark.util.collection.OpenHashSet$mcI$sp.getPos$mcI$sp(OpenHashSet.scala:165)
  at org.apache.spark.util.collection.OpenHashSet$mcI$sp.contains$mcI$sp(OpenHashSet.scala:102)
  at org.apache.spark.util.SizeEstimator$$anonfun$visitArray$2.apply$mcVI$sp(SizeEstimator.scala:214)
  at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
  at org.apache.spark.util.SizeEstimator$.visitArray(SizeEstimator.scala:210)
  at …
But!
• I already included Guava in my app's Maven dependencies?
Ah!
• My Guava version doesn't match Spark's Guava version!
Shading
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.2</version>
  ...
  <relocations>
    <relocation>
      <pattern>com.google.protobuf</pattern>
      <shadedPattern>com.company.my.protobuf</shadedPattern>
    </relocation>
  </relocations>
• (For the Guava clash above, relocate com.google.common the same way)
Summary
5 Mistakes
• Size up your executors right
• 2 GB limit on Spark shuffle blocks
• Evil thing about skew and cartesians
• Learn to manage your DAG, yo!
• Do shady stuff, don't let classpath leaks mess you up
THANK YOU.
tiny.cloudera.com/spark-mistakes
Mark Grover | @mark_grover
Ted Malaska | @TedMalaska