Top 5 Mistakes when writing Spark applications
Mark Grover | @mark_grover | Software Engineer
Ted Malaska | @TedMalaska | Principal Solutions Architect
tiny.cloudera.com/spark-mistakes
About the book
•  @hadooparchbook
•  hadooparchitecturebook.com
•  github.com/hadooparchitecturebook
•  slideshare.com/hadooparchbook
Mistakes people make when using Spark
(read: mistakes we’ve made when using Spark)
Mistake # 1
# Executors, cores, memory !?!
•  6 Nodes
•  16 cores each
•  64 GB of RAM each
Decisions, decisions, decisions
•  Number of executors (--num-executors)
•  Cores for each executor (--executor-cores)
•  Memory for each executor (--executor-memory)
•  6 nodes
•  16 cores each
•  64 GB of RAM
Spark Architecture recap
Answer #1 – Most granular
•  Have the smallest-sized executors possible
•  1 core each
•  64 GB/node ÷ 16 executors/node = 4 GB/executor
•  Total of 16 cores × 6 nodes = 96 cores => 96 executors
[Diagram: one worker node running many small executors (Executor 1 through Executor 6 shown)]
Why?
•  Doesn’t get the benefits of running multiple tasks in the same executor (e.g., sharing broadcast variables)
Answer #2 – Least granular
•  6 executors in total => 1 executor per node
•  64 GB memory each
•  16 cores each
[Diagram: one worker node running a single large executor]
Why?
•  Need to leave some memory overhead for OS/Hadoop daemons
Answer #3 – with overhead
•  6 executors – 1 executor/node
•  63 GB memory each
•  15 cores each
[Diagram: worker node with one executor; 1 GB and 1 core left as overhead for the OS/Hadoop daemons]
Let’s assume…
•  You are running Spark on YARN, from here on…
3 things
•  3 other things to keep in mind
#1 – Memory overhead
•  --executor-memory controls the heap size
•  Need some overhead (controlled by spark.yarn.executor.memoryOverhead) for off-heap memory
•  Default is max(384 MB, 0.07 × spark.executor.memory)
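As a concrete sketch, the overhead can be raised explicitly at submit time (the 8G heap and 2048 MB overhead here are illustrative values, not recommendations; the application jar and remaining flags are elided):

spark-submit \
  --executor-memory 8G \
  --conf spark.yarn.executor.memoryOverhead=2048 \
  ...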
#2 – YARN AM needs a core: Client mode
#2 – YARN AM needs a core: Cluster mode
#3 HDFS Throughput
•  15 cores per executor can lead to bad HDFS I/O throughput
•  Best to keep it to at most 5 cores per executor
Calculations
•  5 cores per executor
  –  For max HDFS throughput
•  Cluster has 6 × 15 = 90 cores in total (after taking out cores for Hadoop/YARN daemons)
•  90 cores / 5 cores/executor = 18 executors
•  Each node has 3 executors
•  63 GB / 3 = 21 GB per executor; 21 × (1 − 0.07) ≈ 19 GB after memory overhead
•  1 executor for the AM => 17 executors
[Diagram: worker node running 3 executors, plus overhead]
Correct answer
•  17 executors in total
•  19 GB memory/executor
•  5 cores/executor
* Not etched in stone
[Diagram: worker node running 3 executors, plus overhead]
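Put together as a submit command, the sizing above looks like this (a sketch; master, deploy mode, and the application jar are elided):

spark-submit \
  --num-executors 17 \
  --executor-cores 5 \
  --executor-memory 19G \
  ...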
Dynamic allocation helps with this, though, right?
•  Dynamic allocation allows Spark to dynamically scale the cluster resources allocated to your application based on the workload.
•  Works with Spark on YARN
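A minimal sketch of enabling it (dynamic allocation on YARN also requires the external shuffle service; the min/max bounds here are illustrative):

spark-submit \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=2 \
  --conf spark.dynamicAllocation.maxExecutors=17 \
  ...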
Decisions with Dynamic Allocation
•  Number of executors (--num-executors) – not needed; dynamic allocation decides this for you
•  Cores for each executor (--executor-cores)
•  Memory for each executor (--executor-memory)
•  6 nodes
•  16 cores each
•  64 GB of RAM
Read more
•  A great blog post on this topic by Sandy Ryza:
http://blog.cloudera.com/blog/2015/03/how-to-tune-your-apache-spark-jobs-part-2/
Mistake # 2
Application failure
15/04/16 14:13:03 WARN scheduler.TaskSetManager: Lost task 19.0 in stage 6.0 (TID 120, 10.215.149.47): java.lang.IllegalArgumentException: Size exceeds Integer.MAX_VALUE
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:828)
at org.apache.spark.storage.DiskStore.getBytes(DiskStore.scala:123)
at org.apache.spark.storage.DiskStore.getBytes(DiskStore.scala:132)
at org.apache.spark.storage.BlockManager.doGetLocal(BlockManager.scala:517)
at org.apache.spark.storage.BlockManager.getLocal(BlockManager.scala:432)
at org.apache.spark.storage.BlockManager.get(BlockManager.scala:618)
at org.apache.spark.CacheManager.putInBlockManager(CacheManager.scala:146)
at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:70)
Why?
•  No Spark shuffle block can be greater than 2 GB
Ok, what’s a shuffle block again?
•  In MapReduce terminology, a file written from one Mapper for a Reducer
•  The Reducer makes a local copy of this file (reducer local copy) and then ‘reduces’ it
Defining shuffle and partition
[Diagram: each yellow arrow represents a shuffle block, and each blue block is a partition]
Once again
•  Overflow exception if shuffle block size > 2 GB
What’s going on here?
•  Spark uses ByteBuffer as the abstraction for blocks
val buf = ByteBuffer.allocate(length.toInt)
•  ByteBuffer is limited by Integer.MAX_VALUE (2 GB)!
Spark SQL
•  Especially problematic for Spark SQL
•  Default number of partitions to use when doing shuffles is 200
  –  This low number of partitions leads to large shuffle blocks
Umm, ok, so what can I do?
1.  Increase the number of partitions
  –  Thereby reducing the average partition size
2.  Get rid of skew in your data
  –  More on that later
Umm, how exactly?
•  In Spark SQL, increase the value of spark.sql.shuffle.partitions
•  In regular Spark applications, use rdd.repartition() or rdd.coalesce() (the latter to reduce the number of partitions, if needed)
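For instance (a sketch using the Spark 1.x-era APIs this deck is based on; rdd is a hypothetical RDD and the partition counts are arbitrary):

sqlContext.setConf("spark.sql.shuffle.partitions", "400")  // Spark SQL shuffles
val morePartitions = rdd.repartition(400)  // full shuffle; can increase or decrease
val fewerPartitions = rdd.coalesce(50)     // avoids a full shuffle when only reducing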
But, how many partitions should I have?
•  Rule of thumb is around 128 MB per partition
But! There’s more!
•  Spark uses a different data structure for shuffle bookkeeping when the number of partitions is 2000 or less vs. more than 2000.
Don’t believe me?
•  In MapStatus.scala:
def apply(loc: BlockManagerId, uncompressedSizes: Array[Long]): MapStatus = {
  if (uncompressedSizes.length > 2000) {
    HighlyCompressedMapStatus(loc, uncompressedSizes)
  } else {
    new CompressedMapStatus(loc, uncompressedSizes)
  }
}
Ok, so what are you saying?
If the number of partitions is < 2000, but not by much, bump it to slightly more than 2000.
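In one line (2001 is arbitrary, just past the threshold; rdd is hypothetical):

val rebalanced = rdd.repartition(2001)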
Can you summarize, please?
•  Don’t have too-big partitions
  –  Your job will fail due to the 2 GB limit
•  Don’t have too few partitions
  –  Your job will be slow, not making use of parallelism
•  Rule of thumb: ~128 MB per partition
•  If #partitions < 2000, but close, bump to just > 2000
•  Track SPARK-6235 for removing the various 2 GB limits
Mistake # 3
Slow jobs on Join/Shuffle
•  Your dataset takes 20 seconds to run over with a map job, but takes 4 hours when joined or shuffled. What’s wrong?
Mistake - Skew
[Diagram: one long-running single thread ("Normal") next to the same work split evenly across many threads ("Distributed") that all finish together – the Holy Grail of distributed systems]
Mistake - Skew
[Diagram: the same comparison, but one distributed thread runs nearly as long as the single-threaded version. What about skew? Because that is a thing.]
Mistake – Skew : Answers
•  Salting
•  Isolated Salting
•  Isolated Map Joins
Mistake – Skew : Salting
•  Normal Key: “Foo”
•  Salted Key: “Foo” + random.nextInt(saltFactor)
Managing Parallelism
Mistake – Skew : Salting
•  Two-stage aggregation
  –  Stage one does the operations on the salted keys
  –  Stage two does the operations on the unsalted key results
[Pipeline: Data Source → Map (convert to salted key & value tuple) → Reduce by salted key → Map (convert results to key & value tuple) → Reduce by key → Results]
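A minimal sketch of this two-stage salted aggregation (a sum per key; the input data and saltFactor are hypothetical):

import scala.util.Random

// hypothetical skewed input; in practice "Foo" dominates the dataset
val rdd = sc.parallelize(Seq(("Foo", 1L), ("Foo", 1L), ("Bar", 1L)))
val saltFactor = 100
// Stage one: spread each key across up to saltFactor reducers
val stage1 = rdd
  .map { case (key, value) => ((key, Random.nextInt(saltFactor)), value) }
  .reduceByKey(_ + _)
// Stage two: drop the salt and combine the partial sums per real key
val result = stage1
  .map { case ((key, _), partial) => (key, partial) }
  .reduceByKey(_ + _)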
Mistake – Skew : Isolated Salting
•  Second stage only required for the isolated keys
[Pipeline: Data Source → Map (convert to key & value; isolate the skewed keys and convert those to salted key & value tuples) → Reduce by key & salted key → Filter the isolated keys from the salted keys → Map (convert results to key & value tuple) → Reduce by key → Union to Results]
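A sketch of the isolated variant (assumes the skewed keys are known up front; hotKeys, saltFactor, and the input data are hypothetical):

import scala.util.Random

val hotKeys = Set("Foo")  // skewed keys identified ahead of time
val saltFactor = 100
val rdd = sc.parallelize(Seq(("Foo", 1L), ("Foo", 1L), ("Bar", 1L)))
val stage1 = rdd
  .map { case (key, value) =>
    // only the isolated (hot) keys get salted; everything else keeps salt 0
    val salt = if (hotKeys(key)) Random.nextInt(saltFactor) else 0
    ((key, salt), value)
  }
  .reduceByKey(_ + _)
// the second reduce only does real work for the previously salted keys
val result = stage1
  .map { case ((key, _), partial) => (key, partial) }
  .reduceByKey(_ + _)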
Mistake – Skew : Isolated Map Join
•  Filter out the isolated keys and use a map join/aggregate on those
•  Run a normal reduce on the rest of the data
•  This can remove a large amount of data being shuffled
[Pipeline: Data Source → Filter the normal keys from the isolated keys → Reduce by normal key, Map join for the isolated keys → Union to Results]
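A sketch of the idea using a broadcast ("map") join for the hot keys (hotKeys, bigRdd, and smallRdd are hypothetical; assumes the small side fits in memory):

val hotKeys = Set("Foo")
val bigRdd = sc.parallelize(Seq(("Foo", 1), ("Foo", 2), ("Bar", 3)))
val smallRdd = sc.parallelize(Seq(("Foo", "x"), ("Bar", "y")))
// broadcast the small side so the hot keys join map-side, with no shuffle
val lookup = sc.broadcast(smallRdd.collectAsMap())
val hot = bigRdd.filter { case (k, _) => hotKeys(k) }
val normal = bigRdd.filter { case (k, _) => !hotKeys(k) }
val hotJoined = hot.flatMap { case (k, v) =>
  lookup.value.get(k).map(w => (k, (v, w)))
}
val normalJoined = normal.join(smallRdd)  // regular shuffle join for the rest
val result = normalJoined.union(hotJoined)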
Managing Parallelism
Cartesian Join
[Diagram: every map task’s shuffle temp files feed every reduce task; plotted as amount of data in Table X vs. Table Y, the shuffled data grows 10x, 100x, 1000x, 10000x, 100000x, 1000000x or more]
Managing Parallelism
•  How to fight the Cartesian join
  –  Nested structures
[Example: Table X holds (A,1), (A,2), (A,3) and Table Y holds (A,4), (A,5), (A,6); a JOIN yields the full cross product (A,1,4), (A,2,4), (A,3,4), (A,1,5), (A,2,5), (A,3,5), (A,1,6), (A,2,6), (A,3,6) – OR a nested structure keeps a single record for key A with all of its values nested inside]
Managing Parallelism
•  How to fight the Cartesian join
  –  Nested structures

create table nestedTable (
  col1 string,
  col2 string,
  col3 array<struct<
    col3_1: string,
    col3_2: string>>)

is equivalent to:

val rddNested = sc.parallelize(Array(
  Row("a1", "b1", Seq(Row("c1_1", "c2_1"),
                      Row("c1_2", "c2_2"),
                      Row("c1_3", "c2_3"))),
  Row("a2", "b2", Seq(Row("c1_2", "c2_2"),
                      Row("c1_3", "c2_3"),
                      Row("c1_4", "c2_4")))), 2)
Mistake # 4
Out of luck?
• Do you ever run out of memory?
• Do you ever have more than 20 stages?
• Is your driver doing a lot of work?
Mistake – DAG Management
• Shuffles are to be avoided
• ReduceByKey over GroupByKey
• TreeReduce over Reduce
• Use Complex/Nested Types
Mistake – DAG Management: Shuffles
•  Map Side reduction, where possible
•  Think about partitioning/bucketing ahead of time
•  Do as much as possible with a single shuffle
•  Only send what you have to send
•  Avoid Skew and Cartesians
ReduceByKey over GroupByKey
•  ReduceByKey can do almost anything that GroupByKey can do
  •  Aggregations
  •  Windowing
  •  Use memory
•  But you have more control
•  ReduceByKey has a fixed memory requirement
•  GroupByKey’s memory use is unbounded and dependent on the data
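A quick illustration of the difference, as a word count (the input data is hypothetical):

val words = sc.parallelize(Seq("foo", "bar", "foo"))
// reduceByKey combines on the map side, shuffling only partial sums
val counts = words.map(w => (w, 1L)).reduceByKey(_ + _)
// groupByKey ships every value across the network and buffers all
// values for a key in memory at once before summing
val countsTheHardWay = words.map(w => (w, 1L)).groupByKey().mapValues(_.sum)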
TreeReduce over Reduce
•  TreeReduce & Reduce both return a result to the driver
•  TreeReduce does more work on the executors
•  Reduce brings everything back to the driver
[Diagram: with Reduce, the driver merges 100% of the partition results itself; with TreeReduce, executors pre-merge so the driver only sees a few partial results (four at ~25% each in the diagram)]
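For example, summing an RDD of numbers (the input is hypothetical):

val numbers = sc.parallelize(1L to 1000000L)
val total = numbers.treeReduce(_ + _, depth = 2)  // partials merged in a tree on the executors
val sameTotal = numbers.reduce(_ + _)             // every partition result sent straight to the driver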
Complex Types
• Top N List
• Multiple types of Aggregations
• Windowing operations
• All in one pass
Complex Types
•  Think outside the box: use objects to reduce by
•  (Make something simple)
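For instance, a top-N list per key in a single pass, using a bounded list as the object we reduce by (a sketch; the pair data and n are hypothetical):

val pairs = sc.parallelize(Seq(("a", 3), ("a", 9), ("a", 1), ("a", 7), ("b", 2)))
val n = 3
val topN = pairs.aggregateByKey(List.empty[Int])(
  // fold each value into a bounded, descending list within a partition
  (acc, v) => (v :: acc).sorted(Ordering[Int].reverse).take(n),
  // merge per-partition lists, again keeping only the top n
  (a, b) => (a ++ b).sorted(Ordering[Int].reverse).take(n))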
Mistake # 5
Ever seen this?
Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.hash.HashFunction.hashInt(I)Lcom/google/common/hash/HashCode;
at org.apache.spark.util.collection.OpenHashSet.org$apache$spark$util$collection$OpenHashSet$$hashcode(OpenHashSet.scala:261)
at org.apache.spark.util.collection.OpenHashSet$mcI$sp.getPos$mcI$sp(OpenHashSet.scala:165)
at org.apache.spark.util.collection.OpenHashSet$mcI$sp.contains$mcI$sp(OpenHashSet.scala:102)
at org.apache.spark.util.SizeEstimator$$anonfun$visitArray$2.apply$mcVI$sp(SizeEstimator.scala:214)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at org.apache.spark.util.SizeEstimator$.visitArray(SizeEstimator.scala:210)
at…....
But!
• I already included protobuf in my app’s maven dependencies?
Ah!
• My protobuf version doesn’t match with Spark’s protobuf version!
Shading
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.2</version>
  ...
  <relocations>
    <relocation>
      <pattern>com.google.protobuf</pattern>
      <shadedPattern>com.company.my.protobuf</shadedPattern>
    </relocation>
  </relocations>
</plugin>
Future of shading
• Spark 2.0 has some libraries shaded
• Guava is fully shaded
Summary
5 Mistakes
• Size up your executors right
• 2 GB limit on Spark shuffle blocks
• The evils of skew and Cartesians
• Learn to manage your DAG, yo!
• Do shady stuff; don’t let classpath leaks mess you up
THANK YOU.
tiny.cloudera.com/spark-mistakes
Mark Grover | @mark_grover
Ted Malaska | @TedMalaska