www.twosigma.com
Improving Python and Spark Performance and Interoperability
Wes McKinney @wesmckinn
Spark Summit East 2017
February 9, 2017
Me
•  Currently: Software Architect at Two Sigma Investments
•  Creator of Python pandas project
•  PMC member for Apache Arrow and Apache Parquet
•  Other Python projects: Ibis, Feather, statsmodels
•  Formerly: Cloudera, DataPad, AQR
•  Author of Python for Data Analysis
Important Legal Information
The information presented here is offered for informational purposes only and should not be
used for any other purpose (including, without limitation, the making of investment decisions).
Examples provided herein are for illustrative purposes only and are not necessarily based on
actual data. Nothing herein constitutes: an offer to sell or the solicitation of any offer to buy any
security or other interest; tax advice; or investment advice. This presentation shall remain the
property of Two Sigma Investments, LP (“Two Sigma”) and Two Sigma reserves the right to
require the return of this presentation at any time.
Some of the images, logos or other material used herein may be protected by copyright and/or
trademark. If so, such copyrights and/or trademarks are most likely owned by the entity that
created the material and are used purely for identification and comment as fair use under
international copyright and/or trademark laws. Use of such image, copyright or trademark does
not imply any association with such organization (or endorsement of such organization) by Two
Sigma, nor vice versa.
Copyright © 2017 TWO SIGMA INVESTMENTS, LP. All rights reserved
This talk
•  Why some parts of PySpark are “slow”
•  Technology that can help make things faster
•  Work we have done to make improvements
•  Future roadmap
Python and Spark
•  Spark is implemented in Scala, runs on the Java virtual machine (JVM)
•  Spark has Python and R APIs that cover much, but not all, of the Scala Spark API
•  In some Spark tasks, Python is only a scripting front-end.
•  This means no interpreted Python code is executed once the Spark
job starts
•  Other PySpark jobs suffer performance and interoperability issues that we’re
going to analyze in this talk
Spark DataFrame performance
Source: https://databricks.com/blog/2015/02/17/introducing-dataframes-in-spark-for-large-scale-data-science.html
Spark DataFrame performance can be misleading
•  Spark DataFrames are an example of Python as a DSL / scripting front end
•  Excepting UDFs (.map(…) or sqlContext.registerFunction), no Python code is
evaluated in the Spark job
•  Python API calls create SQL query plans inside the JVM — so Scala and
Python versions are computationally identical
Spark DataFrames as deferred DSL
young = users[users.age < 21]
young.groupBy("gender").count()
Spark DataFrames as deferred DSL
SELECT gender, COUNT(*)
FROM users
WHERE age < 21
GROUP BY 1
Spark DataFrames as deferred DSL
Aggregation[table]
  table:
    Table: users
  metrics:
    count = Count[int64]
      Table: ref_0
  by:
    gender = Column[array(string)] 'gender' from users
  predicates:
    Less[array(boolean)]
      age = Column[array(int32)] 'age' from users
      Literal[int8]
        21
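The deferred-DSL mechanics above can be sketched in a few lines of plain Python. This is a toy illustration, not Spark's or Ibis's actual internals: `Column`, `Table`, and `group_count` are hypothetical stand-ins for the real API. Comparisons build an expression tree, and SQL is generated only when the tree is compiled.

```python
# Toy deferred-expression DSL: operators record a tree; nothing executes
# until group_count() compiles the tree into SQL.
class Column:
    def __init__(self, name):
        self.name = name

    def __lt__(self, value):
        # Record the comparison instead of evaluating it
        return ("lt", self.name, value)


class Table:
    def __init__(self, name, columns, predicates=()):
        self.name, self.columns, self.predicates = name, columns, tuple(predicates)
        for c in columns:
            setattr(self, c, Column(c))

    def __getitem__(self, predicate):
        # Filtering returns a new deferred table with one more predicate
        return Table(self.name, self.columns, self.predicates + (predicate,))

    def group_count(self, key):
        # Compile the accumulated tree to SQL (only "<" is handled here)
        where = " AND ".join(f"{col} < {val}" for _, col, val in self.predicates)
        return (f"SELECT {key}, COUNT(*) FROM {self.name}"
                + (f" WHERE {where}" if where else "")
                + " GROUP BY 1")


users = Table("users", ["age", "gender"])
young = users[users.age < 21]
print(young.group_count("gender"))
# -> SELECT gender, COUNT(*) FROM users WHERE age < 21 GROUP BY 1
```

The compiled string matches the SQL on the previous slide, which is the point: the Python code never touches data, it only describes the query.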
Where Python code and Spark meet
•  Unfortunately, many PySpark jobs cannot be expressed entirely as
DataFrame operations or other built-in Scala constructs
•  Spark (Scala) interacts with in-memory Python data in two key ways:
•  Reading and writing in-memory datasets to/from the Spark driver
•  Evaluating custom Python code (user-defined functions)
How PySpark lambda functions work
•  The anatomy of:

rdd.map(lambda x: ...)
df.withColumn(py_func(...))

[Diagram: the Scala RDD dispatches serialized data to a pool of Python worker processes — see PythonRDD.scala]
PySpark lambda performance problems
•  See 2016 talk “High Performance Python on Apache Spark”
•  http://www.slideshare.net/wesm/high-performance-python-on-apache-spark
•  Problems
•  Inefficient data movement (serialization / deserialization)
•  Scalar computation model: object boxing and interpreter overhead
•  General summary: PySpark is not currently designed to achieve high
performance in the way that pandas and NumPy are.
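The serialization cost is easy to reproduce outside Spark. The stdlib-only sketch below is not PySpark's actual serializer; it just contrasts per-row pickling of boxed tuples with copying the same values into contiguous columnar buffers, which is the distinction the bullet points above describe.

```python
import pickle
import time
from array import array

n = 100_000
rows = [(float(i), float(i) * 2.0) for i in range(n)]

# Row-at-a-time: serialize each tuple separately, as a boxed-object
# pipeline does -- one small pickle per row
t0 = time.perf_counter()
per_row = [pickle.dumps(r) for r in rows]
t_rows = time.perf_counter() - t0

# Columnar: copy the same values into two contiguous float64 buffers
t0 = time.perf_counter()
col_a = array("d", (r[0] for r in rows)).tobytes()
col_b = array("d", (r[1] for r in rows)).tobytes()
t_cols = time.perf_counter() - t0

per_row_bytes = sum(map(len, per_row)) / n
col_bytes = (len(col_a) + len(col_b)) / n
print(f"per-row pickle: {t_rows:.3f}s ({per_row_bytes:.0f} B/row), "
      f"columnar: {t_cols:.3f}s ({col_bytes:.0f} B/row)")
```

The columnar buffers hold exactly 16 bytes per row (two float64 values); the per-row pickles carry framing overhead on every row, and producing them exercises the interpreter once per object.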
Other issues with PySpark lambdas
•  Computation model unlike what pandas users are used to
•  In dataframe.map(f), the Python function f only sees one Row at a time
•  A more natural and efficient vectorized API would be:
•  dataframe.map_pandas(lambda df: ...)
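map_pandas did not exist at the time of the talk; the sketch below is a pure-Python illustration of the proposed evaluation model. `map_rows` and `map_batches` are hypothetical names, and a dict of columns stands in for a pandas DataFrame.

```python
# Two evaluation models for user-defined functions.
def map_rows(rows, f):
    # dataframe.map(f): f is invoked once per Row
    return [f(row) for row in rows]

def map_batches(columns, f):
    # dataframe.map_pandas(f): f is invoked once per batch of whole columns,
    # so it can use vectorized operations internally
    return f(columns)


rows = [{"age": 15}, {"age": 30}, {"age": 12}]
print(map_rows(rows, lambda r: r["age"] < 21))
# -> [True, False, True]

columns = {"age": [15, 30, 12]}
print(map_batches(columns, lambda c: [a < 21 for a in c["age"]]))
# -> [True, False, True]
```

Both produce the same answer; the difference is that the batch form crosses the Python function-call boundary once per chunk rather than once per row.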
Apache Arrow
Apache Arrow: Process and Move Data Fast
•  New Top-level Apache project as of February 2016
•  Collaboration amongst a broad set of OSS projects around shared needs
•  Language-independent columnar data structures
•  Metadata for describing schemas / chunks of data
•  Protocol for moving data between processes with minimal serialization
overhead
High performance data interchange
[Diagram: data interchange today — each pair of systems needs its own converter — versus with Arrow, where every system shares one columnar format. Source: Apache Arrow]
What does Apache Arrow give you?
•  Zero-copy columnar data: Complex table and array data structures that can
reference memory without copying it
•  Ultrafast messaging: Language-agnostic metadata, batch/file-based and
streaming binary formats
•  Complex schema support: Flat and nested data types
•  C++, Python, and Java Implementations: with integration tests
Arrow binary wire formats
Extreme performance to pandas from Arrow streams
PyArrow file and streaming API
from pyarrow import StreamReader

reader = StreamReader(stream)

# pyarrow.Table
table = reader.read_all()

# Convert to pandas
df = table.to_pandas()
For illustration purposes only. Not an offer to buy or sell securities. Two Sigma may modify its investment approach and portfolio parameters in the future in any manner that it believes is consistent with its fiduciary duty to its clients. There is no
guarantee that Two Sigma or its products will be successful in achieving any or all of their investment objectives. Moreover, all investments involve some degree of risk, not all of which will be successfully mitigated. Please see the last page of
this presentation for important disclosure information.
Making DataFrame.toPandas faster
•  Background
•  Spark’s toPandas transfers in-memory data from the Spark driver to Python and converts it to a pandas.DataFrame. It is very slow
•  Joint work with Bryan Cutler (IBM), Li Jin (Two Sigma), and Yin Xusen (IBM). See SPARK-13534 on JIRA
•  Test case: transfer a 128 MB Parquet file with 8 DOUBLE columns
conda install pyarrow -c conda-forge
Making DataFrame.toPandas faster
df = sqlContext.read.parquet('example2.parquet')
df = df.cache()
df.count()

Then

%%prun -s cumulative
dfs = [df.toPandas() for i in range(5)]
Making DataFrame.toPandas faster
 94483943 function calls (94478223 primitive calls) in 62.492 seconds

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        5    1.458    0.292   62.492   12.498 dataframe.py:1570(toPandas)
        5    0.661    0.132   54.759   10.952 dataframe.py:382(collect)
 10485765    0.669    0.000   46.823    0.000 rdd.py:121(_load_from_socket)
      715    0.002    0.000   46.139    0.065 serializers.py:141(load_stream)
      710    0.002    0.000   45.950    0.065 serializers.py:448(loads)
 10485760    4.969    0.000   32.853    0.000 types.py:595(fromInternal)
     1391    0.004    0.000    7.445    0.005 socket.py:562(readinto)
       18    0.000    0.000    7.283    0.405 java_gateway.py:1006(send_command)
        5    0.000    0.000    6.262    1.252 frame.py:943(from_records)
Making DataFrame.toPandas faster
Now, using pyarrow:

%%prun -s cumulative
dfs = [df.toPandas(useArrow=True) for i in range(5)]
Making DataFrame.toPandas faster
 38585 function calls (38535 primitive calls) in 9.448 seconds

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        5    0.001    0.000    9.448    1.890 dataframe.py:1570(toPandas)
        5    0.000    0.000    9.358    1.872 dataframe.py:394(collectAsArrow)
     6271    9.330    0.001    9.330    0.001 {method 'recv_into' of '_socket.socket'}
       15    0.000    0.000    9.229    0.615 java_gateway.py:860(send_command)
       10    0.000    0.000    0.123    0.012 serializers.py:141(load_stream)
        5    0.085    0.017    0.089    0.018 {method 'to_pandas' of 'pyarrow.Table'}
pip	install	memory_profiler	
%%memit -i 0.0001
pdf = None
pdf = df.toPandas()
gc.collect()

peak memory: 1223.16 MiB,
increment: 1018.20 MiB
Plot thickens: memory use
%%memit -i 0.0001
pdf = None
pdf = df.toPandas(useArrow=True)
gc.collect()

peak memory: 334.08 MiB,
increment: 258.31 MiB
Summary of results
•  Current version: average 12.5 s (10.2 MB/s)
•  Deserialization accounts for 88% of the time; the rest is waiting for Spark to send the data
•  Peak memory use 8x (~1GB) the size of the dataset
•  Arrow version
•  Average wall clock time of 1.89s (6.61x faster, 67.7 MB/s)
•  Deserialization accounts for 1% of total time
•  Peak memory use 2x the size of the dataset (1 memory doubling)
•  Time for Spark to send data is 25% higher (1866 ms vs. 1488 ms)
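The throughput and speedup figures above follow directly from the 128 MB test size and the two average wall-clock times; a quick sanity check:

```python
size_mb = 128.0            # Parquet test dataset
t_old, t_new = 12.5, 1.89  # average seconds per toPandas call, before / after

speedup = t_old / t_new
mbps_old = size_mb / t_old
mbps_new = size_mb / t_new
print(round(speedup, 2), round(mbps_old, 1), round(mbps_new, 1))
# -> 6.61 10.2 67.7
```

These reproduce the 6.61x speedup and the 10.2 MB/s and 67.7 MB/s throughput numbers on the slide.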
Aside: reading Parquet directly in Python
import pyarrow.parquet as pq

%%timeit
df = pq.read_table('example2.parquet').to_pandas()
10 loops, best of 3: 175 ms per loop
Digging deeper
•  Why does it take Spark ~1.8 seconds to send 128MB of data over the wire?
// executeCollect() returns Array[InternalRow]
val collectedRows = queryExecution.executedPlan.executeCollect()
cnvtr.internalRowsToPayload(collectedRows, this.schema)
Digging deeper
•  In our 128MB test case, on average:
•  75% of the time is spent collecting Array[InternalRow] from the task executors
•  25% of the time is spent on a single-threaded conversion of all the data from Array[InternalRow] to ArrowRecordBatch
•  We can go much faster by performing the Spark SQL -> Arrow
conversion locally on the task executors, then streaming the batches to
Python
Future architecture
[Diagram: each task executor converts its partition to an Arrow RecordBatch; the Spark driver streams the Arrow schema followed by the record batches to Python]
Hot off the presses
   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        5    0.000    0.000    5.928    1.186 dataframe.py:1570(toPandas)
        5    0.000    0.000    5.838    1.168 dataframe.py:394(collectAsArrow)
     5919    0.005    0.000    5.824    0.001 socket.py:561(readinto)
     5919    5.809    0.001    5.809    0.001 {method 'recv_into' of '_socket.socket'}
...
        5    0.086    0.017    0.091    0.018 {method 'to_pandas' of 'pyarrow.Table'}
Patch from February 8: 38% perf improvement
The work ahead
•  Luckily, speeding up toPandas and speeding up lambda / UDF evaluation are architecturally the same type of problem
•  Reasonably clear path to making toPandas even faster
•  How can you get involved?
•  Keep an eye on Spark ASF JIRA
•  Contribute to Apache Arrow (Java, C++, Python, other languages)
•  Join the Arrow and Spark mailing lists
Thank you
•  Bryan Cutler, Li Jin, and Yin Xusen, for building the Spark-Arrow integration
•  Apache Arrow community
•  Spark Summit organizers
•  Two Sigma and IBM, for supporting this work

Spark Summit
 
A Tale of Two Graph Frameworks on Spark: GraphFrames and Tinkerpop OLAP Artem...
A Tale of Two Graph Frameworks on Spark: GraphFrames and Tinkerpop OLAP Artem...A Tale of Two Graph Frameworks on Spark: GraphFrames and Tinkerpop OLAP Artem...
A Tale of Two Graph Frameworks on Spark: GraphFrames and Tinkerpop OLAP Artem...
Spark Summit
 
No More Cumbersomeness: Automatic Predictive Modeling on Apache Spark Marcin ...
No More Cumbersomeness: Automatic Predictive Modeling on Apache Spark Marcin ...No More Cumbersomeness: Automatic Predictive Modeling on Apache Spark Marcin ...
No More Cumbersomeness: Automatic Predictive Modeling on Apache Spark Marcin ...
Spark Summit
 
Apache Spark and Tensorflow as a Service with Jim Dowling
Apache Spark and Tensorflow as a Service with Jim DowlingApache Spark and Tensorflow as a Service with Jim Dowling
Apache Spark and Tensorflow as a Service with Jim Dowling
Spark Summit
 
Apache Spark and Tensorflow as a Service with Jim Dowling
Apache Spark and Tensorflow as a Service with Jim DowlingApache Spark and Tensorflow as a Service with Jim Dowling
Apache Spark and Tensorflow as a Service with Jim Dowling
Spark Summit
 
MMLSpark: Lessons from Building a SparkML-Compatible Machine Learning Library...
MMLSpark: Lessons from Building a SparkML-Compatible Machine Learning Library...MMLSpark: Lessons from Building a SparkML-Compatible Machine Learning Library...
MMLSpark: Lessons from Building a SparkML-Compatible Machine Learning Library...
Spark Summit
 
Next CERN Accelerator Logging Service with Jakub Wozniak
Next CERN Accelerator Logging Service with Jakub WozniakNext CERN Accelerator Logging Service with Jakub Wozniak
Next CERN Accelerator Logging Service with Jakub Wozniak
Spark Summit
 
Powering a Startup with Apache Spark with Kevin Kim
Powering a Startup with Apache Spark with Kevin KimPowering a Startup with Apache Spark with Kevin Kim
Powering a Startup with Apache Spark with Kevin Kim
Spark Summit
 
Improving Traffic Prediction Using Weather Datawith Ramya Raghavendra
Improving Traffic Prediction Using Weather Datawith Ramya RaghavendraImproving Traffic Prediction Using Weather Datawith Ramya Raghavendra
Improving Traffic Prediction Using Weather Datawith Ramya Raghavendra
Spark Summit
 
Hiding Apache Spark Complexity for Fast Prototyping of Big Data Applications—...
Hiding Apache Spark Complexity for Fast Prototyping of Big Data Applications—...Hiding Apache Spark Complexity for Fast Prototyping of Big Data Applications—...
Hiding Apache Spark Complexity for Fast Prototyping of Big Data Applications—...
Spark Summit
 
How Nielsen Utilized Databricks for Large-Scale Research and Development with...
How Nielsen Utilized Databricks for Large-Scale Research and Development with...How Nielsen Utilized Databricks for Large-Scale Research and Development with...
How Nielsen Utilized Databricks for Large-Scale Research and Development with...
Spark Summit
 
Spline: Apache Spark Lineage not Only for the Banking Industry with Marek Nov...
Spline: Apache Spark Lineage not Only for the Banking Industry with Marek Nov...Spline: Apache Spark Lineage not Only for the Banking Industry with Marek Nov...
Spline: Apache Spark Lineage not Only for the Banking Industry with Marek Nov...
Spark Summit
 
Goal Based Data Production with Sim Simeonov
Goal Based Data Production with Sim SimeonovGoal Based Data Production with Sim Simeonov
Goal Based Data Production with Sim Simeonov
Spark Summit
 
Preventing Revenue Leakage and Monitoring Distributed Systems with Machine Le...
Preventing Revenue Leakage and Monitoring Distributed Systems with Machine Le...Preventing Revenue Leakage and Monitoring Distributed Systems with Machine Le...
Preventing Revenue Leakage and Monitoring Distributed Systems with Machine Le...
Spark Summit
 
Getting Ready to Use Redis with Apache Spark with Dvir Volk
Getting Ready to Use Redis with Apache Spark with Dvir VolkGetting Ready to Use Redis with Apache Spark with Dvir Volk
Getting Ready to Use Redis with Apache Spark with Dvir Volk
Spark Summit
 
Deduplication and Author-Disambiguation of Streaming Records via Supervised M...
Deduplication and Author-Disambiguation of Streaming Records via Supervised M...Deduplication and Author-Disambiguation of Streaming Records via Supervised M...
Deduplication and Author-Disambiguation of Streaming Records via Supervised M...
Spark Summit
 
MatFast: In-Memory Distributed Matrix Computation Processing and Optimization...
MatFast: In-Memory Distributed Matrix Computation Processing and Optimization...MatFast: In-Memory Distributed Matrix Computation Processing and Optimization...
MatFast: In-Memory Distributed Matrix Computation Processing and Optimization...
Spark Summit
 
FPGA-Based Acceleration Architecture for Spark SQL Qi Xie and Quanfu Wang
FPGA-Based Acceleration Architecture for Spark SQL Qi Xie and Quanfu Wang FPGA-Based Acceleration Architecture for Spark SQL Qi Xie and Quanfu Wang
FPGA-Based Acceleration Architecture for Spark SQL Qi Xie and Quanfu Wang
Spark Summit
 
VEGAS: The Missing Matplotlib for Scala/Apache Spark with DB Tsai and Roger M...
VEGAS: The Missing Matplotlib for Scala/Apache Spark with DB Tsai and Roger M...VEGAS: The Missing Matplotlib for Scala/Apache Spark with DB Tsai and Roger M...
VEGAS: The Missing Matplotlib for Scala/Apache Spark with DB Tsai and Roger M...
Spark Summit
 
Apache Spark Structured Streaming Helps Smart Manufacturing with Xiaochang Wu
Apache Spark Structured Streaming Helps Smart Manufacturing with  Xiaochang WuApache Spark Structured Streaming Helps Smart Manufacturing with  Xiaochang Wu
Apache Spark Structured Streaming Helps Smart Manufacturing with Xiaochang Wu
Spark Summit
 
Improving Traffic Prediction Using Weather Data with Ramya Raghavendra
Improving Traffic Prediction Using Weather Data  with Ramya RaghavendraImproving Traffic Prediction Using Weather Data  with Ramya Raghavendra
Improving Traffic Prediction Using Weather Data with Ramya Raghavendra
Spark Summit
 
A Tale of Two Graph Frameworks on Spark: GraphFrames and Tinkerpop OLAP Artem...
A Tale of Two Graph Frameworks on Spark: GraphFrames and Tinkerpop OLAP Artem...A Tale of Two Graph Frameworks on Spark: GraphFrames and Tinkerpop OLAP Artem...
A Tale of Two Graph Frameworks on Spark: GraphFrames and Tinkerpop OLAP Artem...
Spark Summit
 
No More Cumbersomeness: Automatic Predictive Modeling on Apache Spark Marcin ...
No More Cumbersomeness: Automatic Predictive Modeling on Apache Spark Marcin ...No More Cumbersomeness: Automatic Predictive Modeling on Apache Spark Marcin ...
No More Cumbersomeness: Automatic Predictive Modeling on Apache Spark Marcin ...
Spark Summit
 
Apache Spark and Tensorflow as a Service with Jim Dowling
Apache Spark and Tensorflow as a Service with Jim DowlingApache Spark and Tensorflow as a Service with Jim Dowling
Apache Spark and Tensorflow as a Service with Jim Dowling
Spark Summit
 
Apache Spark and Tensorflow as a Service with Jim Dowling
Apache Spark and Tensorflow as a Service with Jim DowlingApache Spark and Tensorflow as a Service with Jim Dowling
Apache Spark and Tensorflow as a Service with Jim Dowling
Spark Summit
 
MMLSpark: Lessons from Building a SparkML-Compatible Machine Learning Library...
MMLSpark: Lessons from Building a SparkML-Compatible Machine Learning Library...MMLSpark: Lessons from Building a SparkML-Compatible Machine Learning Library...
MMLSpark: Lessons from Building a SparkML-Compatible Machine Learning Library...
Spark Summit
 
Next CERN Accelerator Logging Service with Jakub Wozniak
Next CERN Accelerator Logging Service with Jakub WozniakNext CERN Accelerator Logging Service with Jakub Wozniak
Next CERN Accelerator Logging Service with Jakub Wozniak
Spark Summit
 
Powering a Startup with Apache Spark with Kevin Kim
Powering a Startup with Apache Spark with Kevin KimPowering a Startup with Apache Spark with Kevin Kim
Powering a Startup with Apache Spark with Kevin Kim
Spark Summit
 
Improving Traffic Prediction Using Weather Datawith Ramya Raghavendra
Improving Traffic Prediction Using Weather Datawith Ramya RaghavendraImproving Traffic Prediction Using Weather Datawith Ramya Raghavendra
Improving Traffic Prediction Using Weather Datawith Ramya Raghavendra
Spark Summit
 
Hiding Apache Spark Complexity for Fast Prototyping of Big Data Applications—...
Hiding Apache Spark Complexity for Fast Prototyping of Big Data Applications—...Hiding Apache Spark Complexity for Fast Prototyping of Big Data Applications—...
Hiding Apache Spark Complexity for Fast Prototyping of Big Data Applications—...
Spark Summit
 
How Nielsen Utilized Databricks for Large-Scale Research and Development with...
How Nielsen Utilized Databricks for Large-Scale Research and Development with...How Nielsen Utilized Databricks for Large-Scale Research and Development with...
How Nielsen Utilized Databricks for Large-Scale Research and Development with...
Spark Summit
 
Spline: Apache Spark Lineage not Only for the Banking Industry with Marek Nov...
Spline: Apache Spark Lineage not Only for the Banking Industry with Marek Nov...Spline: Apache Spark Lineage not Only for the Banking Industry with Marek Nov...
Spline: Apache Spark Lineage not Only for the Banking Industry with Marek Nov...
Spark Summit
 
Goal Based Data Production with Sim Simeonov
Goal Based Data Production with Sim SimeonovGoal Based Data Production with Sim Simeonov
Goal Based Data Production with Sim Simeonov
Spark Summit
 
Preventing Revenue Leakage and Monitoring Distributed Systems with Machine Le...
Preventing Revenue Leakage and Monitoring Distributed Systems with Machine Le...Preventing Revenue Leakage and Monitoring Distributed Systems with Machine Le...
Preventing Revenue Leakage and Monitoring Distributed Systems with Machine Le...
Spark Summit
 
Getting Ready to Use Redis with Apache Spark with Dvir Volk
Getting Ready to Use Redis with Apache Spark with Dvir VolkGetting Ready to Use Redis with Apache Spark with Dvir Volk
Getting Ready to Use Redis with Apache Spark with Dvir Volk
Spark Summit
 
Deduplication and Author-Disambiguation of Streaming Records via Supervised M...
Deduplication and Author-Disambiguation of Streaming Records via Supervised M...Deduplication and Author-Disambiguation of Streaming Records via Supervised M...
Deduplication and Author-Disambiguation of Streaming Records via Supervised M...
Spark Summit
 
MatFast: In-Memory Distributed Matrix Computation Processing and Optimization...
MatFast: In-Memory Distributed Matrix Computation Processing and Optimization...MatFast: In-Memory Distributed Matrix Computation Processing and Optimization...
MatFast: In-Memory Distributed Matrix Computation Processing and Optimization...
Spark Summit
 

Recently uploaded (20)

语法专题3-状语从句.pdf 英语语法基础部分,涉及到状语从句部分的内容来米爱上
语法专题3-状语从句.pdf 英语语法基础部分,涉及到状语从句部分的内容来米爱上语法专题3-状语从句.pdf 英语语法基础部分,涉及到状语从句部分的内容来米爱上
语法专题3-状语从句.pdf 英语语法基础部分,涉及到状语从句部分的内容来米爱上
JunZhao68
 
Understanding LLM Temperature: A comprehensive Guide
Understanding LLM Temperature: A comprehensive GuideUnderstanding LLM Temperature: A comprehensive Guide
Understanding LLM Temperature: A comprehensive Guide
Tamanna36
 
Glary Utilities Pro 5.157.0.183 Crack + Key Download [Latest]
Glary Utilities Pro 5.157.0.183 Crack + Key Download [Latest]Glary Utilities Pro 5.157.0.183 Crack + Key Download [Latest]
Glary Utilities Pro 5.157.0.183 Crack + Key Download [Latest]
Designer
 
this is the Dr. ibrahim presentations ppt.pptx
this is the Dr. ibrahim presentations ppt.pptxthis is the Dr. ibrahim presentations ppt.pptx
this is the Dr. ibrahim presentations ppt.pptx
ibrahimabdi22
 
Ppt. AP Bio_ Lecture Presentation Ch. 09.ppt
Ppt. AP Bio_ Lecture Presentation Ch. 09.pptPpt. AP Bio_ Lecture Presentation Ch. 09.ppt
Ppt. AP Bio_ Lecture Presentation Ch. 09.ppt
jimmygoat123456789
 
apidays New York 2025 - How AI is Transforming Product Management by Shereen ...
apidays New York 2025 - How AI is Transforming Product Management by Shereen ...apidays New York 2025 - How AI is Transforming Product Management by Shereen ...
apidays New York 2025 - How AI is Transforming Product Management by Shereen ...
apidays
 
apidays New York 2025 - The Evolution of Travel APIs by Eric White (Eviivo)
apidays New York 2025 - The Evolution of Travel APIs by Eric White (Eviivo)apidays New York 2025 - The Evolution of Travel APIs by Eric White (Eviivo)
apidays New York 2025 - The Evolution of Travel APIs by Eric White (Eviivo)
apidays
 
artificial intelligence (1).pptx hgggfcgfch
artificial intelligence (1).pptx hgggfcgfchartificial intelligence (1).pptx hgggfcgfch
artificial intelligence (1).pptx hgggfcgfch
DevAnshGupta609215
 
Lec 12.pdfghhjjhhjkkkkkkkkkkkjfcvhiiugcvvh
Lec 12.pdfghhjjhhjkkkkkkkkkkkjfcvhiiugcvvhLec 12.pdfghhjjhhjkkkkkkkkkkkjfcvhiiugcvvh
Lec 12.pdfghhjjhhjkkkkkkkkkkkjfcvhiiugcvvh
saifalroby72
 
IST606_SecurityManagement-slides_ 4 pdf
IST606_SecurityManagement-slides_ 4  pdfIST606_SecurityManagement-slides_ 4  pdf
IST606_SecurityManagement-slides_ 4 pdf
nwanjamakane
 
Blue Dark Professional Geometric Business Project Presentation .pdf
Blue Dark Professional Geometric Business Project Presentation .pdfBlue Dark Professional Geometric Business Project Presentation .pdf
Blue Dark Professional Geometric Business Project Presentation .pdf
mohammadhaidarayoobi
 
Data Analytics and visualization-PowerBi
Data Analytics and visualization-PowerBiData Analytics and visualization-PowerBi
Data Analytics and visualization-PowerBi
Krishnapriya975316
 
apidays New York 2025 - From UX to AX by Karin Hendrikse (Netlify)
apidays New York 2025 - From UX to AX by Karin Hendrikse (Netlify)apidays New York 2025 - From UX to AX by Karin Hendrikse (Netlify)
apidays New York 2025 - From UX to AX by Karin Hendrikse (Netlify)
apidays
 
Faces of the Future The Impact of a Data Science Course in Kerala.pdf
Faces of the Future The Impact of a Data Science Course in Kerala.pdfFaces of the Future The Impact of a Data Science Course in Kerala.pdf
Faces of the Future The Impact of a Data Science Course in Kerala.pdf
jzyphoenix
 
apidays New York 2025 - Turn API Chaos Into AI-Powered Growth by Jeremy Water...
apidays New York 2025 - Turn API Chaos Into AI-Powered Growth by Jeremy Water...apidays New York 2025 - Turn API Chaos Into AI-Powered Growth by Jeremy Water...
apidays New York 2025 - Turn API Chaos Into AI-Powered Growth by Jeremy Water...
apidays
 
Understanding Large Language Model Hallucinations: Exploring Causes, Detectio...
Understanding Large Language Model Hallucinations: Exploring Causes, Detectio...Understanding Large Language Model Hallucinations: Exploring Causes, Detectio...
Understanding Large Language Model Hallucinations: Exploring Causes, Detectio...
Tamanna36
 
The fundamental concept of nature of knowledge
The fundamental concept of nature of knowledgeThe fundamental concept of nature of knowledge
The fundamental concept of nature of knowledge
tarrebulehora
 
Covid19_Project_ Presentation.pptx
Covid19_Project_       Presentation.pptxCovid19_Project_       Presentation.pptx
Covid19_Project_ Presentation.pptx
pavipraveen37
 
Embracing AI in Project Management: Final Insights & Future Vision
Embracing AI in Project Management: Final Insights & Future VisionEmbracing AI in Project Management: Final Insights & Future Vision
Embracing AI in Project Management: Final Insights & Future Vision
KavehMomeni1
 
GROUP 7 CASE STUDY Real Life Incident.pptx
GROUP 7 CASE STUDY Real Life Incident.pptxGROUP 7 CASE STUDY Real Life Incident.pptx
GROUP 7 CASE STUDY Real Life Incident.pptx
mardoglenn21
 
语法专题3-状语从句.pdf 英语语法基础部分,涉及到状语从句部分的内容来米爱上
语法专题3-状语从句.pdf 英语语法基础部分,涉及到状语从句部分的内容来米爱上语法专题3-状语从句.pdf 英语语法基础部分,涉及到状语从句部分的内容来米爱上
语法专题3-状语从句.pdf 英语语法基础部分,涉及到状语从句部分的内容来米爱上
JunZhao68
 
Understanding LLM Temperature: A comprehensive Guide
Understanding LLM Temperature: A comprehensive GuideUnderstanding LLM Temperature: A comprehensive Guide
Understanding LLM Temperature: A comprehensive Guide
Tamanna36
 
Glary Utilities Pro 5.157.0.183 Crack + Key Download [Latest]
Glary Utilities Pro 5.157.0.183 Crack + Key Download [Latest]Glary Utilities Pro 5.157.0.183 Crack + Key Download [Latest]
Glary Utilities Pro 5.157.0.183 Crack + Key Download [Latest]
Designer
 
this is the Dr. ibrahim presentations ppt.pptx
this is the Dr. ibrahim presentations ppt.pptxthis is the Dr. ibrahim presentations ppt.pptx
this is the Dr. ibrahim presentations ppt.pptx
ibrahimabdi22
 
Ppt. AP Bio_ Lecture Presentation Ch. 09.ppt
Ppt. AP Bio_ Lecture Presentation Ch. 09.pptPpt. AP Bio_ Lecture Presentation Ch. 09.ppt
Ppt. AP Bio_ Lecture Presentation Ch. 09.ppt
jimmygoat123456789
 
apidays New York 2025 - How AI is Transforming Product Management by Shereen ...
apidays New York 2025 - How AI is Transforming Product Management by Shereen ...apidays New York 2025 - How AI is Transforming Product Management by Shereen ...
apidays New York 2025 - How AI is Transforming Product Management by Shereen ...
apidays
 
apidays New York 2025 - The Evolution of Travel APIs by Eric White (Eviivo)
apidays New York 2025 - The Evolution of Travel APIs by Eric White (Eviivo)apidays New York 2025 - The Evolution of Travel APIs by Eric White (Eviivo)
apidays New York 2025 - The Evolution of Travel APIs by Eric White (Eviivo)
apidays
 
artificial intelligence (1).pptx hgggfcgfch
artificial intelligence (1).pptx hgggfcgfchartificial intelligence (1).pptx hgggfcgfch
artificial intelligence (1).pptx hgggfcgfch
DevAnshGupta609215
 
Lec 12.pdfghhjjhhjkkkkkkkkkkkjfcvhiiugcvvh
Lec 12.pdfghhjjhhjkkkkkkkkkkkjfcvhiiugcvvhLec 12.pdfghhjjhhjkkkkkkkkkkkjfcvhiiugcvvh
Lec 12.pdfghhjjhhjkkkkkkkkkkkjfcvhiiugcvvh
saifalroby72
 
IST606_SecurityManagement-slides_ 4 pdf
IST606_SecurityManagement-slides_ 4  pdfIST606_SecurityManagement-slides_ 4  pdf
IST606_SecurityManagement-slides_ 4 pdf
nwanjamakane
 
Blue Dark Professional Geometric Business Project Presentation .pdf
Blue Dark Professional Geometric Business Project Presentation .pdfBlue Dark Professional Geometric Business Project Presentation .pdf
Blue Dark Professional Geometric Business Project Presentation .pdf
mohammadhaidarayoobi
 
Data Analytics and visualization-PowerBi
Data Analytics and visualization-PowerBiData Analytics and visualization-PowerBi
Data Analytics and visualization-PowerBi
Krishnapriya975316
 
apidays New York 2025 - From UX to AX by Karin Hendrikse (Netlify)
apidays New York 2025 - From UX to AX by Karin Hendrikse (Netlify)apidays New York 2025 - From UX to AX by Karin Hendrikse (Netlify)
apidays New York 2025 - From UX to AX by Karin Hendrikse (Netlify)
apidays
 
Faces of the Future The Impact of a Data Science Course in Kerala.pdf
Faces of the Future The Impact of a Data Science Course in Kerala.pdfFaces of the Future The Impact of a Data Science Course in Kerala.pdf
Faces of the Future The Impact of a Data Science Course in Kerala.pdf
jzyphoenix
 
apidays New York 2025 - Turn API Chaos Into AI-Powered Growth by Jeremy Water...
apidays New York 2025 - Turn API Chaos Into AI-Powered Growth by Jeremy Water...apidays New York 2025 - Turn API Chaos Into AI-Powered Growth by Jeremy Water...
apidays New York 2025 - Turn API Chaos Into AI-Powered Growth by Jeremy Water...
apidays
 
Understanding Large Language Model Hallucinations: Exploring Causes, Detectio...
Understanding Large Language Model Hallucinations: Exploring Causes, Detectio...Understanding Large Language Model Hallucinations: Exploring Causes, Detectio...
Understanding Large Language Model Hallucinations: Exploring Causes, Detectio...
Tamanna36
 
The fundamental concept of nature of knowledge
The fundamental concept of nature of knowledgeThe fundamental concept of nature of knowledge
The fundamental concept of nature of knowledge
tarrebulehora
 
Covid19_Project_ Presentation.pptx
Covid19_Project_       Presentation.pptxCovid19_Project_       Presentation.pptx
Covid19_Project_ Presentation.pptx
pavipraveen37
 
Embracing AI in Project Management: Final Insights & Future Vision
Embracing AI in Project Management: Final Insights & Future VisionEmbracing AI in Project Management: Final Insights & Future Vision
Embracing AI in Project Management: Final Insights & Future Vision
KavehMomeni1
 
GROUP 7 CASE STUDY Real Life Incident.pptx
GROUP 7 CASE STUDY Real Life Incident.pptxGROUP 7 CASE STUDY Real Life Incident.pptx
GROUP 7 CASE STUDY Real Life Incident.pptx
mardoglenn21
 

Improving Python and Spark Performance and Interoperability: Spark Summit East talk by: Wes McKinney

  • 1. Improving Python and Spark Performance and Interoperability
       Wes McKinney (@wesmckinn), Spark Summit East 2017, February 9, 2017
       www.twosigma.com. All Rights Reserved.
  • 2. Me
       •  Currently: Software Architect at Two Sigma Investments
       •  Creator of the Python pandas project
       •  PMC member for Apache Arrow and Apache Parquet
       •  Other Python projects: Ibis, Feather, statsmodels
       •  Formerly: Cloudera, DataPad, AQR
       •  Author of Python for Data Analysis
  • 3. Important Legal Information
       The information presented here is offered for informational purposes only and should not be used
       for any other purpose (including, without limitation, the making of investment decisions). Examples
       provided herein are for illustrative purposes only and are not necessarily based on actual data.
       Nothing herein constitutes: an offer to sell or the solicitation of any offer to buy any security or
       other interest; tax advice; or investment advice. This presentation shall remain the property of
       Two Sigma Investments, LP ("Two Sigma") and Two Sigma reserves the right to require the return of
       this presentation at any time.
       Some of the images, logos or other material used herein may be protected by copyright and/or
       trademark. If so, such copyrights and/or trademarks are most likely owned by the entity that created
       the material and are used purely for identification and comment as fair use under international
       copyright and/or trademark laws. Use of such image, copyright or trademark does not imply any
       association with such organization (or endorsement of such organization) by Two Sigma, nor vice versa.
       Copyright © 2017 TWO SIGMA INVESTMENTS, LP. All rights reserved.
  • 4. This talk
       •  Why some parts of PySpark are "slow"
       •  Technology that can help make things faster
       •  Work we have done to make improvements
       •  Future roadmap
  • 5. Python and Spark
       •  Spark is implemented in Scala and runs on the Java virtual machine (JVM)
       •  Spark has Python and R APIs with partial or full coverage for many parts of the Scala Spark API
       •  In some Spark tasks, Python is only a scripting front end: no interpreted Python code is
          executed once the Spark job starts
       •  Other PySpark jobs suffer performance and interoperability issues that we analyze in this talk
  • 6. Spark DataFrame performance
       Source: https://databricks.com/blog/2015/02/17/introducing-dataframes-in-spark-for-large-scale-data-science.html
  • 7. Spark DataFrame performance can be misleading
       •  Spark DataFrames are an example of Python as a DSL / scripting front end
       •  Excepting UDFs (.map(...) or sqlContext.registerFunction), no Python code is evaluated in
          the Spark job
       •  Python API calls create SQL query plans inside the JVM, so the Scala and Python versions are
          computationally identical
  • 8. Spark DataFrames as deferred DSL

       young = users[users.age < 21]
       young.groupBy("gender").count()
  • 9. Spark DataFrames as deferred DSL

       SELECT gender, COUNT(*)
       FROM users
       WHERE age < 21
       GROUP BY 1
  • 10. Spark DataFrames as deferred DSL

       Aggregation[table]
         table:
           Table: users
         metrics:
           count = Count[int64]
             Table: ref_0
         by:
           gender = Column[array(string)] 'gender' from users
         predicates:
           Less[array(boolean)]
             age = Column[array(int32)] 'age' from users
             Literal[int8] 21
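The three slides above show the same query as Python code, as the SQL it stands for, and as an expression tree. The deferred-DSL idea can be sketched with the stdlib alone; this is only an illustration (all class names here are hypothetical, not Spark's or Ibis's internals): operator overloading builds an expression tree, and a separate step renders SQL, so no data ever flows through the Python interpreter.

```python
class Column:
    """A column reference; comparisons build predicate nodes, not results."""
    def __init__(self, name):
        self.name = name

    def __lt__(self, value):
        return Predicate(self.name, "<", value)


class Predicate:
    def __init__(self, column, op, value):
        self.column, self.op, self.value = column, op, value

    def to_sql(self):
        return f"{self.column} {self.op} {self.value}"


class Grouped:
    def __init__(self, table, key):
        self.table, self.key = table, key

    def count(self):
        # Render the accumulated expression tree to SQL; no data was read
        sql = f"SELECT {self.key}, COUNT(*) FROM {self.table.name}"
        if self.table.predicate is not None:
            sql += f" WHERE {self.table.predicate.to_sql()}"
        return sql + f" GROUP BY {self.key}"


class Table:
    def __init__(self, name, columns, predicate=None):
        self.name, self.predicate, self._columns = name, predicate, columns
        for c in columns:
            setattr(self, c, Column(c))

    def __getitem__(self, predicate):
        # Filtering returns a new deferred table carrying the predicate
        return Table(self.name, self._columns, predicate)

    def groupBy(self, key):
        return Grouped(self, key)


users = Table("users", ["age", "gender"])
young = users[users.age < 21]
print(young.groupBy("gender").count())
# SELECT gender, COUNT(*) FROM users WHERE age < 21 GROUP BY gender
```

Because the Python objects only describe the computation, the JVM (or any backend) is free to plan and execute it entirely outside Python.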
  • 11. Where Python code and Spark meet
       •  Unfortunately, many PySpark jobs cannot be expressed entirely as DataFrame operations or
          other built-in Scala constructs
       •  Spark-Scala interacts with in-memory Python in key ways:
          •  Reading and writing in-memory datasets to/from the Spark driver
          •  Evaluating custom Python code (user-defined functions)
  • 12. How PySpark lambda functions work
       •  The anatomy of rdd.map(lambda x: ...) and df.withColumn(py_func(...)): a Scala RDD
          dispatches work to a pool of Python worker processes (see PythonRDD.scala)
  • 13. PySpark lambda performance problems
       •  See the 2016 talk "High Performance Python on Apache Spark"
          http://www.slideshare.net/wesm/high-performance-python-on-apache-spark
       •  Problems:
          •  Inefficient data movement (serialization / deserialization)
          •  Scalar computation model: object boxing and interpreter overhead
       •  General summary: PySpark is not currently designed to achieve high performance in the way
          that pandas and NumPy are
  • 14. Other issues with PySpark lambdas
       •  Computation model unlike what pandas users are used to
       •  In dataframe.map(f), the Python function f sees only one Row at a time
       •  A more natural and efficient vectorized API would be:
          dataframe.map_pandas(lambda df: ...)
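The gap between the two models can be illustrated without Spark. In this sketch, plain lists stand in for pandas Series, and the chunked function models the hypothetical map_pandas API from the slide: the scalar model makes one Python call (and boxes one object) per record, while the vectorized model makes one call per chunk, amortizing interpreter overhead. (Spark 2.3 later added pandas UDFs along these lines.)

```python
# Row-at-a-time vs. chunk-at-a-time UDF evaluation (stdlib sketch;
# plain lists stand in for pandas Series).

def row_at_a_time(records, f):
    # One Python function call per record: interpreter overhead scales
    # with the number of rows
    return [f(r) for r in records]


def chunk_at_a_time(chunks, f):
    # One Python function call per chunk: f operates on a whole batch,
    # so per-row overhead can be pushed into vectorized code
    out = []
    for chunk in chunks:
        out.extend(f(chunk))
    return out


records = list(range(10))
chunks = [records[:5], records[5:]]

# Scalar UDF: f is invoked 10 times
scalar = row_at_a_time(records, lambda x: x * 2)

# Vectorized UDF: f is invoked twice, once per chunk
vectorized = chunk_at_a_time(chunks, lambda c: [x * 2 for x in c])

assert scalar == vectorized == [x * 2 for x in records]
```

With real pandas Series, the per-chunk function would use NumPy-backed vectorized operations, which is where the actual speedup comes from.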
  • 15. Apache Arrow
  • 16. Apache Arrow: Process and Move Data Fast
       •  New top-level Apache project as of February 2016
       •  Collaboration amongst a broad set of OSS projects around shared needs
       •  Language-independent columnar data structures
       •  Metadata for describing schemas / chunks of data
       •  Protocol for moving data between processes with minimal serialization overhead
  • 17. High performance data interchange: today vs. with Arrow (Source: Apache Arrow)
  • 18. What does Apache Arrow give you?
       •  Zero-copy columnar data: complex table and array data structures that can reference memory
          without copying it
       •  Ultrafast messaging: language-agnostic metadata, batch/file-based and streaming binary formats
       •  Complex schema support: flat and nested data types
       •  C++, Python, and Java implementations, with integration tests
  • 19. Arrow binary wire formats
  • 20. Extreme performance to pandas from Arrow streams
  • 21. PyArrow file and streaming API

       from pyarrow import StreamReader

       reader = StreamReader(stream)

       # pyarrow.Table
       table = reader.read_all()

       # Convert to pandas
       df = table.to_pandas()

       (StreamReader was the pyarrow API at the time of this talk; modern pyarrow exposes the same
       functionality as pyarrow.ipc.open_stream.)
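The slide's StreamReader consumes the Arrow streaming format: a sequence of framed record batches read off a socket or file. The framing idea can be sketched with the stdlib alone; this toy format (length-prefixed payloads, zero-length end marker) is only an illustration and omits the schema metadata and columnar buffers the real Arrow format carries.

```python
import io
import struct

# Toy length-prefixed batch stream, illustrating the *idea* behind a
# streaming record-batch format. NOT the actual Arrow wire format.

def write_stream(sink, batches):
    for payload in batches:
        sink.write(struct.pack("<I", len(payload)))  # 4-byte length prefix
        sink.write(payload)
    sink.write(struct.pack("<I", 0))  # zero length marks end-of-stream


def read_stream(source):
    # Yield payloads until the end-of-stream marker
    while True:
        (n,) = struct.unpack("<I", source.read(4))
        if n == 0:
            return
        yield source.read(n)


buf = io.BytesIO()
write_stream(buf, [b"batch-1", b"batch-2"])
buf.seek(0)
assert list(read_stream(buf)) == [b"batch-1", b"batch-2"]
```

Because each batch is self-delimiting, a consumer like StreamReader can start converting batches to pandas before the producer has finished writing, which is what makes streaming transfers fast.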
  • 22. Making DataFrame.toPandas faster
       •  Background: Spark's toPandas transfers in-memory data from the Spark driver to Python and
          converts it to a pandas.DataFrame. It is very slow
       •  Joint work with Bryan Cutler (IBM), Li Jin (Two Sigma), and Yin Xusen (IBM). See SPARK-13534
          on JIRA
       •  Test case: transfer a 128 MB Parquet file with 8 DOUBLE columns
       (Slide footer: For illustration purposes only. Not an offer to buy or sell securities. Two Sigma
       may modify its investment approach and portfolio parameters in the future in any manner that it
       believes is consistent with its fiduciary duty to its clients. There is no guarantee that Two
       Sigma or its products will be successful in achieving any or all of their investment objectives.
       Moreover, all investments involve some degree of risk, not all of which will be successfully
       mitigated. Please see the last page of this presentation for important disclosure information.)
  • 23. conda install pyarrow -c conda-forge
  • 24. Making DataFrame.toPandas faster

       df = sqlContext.read.parquet('example2.parquet')
       df = df.cache()
       df.count()

       Then:

       %%prun -s cumulative
       dfs = [df.toPandas() for i in range(5)]
  • 25. Making DataFrame.toPandas faster

       94483943 function calls (94478223 primitive calls) in 62.492 seconds

       ncalls    tottime  percall  cumtime  percall  filename:lineno(function)
       5         1.458    0.292    62.492   12.498   dataframe.py:1570(toPandas)
       5         0.661    0.132    54.759   10.952   dataframe.py:382(collect)
       10485765  0.669    0.000    46.823   0.000    rdd.py:121(_load_from_socket)
       715       0.002    0.000    46.139   0.065    serializers.py:141(load_stream)
       710       0.002    0.000    45.950   0.065    serializers.py:448(loads)
       10485760  4.969    0.000    32.853   0.000    types.py:595(fromInternal)
       1391      0.004    0.000    7.445    0.005    socket.py:562(readinto)
       18        0.000    0.000    7.283    0.405    java_gateway.py:1006(send_command)
       5         0.000    0.000    6.262    1.252    frame.py:943(from_records)
• 26. Making DataFrame.toPandas faster
February 9, 2017 All Rights Reserved

Now, using pyarrow:

    %%prun -s cumulative
    dfs = [df.toPandas(useArrow=True) for i in range(5)]
• 27. Making DataFrame.toPandas faster
February 9, 2017 All Rights Reserved

    38585 function calls (38535 primitive calls) in 9.448 seconds

    ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
         5    0.001    0.000    9.448    1.890  dataframe.py:1570(toPandas)
         5    0.000    0.000    9.358    1.872  dataframe.py:394(collectAsArrow)
      6271    9.330    0.001    9.330    0.001  {method 'recv_into' of '_socket.socket'}
        15    0.000    0.000    9.229    0.615  java_gateway.py:860(send_command)
        10    0.000    0.000    0.123    0.012  serializers.py:141(load_stream)
         5    0.085    0.017    0.089    0.018  {method 'to_pandas' of 'pyarrow.Table'}
• 28. pip install memory_profiler
February 9, 2017 All Rights Reserved

    %%memit -i 0.0001
    pdf = None
    pdf = df.toPandas()
    gc.collect()

    peak memory: 1223.16 MiB, increment: 1018.20 MiB
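The %%memit magic above comes from the third-party memory_profiler package. As a rough standard-library analogue of what it measures, tracemalloc can report current and peak traced allocations around a block of work (the list allocation below is just a stand-in for df.toPandas()):

```python
import tracemalloc

tracemalloc.start()
# Allocate something sizable, standing in for the df.toPandas() call.
data = list(range(1_000_000))
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

# Peak usage is always at least the live usage at measurement time.
assert peak >= current > 0
```

Unlike memory_profiler, tracemalloc only sees Python-level allocations, so it undercounts memory held by native libraries such as the JVM socket buffers discussed here.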
• 29. Plot thickens: memory use
February 9, 2017 All Rights Reserved

    %%memit -i 0.0001
    pdf = None
    pdf = df.toPandas(useArrow=True)
    gc.collect()

    peak memory: 334.08 MiB, increment: 258.31 MiB
• 30. Summary of results
February 9, 2017 All Rights Reserved
•  Current version: average 12.5s (10.2 MB/s)
   •  Deserialization accounts for 88% of time; the rest is waiting for Spark to send the data
   •  Peak memory use 8x (~1 GB) the size of the dataset
•  Arrow version
   •  Average wall clock time of 1.89s (6.61x faster, 67.7 MB/s)
   •  Deserialization accounts for 1% of total time
   •  Peak memory use 2x the size of the dataset (1 memory doubling)
   •  Time for Spark to send data 25% higher (1866 ms vs 1488 ms)
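The headline numbers on this slide are consistent with each other, assuming the ~128 MB test dataset mentioned a couple of slides later. A quick back-of-the-envelope check:

```python
# Sanity-check the throughput and speedup figures, assuming a 128 MB dataset.
dataset_mb = 128
current_s, arrow_s = 12.5, 1.89

assert round(dataset_mb / current_s, 1) == 10.2   # MB/s, current toPandas
assert round(dataset_mb / arrow_s, 1) == 67.7     # MB/s, Arrow-based toPandas
assert round(current_s / arrow_s, 2) == 6.61      # overall speedup
```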
• 31. Aside: reading Parquet directly in Python
February 9, 2017 All Rights Reserved

    import pyarrow.parquet as pq

    %%timeit
    df = pq.read_table('example2.parquet').to_pandas()
    10 loops, best of 3: 175 ms per loop
• 32. Digging deeper
February 9, 2017 All Rights Reserved
•  Why does it take Spark ~1.8 seconds to send 128 MB of data over the wire?

    val collectedRows = queryExecution.executedPlan.executeCollect()  // Array[InternalRow]
    cnvtr.internalRowsToPayload(collectedRows, this.schema)
• 33. Digging deeper
February 9, 2017 All Rights Reserved
•  In our 128 MB test case, on average:
   •  75% of time is being spent collecting Array[InternalRow] from the task executors
   •  25% of the time is spent on a single-threaded conversion of all the data from Array[InternalRow] to ArrowRecordBatch
•  We can go much faster by performing the Spark SQL -> Arrow conversion locally on the task executors, then streaming the batches to Python
• 34. Future architecture
February 9, 2017 All Rights Reserved
[Diagram: four task executors each produce an Arrow RecordBatch locally; the Spark driver sends the Arrow schema, and the record batches stream directly to Python.]
• 35. Hot off the presses
February 9, 2017 All Rights Reserved
Patch from February 8: 38% perf improvement

    ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
         5    0.000    0.000    5.928    1.186  dataframe.py:1570(toPandas)
         5    0.000    0.000    5.838    1.168  dataframe.py:394(collectAsArrow)
      5919    0.005    0.000    5.824    0.001  socket.py:561(readinto)
      5919    5.809    0.001    5.809    0.001  {method 'recv_into' of '_socket.socket'}
       ...
         5    0.086    0.017    0.091    0.018  {method 'to_pandas' of 'pyarrow.Table'}
• 36. The work ahead
February 9, 2017 All Rights Reserved
•  Luckily, speeding up toPandas and speeding up Lambda / UDF functions is architecturally the same type of problem
•  Reasonably clear path to making toPandas even faster
•  How can you get involved?
   •  Keep an eye on Spark ASF JIRA
   •  Contribute to Apache Arrow (Java, C++, Python, other languages)
   •  Join the Arrow and Spark mailing lists
• 37. Thank you
February 9, 2017 All Rights Reserved
•  Bryan Cutler, Li Jin, and Yin Xusen, for building the Spark-Arrow integration
•  Apache Arrow community
•  Spark Summit organizers
•  Two Sigma and IBM, for supporting this work