Apache Big Data Seville 2016
Apache Bahir
Writing Applications using Apache Bahir
Luciano Resende
IBM | Spark Technology Center
About Me
Luciano Resende (lresende@apache.org)
• Architect and community liaison at IBM – Spark Technology Center
• Have been contributing to open source at ASF for over 10 years
• Currently contributing to the Apache Bahir, Apache Spark, Apache Zeppelin and
Apache SystemML (incubating) projects
2
@lresende1975 http://lresende.blogspot.com/ https://www.linkedin.com/in/lresende http://slideshare.net/luckbr1975 lresende
Origins of the Apache Bahir Project
MAY/2016: Established as a top-level Apache Project.
• PMC formed by Apache Spark committers/PMC members and Apache Members
• Initial contributions imported from Apache Spark
AUG/2016: Flink community joins Apache Bahir
• Initial contributions of Flink extensions
• In October 2016, Robert Metzger was elected committer
The Apache Bahir name
Naming an Apache Project is a science!!!
• We needed a name that wasn’t used yet
• Needed to be related to Spark
We ended up with: Bahir
• A name of Arabic origin that means Sparkling
• Also associated with a guy who succeeds at everything
4
Why Apache Bahir
It’s an Apache project
• And if you are here, you know what it means
What are the benefits of curating your extensions at Apache Bahir?
• Apache Governance
• Apache License
• Apache Community
• Apache Brand
5
Why Apache Bahir
Flexibility
• Release flexibility
• Releases are not bound to the platform or component release cycle
Shared infrastructure
• Release infrastructure, CI, etc.
Shared knowledge
• Collaborate with experts on both platform and component areas
6
Apache Spark
7
Apache Spark - Introduction
What is Apache Spark?
8
Spark Core – general compute engine; handles distributed task dispatching, scheduling and basic I/O functions
Spark SQL – executes SQL statements
Spark Streaming – performs streaming analytics using micro-batches
Spark ML – common machine learning and statistical algorithms
Spark GraphX – distributed graph processing framework
A large variety of data sources and formats can be supported, both on-premise or in the cloud (e.g. BigInsights (HDFS), Cloudant, dashDB, SQL DB)
Apache Spark – Spark SQL
9
▪ Unified data access: query structured data sets with SQL or Dataset/DataFrame APIs
▪ Fast, familiar query language across all of your enterprise data (RDBMS data sources, Structured Streaming data sources)
Apache Spark – Spark SQL
You can run SQL statements with the SparkSession.sql(…) interface:
val spark = SparkSession.builder()
  .appName("Demo")
  .getOrCreate()
spark.sql("create table T1 (c1 int, c2 int) stored as parquet")
val ds = spark.sql("select * from T1")
You can further transform the resulting Dataset:
val ds1 = ds.groupBy("c1").agg("c2" -> "sum")
val ds2 = ds.orderBy("c1")
The result is a DataFrame / Dataset[Row]
ds.show() displays the rows
10
Apache Spark – Spark SQL
You can read from data sources using SparkSession.read.format(…):
val spark = SparkSession.builder()
  .appName("Demo")
  .getOrCreate()
import spark.implicits._ // needed for .as[Bank] and the 'age column syntax
case class Bank(age: Integer, job: String, marital: String, education: String, balance: Integer)
// loading CSV data into a Dataset of Bank type
val bankFromCSV = spark.read.csv("hdfs://localhost:9000/data/bank.csv").as[Bank]
// loading JSON data into a Dataset of Bank type
val bankFromJSON = spark.read.json("hdfs://localhost:9000/data/bank.json").as[Bank]
// select a column value from the Dataset
bankFromCSV.select('age).show() // will return all rows of column "age" from this dataset
11
Apache Spark – Spark SQL
You can also configure a specific data source with specific options:
val spark = SparkSession.builder()
  .appName("Demo")
  .getOrCreate()
import spark.implicits._
case class Bank(age: Integer, job: String, marital: String, education: String, balance: Integer)
// loading CSV data into a Dataset of Bank type
val bankFromCSV = spark.read
  .option("header", "true") // Use first line of all files as header
  .option("inferSchema", "true") // Automatically infer data types
  .option("delimiter", " ")
  .csv("/users/lresende/data.csv")
  .as[Bank]
bankFromCSV.select('age).show() // will return all rows of column "age" from this dataset
12
Apache Spark – Spark SQL
Data Sources under the covers
• Data source registration (e.g. spark.read.format("datasource"))
• Provide a BaseRelation implementation (a minimal sketch follows this list)
• that implements support for table scans:
• TableScan, PrunedScan, PrunedFilteredScan, CatalystScan
• Detailed information available at
• http://www.spark.tc/exploring-the-apache-spark-datasource-api/
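For illustration only, a minimal relation provider could look like the sketch below; the class names DefaultSource and DemoRelation and the example format name are hypothetical, only the Spark interfaces are real:
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.sources.{BaseRelation, RelationProvider, TableScan}
import org.apache.spark.sql.types.{IntegerType, StructField, StructType}
// Registered by format name, e.g. spark.read.format("org.example.demo").load()
class DefaultSource extends RelationProvider {
  override def createRelation(sqlContext: SQLContext,
                              parameters: Map[String, String]): BaseRelation =
    new DemoRelation(sqlContext)
}
class DemoRelation(val sqlContext: SQLContext) extends BaseRelation with TableScan {
  // The schema Spark SQL exposes for this relation
  override def schema: StructType = StructType(Seq(StructField("id", IntegerType)))
  // Full table scan: produce every row of the (toy) data set
  override def buildScan(): RDD[Row] =
    sqlContext.sparkContext.parallelize(1 to 5).map(Row(_))
}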
13
Apache Spark – Spark SQL Structured Streaming
Unified programming model for streaming, interactive and batch queries
14
Image source: https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html
Considers the data stream as an unbounded table
Apache Spark – Spark SQL Structured Streaming
Regular SQL APIs
val spark = SparkSession.builder()
  .appName("Demo")
  .getOrCreate()
val input = spark.read
  .schema(schema)
  .format("csv")
  .load("input-path")
val result = input
  .select("age")
  .where("age > 18")
result.write
  .format("json")
  .save("dest-path")
Structured Streaming APIs
val spark = SparkSession.builder()
  .appName("Demo")
  .getOrCreate()
val input = spark.readStream
  .schema(schema)
  .format("csv")
  .load("input-path")
val result = input
  .select("age")
  .where("age > 18")
result.writeStream
  .format("json")
  .start("dest-path")
15
Apache Spark – Spark SQL Structured Streaming
16
Structured Streaming is an ALPHA feature
Apache Spark – Spark Streaming
17
▪ Micro-batch event processing for near-real-time analytics
▪ e.g. Internet of Things (IoT) devices, Twitter feeds, Kafka (event hub), etc.
▪ No multi-threading or parallel process programming required
Apache Spark – Spark Streaming
Also known as a discretized stream, or DStream
Abstracts a continuous stream of data
Based on micro-batching
18
Apache Spark – Spark Streaming
import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.mqtt.MQTTUtils // from Bahir's spark-streaming-mqtt
val sparkConf = new SparkConf()
  .setAppName("MQTTWordCount")
val ssc = new StreamingContext(sparkConf, Seconds(2))
// brokerUrl (e.g. "tcp://localhost:1883") and topic are application-provided values
val lines = MQTTUtils.createStream(ssc, brokerUrl, topic, StorageLevel.MEMORY_ONLY_SER_2)
val words = lines.flatMap(x => x.split(" "))
val wordCounts = words.map(x => (x, 1)).reduceByKey(_ + _)
wordCounts.print()
ssc.start()
ssc.awaitTermination()
19
Apache Spark extensions in Bahir
MQTT – Enables reading data from MQTT servers using Spark Streaming or Structured Streaming (see the sketch after this list).
• http://bahir.apache.org/docs/spark/current/spark-sql-streaming-mqtt/
• http://bahir.apache.org/docs/spark/current/spark-streaming-mqtt/
Twitter – Enables reading social data from Twitter using Spark Streaming.
• http://bahir.apache.org/docs/spark/current/spark-streaming-twitter/
Akka – Enables reading data from Akka Actors using Spark Streaming.
• http://bahir.apache.org/docs/spark/current/spark-streaming-akka/
ZeroMQ – Enables reading data from ZeroMQ using Spark Streaming.
• http://bahir.apache.org/docs/spark/current/spark-streaming-zeromq/
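As a quick sketch of the Structured Streaming MQTT source, wired up the way the spark-sql-streaming-mqtt documentation linked above describes it (the broker URL and topic here are placeholder values):
val lines = spark.readStream
  .format("org.apache.bahir.sql.streaming.mqtt.MQTTStreamSourceProvider")
  .option("topic", "sensors/temperature")
  .load("tcp://localhost:1883")
// lines is a streaming DataFrame; aggregate it and start the query as usual
val query = lines.groupBy("value").count()
  .writeStream
  .outputMode("complete")
  .format("console")
  .start()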
20
Apache Spark extensions coming soon to Bahir
WebHDFS – Enables reading data from a remote HDFS file system using the Spark SQL APIs
• https://issues.apache.org/jira/browse/BAHIR-67
CouchDB / Cloudant – Enables reading data from CouchDB NoSQL document stores using the Spark SQL APIs
21
Apache Spark extensions in Bahir
Adding Bahir extensions into your application
• Using SBT
• libraryDependencies += "org.apache.bahir" %% "spark-streaming-mqtt" % "2.1.0-SNAPSHOT"
• Using Maven
• <dependency>
<groupId>org.apache.bahir</groupId>
<artifactId>spark-streaming-mqtt_2.11</artifactId>
<version>2.1.0-SNAPSHOT</version>
</dependency>
22
Apache Spark extensions in Bahir
Submitting applications with Bahir extensions to Spark
• spark-shell
• bin/spark-shell --packages org.apache.bahir:spark-streaming-mqtt_2.11:2.1.0-SNAPSHOT …..
• spark-submit
• bin/spark-submit --packages org.apache.bahir:spark-streaming-mqtt_2.11:2.1.0-SNAPSHOT …..
23
Apache Flink
24
Apache Flink extensions in Bahir
Flink platform extensions added recently
• https://github.com/apache/bahir-flink
First release coming soon
• Release discussions have started
• Finishing up some basic documentation and examples
• Should be available soon
25
Apache Flink extensions in Bahir
ActiveMQ – Enables reading data from and publishing data to ActiveMQ servers
• https://github.com/apache/bahir-flink/blob/master/flink-connector-activemq/README.md
Flume – Enables publishing data to Apache Flume
• https://github.com/apache/bahir-flink/tree/master/flink-connector-flume
Redis – Enables writing data to Redis and publishing data to Redis PubSub (see the sketch after this list)
• https://github.com/apache/bahir-flink/blob/master/flink-connector-redis/README.md
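As an example of the Flink connector style, a Redis sink can be wired up roughly like the sketch below; it is written in Scala against the connector's Java API, and the class and package names follow the flink-connector-redis README (treat them, as well as the host and the key/value mapping, as assumptions to verify against the README):
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.redis.RedisSink
import org.apache.flink.streaming.connectors.redis.common.config.FlinkJedisPoolConfig
import org.apache.flink.streaming.connectors.redis.common.mapper.{RedisCommand, RedisCommandDescription, RedisMapper}
// Maps each (word, count) pair to a Redis HSET command on a fixed hash
class WordCountMapper extends RedisMapper[(String, Int)] {
  override def getCommandDescription(): RedisCommandDescription =
    new RedisCommandDescription(RedisCommand.HSET, "WORD_COUNTS")
  override def getKeyFromData(data: (String, Int)): String = data._1
  override def getValueFromData(data: (String, Int)): String = data._2.toString
}
val env = StreamExecutionEnvironment.getExecutionEnvironment
val conf = new FlinkJedisPoolConfig.Builder().setHost("127.0.0.1").build()
val counts: DataStream[(String, Int)] =
  env.fromElements("to be or not to be")
    .flatMap(_.split(" "))
    .map((_, 1))
    .keyBy(0)
    .sum(1)
counts.addSink(new RedisSink[(String, Int)](conf, new WordCountMapper))
env.execute("Redis sink example")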
26
Live Demo
27
IoT Simulation using MQTT
The demo environment
https://github.com/lresende/bahir-iot-demo
28
Docker environment:
- Mosquitto MQTT server
- Node.js web application that simulates Elevator IoT devices (a consumer sketch follows the metrics list)
Elevator simulator metrics:
- Weight
- Speed
- Power
- Temperature
- System
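A Spark Streaming consumer for this demo could look roughly like the sketch below; the broker port matches Mosquitto's default, but the topic name "elevator" is purely hypothetical (check the demo repository for the actual topics and payload format):
import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.mqtt.MQTTUtils
val conf = new SparkConf().setAppName("ElevatorMetrics")
val ssc = new StreamingContext(conf, Seconds(2))
// Subscribe to the simulator's MQTT topic (hypothetical name) on the local Mosquitto broker
val metrics = MQTTUtils.createStream(ssc, "tcp://localhost:1883", "elevator", StorageLevel.MEMORY_ONLY_SER_2)
// Each message arrives as a String payload; just print a few per batch here
metrics.print()
ssc.start()
ssc.awaitTermination()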
Join the Apache Bahir community!!!
29
References
Apache Bahir
http://bahir.apache.org
Documentation for Apache Spark extensions
http://bahir.apache.org/docs/spark/current/documentation/
Source Repositories
https://github.com/apache/bahir
https://github.com/apache/bahir-flink
https://github.com/apache/bahir-website
30
Image source: http://az616578.vo.msecnd.net/files/2016/03/21/6359412499310138501557867529_thank-you-1400x800-c-default.gif
