Event: TDWI Accelerate Seattle, October 16, 2017
Topic: Distributed and In-Database Analytics with R
Presenter: Debraj GuhaThakurta
Description: How to develop scalable and in-database analytics using R in Spark and SQL Server
High Performance Predictive Analytics in R and Hadoop - DataWorks Summit
Hadoop is rapidly being adopted as a major platform for storing and managing massive amounts of data, and for computing descriptive and query types of analytics on that data. However, it has a reputation for not being a suitable environment for high performance complex iterative algorithms such as logistic regression, generalized linear models, and decision trees. At Revolution Analytics we think that reputation is unjustified, and in this talk I discuss the approach we have taken to porting our suite of High Performance Analytics algorithms to run natively and efficiently in Hadoop. Our algorithms are written in C++ and R, and are based on a platform that automatically and efficiently parallelizes a broad class of algorithms called Parallel External Memory Algorithms (PEMAs). This platform abstracts both the inter-process communication layer and the data source layer, so that the algorithms can work in almost any environment in which messages can be passed among processes and with almost any data source. MPI and RPC are two traditional ways to send messages, but messages can also be passed using files, as in Hadoop. I describe how we use the file-based communication choreographed by MapReduce and how we efficiently access data stored in HDFS.
Model Building with RevoScaleR: Using R and Hadoop for Statistical Computation - Revolution Analytics
Slides from Joseph Rickert's presentation at Strata NYC 2013
"Using R and Hadoop for Statistical Computation at Scale"
http://strataconf.com/stratany2013/public/schedule/detail/30632
This document provides an overview of Apache Spark, including:
- A refresher on MapReduce and its processing model
- An introduction to Spark, describing how it differs from MapReduce in addressing some of MapReduce's limitations
- Examples of how Spark can be used, including for iterative algorithms and interactive queries
- Resources for free online training in Hadoop, MapReduce, Hive and using HBase with MapReduce and Hive
Build a Time Series Application with Apache Spark and Apache HBase - Carol McDonald
This document discusses using Apache Spark and Apache HBase to build a time series application. It provides an overview of time series data and requirements for ingesting, storing, and analyzing high volumes of time series data. The document then describes using Spark Streaming to process real-time data streams from sensors and storing the data in HBase. It outlines the steps in the lab exercise, which involves reading sensor data from files, converting it to objects, creating a Spark Streaming DStream, processing the DStream, and saving the data to HBase.
27 Aug 2013 Webinar: High Performance Predictive Analytics in Hadoop and R, presented by Mario E. Inchiosa, PhD, US Data Scientist, and Kathleen Rohrecker, Director of Product Marketing
Doug Cutting on the State of the Hadoop Ecosystem - Cloudera, Inc.
Doug Cutting, Apache Hadoop Co-founder, explains how the growth of the Hadoop ecosystem has made Hadoop a much more powerful machine, and how the continued expansion will lead to great things.
Spark & Cassandra at DataStax Meetup on Jan 29, 2015 - Sameer Farooqui
Spark is a unified analytics engine for large-scale data processing. It provides high-level APIs in Scala, Java, Python, and R, and an optimized engine that supports general computation graphs for data analysis. The document discusses Spark's architecture including its core abstraction of resilient distributed datasets (RDDs), and demos Spark's capabilities for streaming, SQL, machine learning and graph processing on large clusters.
Microsoft R Server for distributed computing, presented by กฤษฏิ์ คำตื้อ, Technical Evangelist, Microsoft (Thailand) Limited, at THE FIRST NIDA BUSINESS ANALYTICS AND DATA SCIENCES CONTEST/CONFERENCE, organized by the NIDA School of Applied Statistics and DATA SCIENCES THAILAND
This document provides an overview of HBase and why NoSQL databases like HBase were developed. It discusses how relational databases do not scale horizontally well with large amounts of data. HBase was created to address these scaling issues, inspired by Google's BigTable database. The document explains the HBase data model with rows, columns, and versions. It describes how data is stored physically in HFiles and served from memory and disk. Basic operations like put, get, and scan are also covered.
Boosting Spark Performance: An Overview of Techniques - Ahsan Javed Awan
This document provides an overview of techniques to boost Spark performance, including:
1) Phase 1 focused on memory management, code generation, and cache-aware algorithms which provided 5-30x speedups
2) Phase 2 focused on whole-stage code generation and columnar in-memory support which are now enabled by default in Spark 2.0+
3) Additional techniques discussed include choosing an optimal garbage collector, using multiple small executors, exploiting data locality, disabling hardware prefetchers, and keeping hyper-threading on.
The document provides an overview of the Apache Hadoop ecosystem. It describes Hadoop as a distributed, scalable storage and computation system based on Google's architecture. The ecosystem includes many related projects that interact, such as YARN, HDFS, Impala, Avro, Crunch, and HBase. These projects innovate independently but work together, with Hadoop serving as a flexible data platform at the core.
Talk at Hug FR on December 4, 2012 about the new Apache Drill project. Notably, this talk includes an introduction to the converging specification for the logical plan in Drill.
Spark Based Distributed Deep Learning Framework For Big Data Applications - Humoyun Ahmedov
Deep Learning architectures, such as deep neural networks, are currently among the hottest emerging areas of data science, especially for Big Data. Deep Learning can be effectively exploited to address major issues of Big Data, such as fast information retrieval, data classification, and semantic indexing. In this work, we designed and implemented a framework to train deep neural networks using Spark, a fast and general data flow engine for large-scale data processing, which can utilize cluster computing to train large-scale deep networks. Training Deep Learning models requires extensive data and computation. Our proposed framework can accelerate training by distributing model replicas, via stochastic gradient descent, among cluster nodes for data residing on HDFS.
Apache Spark is a fast and general engine for large-scale data processing. It was originally developed in 2009 and is now supported by Databricks. Spark provides APIs in Java, Scala, Python and can run on Hadoop, Mesos, standalone or in the cloud. It provides high-level APIs like Spark SQL, MLlib, GraphX and Spark Streaming for structured data processing, machine learning, graph analytics and stream processing.
Presto: Optimizing Performance of SQL-on-Anything Engine - DataWorks Summit
Presto, an open source distributed SQL engine, is widely recognized for its low-latency queries, high concurrency, and native ability to query multiple data sources. Proven at scale in a variety of use cases at Airbnb, Bloomberg, Comcast, Facebook, FINRA, LinkedIn, Lyft, Netflix, Twitter, and Uber, in the last few years Presto has experienced unprecedented growth in popularity in both on-premises and cloud deployments over Object Stores, HDFS, NoSQL and RDBMS data stores.
With the ever-growing list of connectors to new data sources such as Azure Blob Storage, Elasticsearch, Netflix Iceberg, Apache Kudu, and Apache Pulsar, the recently introduced Cost-Based Optimizer in Presto must account for heterogeneous inputs with differing and often incomplete data statistics. This talk explores this topic in detail, discusses the best use cases for Presto across several industries, and presents recent Presto advancements such as geospatial analytics at scale and the project roadmap going forward.
Presentation given by US Chief Scientist, Mario Inchiosa, at the June 2013 Hadoop Summit in San Jose, CA.
ABSTRACT: Hadoop is rapidly being adopted as a major platform for storing and managing massive amounts of data, and for computing descriptive and query types of analytics on that data. However, it has a reputation for not being a suitable environment for high performance complex iterative algorithms such as logistic regression, generalized linear models, and decision trees. At Revolution Analytics we think that reputation is unjustified, and in this talk I discuss the approach we have taken to porting our suite of High Performance Analytics algorithms to run natively and efficiently in Hadoop. Our algorithms are written in C++ and R, and are based on a platform that automatically and efficiently parallelizes a broad class of algorithms called Parallel External Memory Algorithms (PEMAs). This platform abstracts both the inter-process communication layer and the data source layer, so that the algorithms can work in almost any environment in which messages can be passed among processes and with almost any data source. MPI and RPC are two traditional ways to send messages, but messages can also be passed using files, as in Hadoop. I describe how we use the file-based communication choreographed by MapReduce and how we efficiently access data stored in HDFS.
RISELab: Enabling Intelligent Real-Time Decisions - Jen Aman
Spark Summit East Keynote by Ion Stoica
A long-standing grand challenge in computing is to enable machines to act autonomously and intelligently: to rapidly and repeatedly take appropriate actions based on information in the world around them. To address this challenge, at UC Berkeley we are starting a new five-year effort that focuses on the development of data-intensive systems that provide Real-Time Intelligence with Secure Execution (RISE). Following in the footsteps of AMPLab, RISELab is an interdisciplinary effort bringing together researchers across AI, robotics, security, and data systems. In this talk I'll present our research vision and then discuss some of the applications that will be enabled by RISE technologies.
Drill can query JSON data stored in various data sources like HDFS, HBase, and Hive. It allows running SQL queries over JSON data without requiring a fixed schema. The document describes how Drill enables ad-hoc querying of JSON-formatted Yelp business review data using SQL, providing insights faster than traditional approaches.
This document provides an overview of Apache Spark, including:
- What Spark is and how it differs from MapReduce by running computations in memory for improved performance on iterative algorithms.
- Examples of Spark's core APIs like RDDs (Resilient Distributed Datasets) and transformations like map, filter, reduceByKey.
- How Spark programs are executed through a DAG (Directed Acyclic Graph) and translated to physical execution plans with stages and tasks.
BKK16-408B Data Analytics and Machine Learning From Node to Cluster - Linaro
Linaro is building an OpenStack based Developer Cloud. Here we present what was required to bring OpenStack to 64-bit ARM, the pitfalls, successes and lessons learnt; what’s missing and what’s next.
The document discusses resource tracking for Hadoop and Storm clusters at Yahoo. It describes how Yahoo developed tools over three years to track resource usage at the application, cluster, queue, user and project levels. This includes capturing CPU and memory usage for Hadoop YARN applications and Storm topologies. The data is stored and made available through dashboards and APIs. Yahoo also calculates total cost of ownership for Hadoop and converts resource usage to estimated monthly costs for projects. This visibility into usage and costs helps with capacity planning, operational efficiency, and ensuring fairness across grid users.
The increasing availability of mobile phones with embedded GPS devices and sensors has spurred the use of vehicle telematics in recent years. Telematics provides detailed and continuous information about a vehicle, such as its location, speed, and movement. Vehicle telematics can be further linked with other spatial data to provide context for understanding driving behaviors at a detailed level. However, the collection of high-frequency telematics data results in huge volumes of data that must be processed efficiently, and the raw sensor and GPS data must be properly pre-processed and transformed to extract signal relevant to downstream processes. In addition, driving behavior often depends on the spatial context, so the analysis of telematics must be contextualized using spatial and real-time traffic data.
Our talk covers the promises and challenges of telematics data. We present a framework for large-scale telematics data analysis using Apache big data tools (Hadoop, Hive, Spark, Kafka, etc.). We discuss common techniques to load and transform telematics data. We then present how to use machine learning on telematics data to derive insights about driving safety.
Speakers
Yanwei Zhang, Senior Data Scientist II, Uber
Neil Parker, Senior Software Engineer, Uber
MADlib Architecture and Functional Demo on How to Use MADlib/PivotalR - PivotalOpenSourceHub
This document discusses the MADlib architecture for performing scalable machine learning and analytics on large datasets using massively parallel processing. It describes how MADlib implements algorithms like linear regression across distributed database segments to solve challenges like multiplying data across nodes. It also discusses how MADlib uses a convex optimization framework to iteratively solve machine learning problems and the use of streaming algorithms to compute analytics in a single data scan. Finally, it outlines how the MADlib architecture provides scalable machine learning capabilities to data scientists through interfaces like PivotalR.
On-Prem Solution for the Selection of Wind Energy Models - Databricks
This document summarizes Vestas Wind Systems' work on using deep learning models to improve wind resource modeling. Vestas has developed a tool called SiteHunt that provides wind resource data at different resolutions to help identify potential wind farm sites. The company is exploring using deep learning models to downscale lower resolution wind data to higher resolutions. An initial proof of concept showed a deep neural network improved downscaling accuracy compared to traditional methods. Ongoing work includes testing more advanced neural network architectures and automating the end-to-end modeling process.
Advanced Threat Detection on Streaming Data - Carol McDonald
The document discusses using a stream processing architecture to enable real-time detection of advanced threats from large volumes of streaming data. The solution ingests data using fast distributed messaging like Kafka or MapR Streams. Complex event processing with Storm and Esper is used to detect patterns. Data is stored in scalable NoSQL databases like HBase and analyzed using machine learning. The parallelized, partitioned architecture allows for high performance and scalability.
R is a popular open-source statistical programming language and software environment for predictive analytics. It has a large community and ecosystem of packages that allow data scientists to solve various problems. Microsoft R Server is a scalable platform that allows R to handle large datasets beyond memory capacity by distributing computations across nodes in a cluster and storing data on disk in efficient column-based formats. It provides high performance through parallelization and rewriting algorithms in C++.
Spark on Hadoop is highly scalable. Cloud computing is highly scalable. R, the extensible open-source data science software, not so much. But what happens when we combine Spark on Hadoop, cloud computing, and Microsoft R Server into a scalable data science platform? Imagine being able to explore, transform, and model data of any size from your favorite R environment. Now imagine deploying the resulting models, with just a few clicks, as a scalable, cloud-based web services API. In this session, Sascha Dittmann shows how you can use your R code, thousands of open-source R packages, and distributed implementations of the most popular machine learning algorithms to do exactly that. He demonstrates how to create an HDInsight Spark cluster including a Microsoft R Server cluster, and how to make the resulting model available in SQL Server or as a Swagger-based API for application developers.
This document discusses building a scalable data science platform with R. It describes R as a popular statistical programming language with over 2.5 million users. It notes that while R is widely used, its open source nature means it lacks enterprise capabilities for large-scale use. The document then introduces Microsoft R Server as a way to bring enterprise capabilities like scalability, efficiency, and support to R in order to make it suitable for production use on big data problems. It provides examples of using R Server with Hadoop and HDInsight on the Azure cloud to operationalize advanced analytics workflows from data cleaning and modeling to deployment as web services at scale.
Revolution R Enterprise - Portland R User Group, November 2013 - Revolution Analytics
This document provides a summary of a presentation given at the Revolution R Enterprise User Group meeting in Portland on November 13, 2013. The presentation introduced Revolution R Enterprise as a platform for performing high performance big data analytics using the R programming language. Key capabilities of Revolution R Enterprise include parallel and distributed computing, integration with various data sources, and deploying models and results across different platforms. A demo was given showing how Revolution R Enterprise can be used to analyze large datasets using algorithms pre-parallelized for high performance.
Microsoft R Server allows users to run R code on large datasets in a distributed, parallel manner across SQL Server, Spark, and Hadoop without code changes. It provides scalable machine learning algorithms and tools to operationalize models for real-time scoring. The document discusses how R code can be run remotely on Hadoop and Spark clusters using technologies like RevoScaleR and Sparklyr for scalability.
Intro to Big Data Analytics using Microsoft Machine Learning Server with Spark - Alex Zeltov
Alex Zeltov - Intro to Big Data Analytics using Microsoft Machine Learning Server with Spark
By combining enterprise-scale R analytics software with the power of Apache Hadoop and Apache Spark, Microsoft R Server for HDP or HDInsight gives you the scale and performance you need. Multi-threaded math libraries and transparent parallelization in R Server handle up to 1,000x more data at up to 50x faster speeds than open-source R, which helps you train more accurate models for better predictions. R Server works with the open-source R language, so all of your R scripts run without changes.
Microsoft Machine Learning Server is your flexible enterprise platform for analyzing data at scale, building intelligent apps, and discovering valuable insights across your business with full support for Python and R. Machine Learning Server meets the needs of all constituents of the process – from data engineers and data scientists to line-of-business programmers and IT professionals. It offers a choice of languages and features algorithmic innovation that brings the best of open source and proprietary worlds together.
R support is built on a legacy of Microsoft R Server 9.x and Revolution R Enterprise products. Significant machine learning and AI capabilities enhancements have been made in every release. In 9.2.1, Machine Learning Server adds support for the full data science lifecycle of your Python-based analytics.
This meetup will NOT be a data science intro or an R programming intro. It is about working with data and big data on MLS.
- How to scale R
- Working with R and Hadoop + Spark
- Demo of MLS on an HDP/HDInsight server with RStudio
- How to operationalize model deployment using the MLS web-service operationalization features, on an MLS server or on the cloud Azure ML (PaaS) offering.
Speaker Bio:
Alex Zeltov is a Big Data Solutions Architect / Software Engineer / Programmer Analyst / Data Scientist with over 19 years of industry experience in Information Technology, most recently in Big Data and Predictive Analytics. He currently works as a Global Black Belt Technical Specialist at Microsoft, where he concentrates on Big Data and Advanced Analytics use cases. Prior to joining Microsoft, he worked as a Sr. Solutions Engineer at Hortonworks, where he specialized in the HDP and HDF platforms.
6th Session - Application areas in the search for advanced statistical technologies... - Jürgen Ambrosi
In this session we will see, with the usual hands-on demo approach, how to use the R language to perform value-added analyses.
We will experience first-hand the parallelization performance of the algorithms, a fundamental aspect in helping the researcher reach his goals.
In this session we will be joined by Lorenzo Casucci, Data Platform Solution Architect at Microsoft.
A look under the hood at Apache Spark's API and engine evolutions - Databricks
Spark has evolved its APIs and engine over the last 6 years to combine the best aspects of previous systems like databases, MapReduce, and data frames. Its latest structured APIs like DataFrames provide a declarative interface inspired by data frames in R/Python for ease of use, along with optimizations from databases for performance and future-proofing. This unified approach allows Spark to scale massively like MapReduce while retaining flexibility.
This document provides an agenda and summaries for a meetup on introducing DataFrames and R on Apache Spark. The agenda includes overviews of Apache Spark 1.3, DataFrames, R on Spark, and large scale machine learning on Spark. There will also be discussions on news items, contributions so far, what's new in Spark 1.3, more data source APIs, what DataFrames are, writing DataFrames, and DataFrames with RDDs and Parquet. Presentations will cover Spark components, an introduction to SparkR, and Spark machine learning experiences.
Title: Scalable R
Event description:
During this short session you will be introduced to Microsoft R for big data and its integration into (not only) the Microsoft environment (SQL Server / Hadoop), with a showcase of tools and code.
About speaker:
Michal Marusan's background is in data warehousing and business intelligence on massively parallel database engines, but for more than the last five years he has been working on numerous Big Data and Advanced Analytics projects with customers mainly from the Telco, Banking and Transportation industries.
Michal's focus and passion is helping customers implement new analytical methods in their business environments to drive data-driven decisions and generate new business insights, both in cloud and on-premises systems.
Michal is a member of the Global Black Belt team, CEE Advanced Analytics and Big Data TSP, at Microsoft.
Registration:
@Meetup.com group's event here & @Eventbrite registration here (if you use both, your seat is guaranteed). You can also find our event @Facebook here.
[Disclaimer: If you use both (Meetup.com & Eventbrite) or at least one of them, your seat is guaranteed; if you just mark "going" @ this Facebook event, we can't guarantee your seat.]
Language of the event: R & Slovak
------------------------------------
R <- Slovakia [R enthusiasts and users, data scientists and statisticians of all levels from Slovakia]
------------------------------------
This meetup group is for Data Scientists, Statisticians, Economists and Data Enthusiasts using R for data analysis and data visualization. The goals are to provide R enthusiasts a place to share ideas and learn from each other about how best to apply the language and tools to ever-evolving challenges in the vast realm of data management, processing, analytics, and visualization.
--
PyData is a group for users and developers of data analysis tools to share ideas and learn from each other. We gather to discuss how best to apply Python tools, as well as those using R and Julia, to meet the evolving challenges in data management, processing, analytics, and visualization. PyData groups, events, and conferences aim to provide a venue for users across all the various domains of data analysis to share their experiences and their techniques. PyData is organized by NumFOCUS.org, a 501(c)3 non-profit in the United States.
This document provides a history and market overview of Apache Spark. It discusses the motivation for distributed data processing due to increasing data volumes, velocities and varieties. It then covers brief histories of Google File System, MapReduce, BigTable, and other technologies. Hadoop and MapReduce are explained. Apache Spark is introduced as a faster alternative to MapReduce that keeps data in memory. Competitors like Flink, Tez and Storm are also mentioned.
HKOSCon18 - Chetan Khatri - Scaling TB's of Data with Apache Spark and Scala ... - Chetan Khatri
This document summarizes a presentation about scaling terabytes of data with Apache Spark and Scala. The key points are:
1) The presenter discusses how to use Apache Spark and Scala to process large-scale data in a distributed manner across clusters. Spark abstractions like RDDs, DataFrames and Datasets are covered.
2) A case study is presented about reengineering a data processing platform for a retail business to improve performance. Changes included parallelizing jobs, tuning Spark hyperparameters, and building a fast data architecture using Spark, Kafka and data lakes.
3) Performance was improved through techniques like dynamic resource allocation in YARN, reducing memory and cores per executor to better utilize cluster resources, and processing data
Big Data Analytics with Hadoop, MongoDB and SQL Server - Mark Kromer
This document discusses SQL Server and big data analytics projects in the real world. It covers the big data technology landscape, big data analytics, and three big data analytics scenarios using different technologies like Hadoop, MongoDB, and SQL Server. It also discusses SQL Server's role in the big data world and how to get data into Hadoop for analysis.
Big data analysis using SparkR - Dipendra Kusi
SparkR enables large scale data analysis from R by leveraging Apache Spark's distributed processing capabilities. It allows users to load large datasets from sources like HDFS, run operations like filtering and aggregation in parallel, and build machine learning models like k-means clustering. SparkR also supports data visualization and exploration through packages like ggplot2. By running R programs on Spark, users can analyze datasets that are too large for a single machine.
The document provides an overview of data science with Python and integrating Python with Hadoop and Apache Spark frameworks. It discusses:
- Why Python should be integrated with Hadoop and the ecosystem including HDFS, MapReduce, and Spark.
- Key concepts of Hadoop including HDFS for storage, MapReduce for processing, and how Python can be integrated via APIs.
- Benefits of Apache Spark like speed, simplicity, and efficiency through its RDD abstraction and how PySpark enables Python access.
- Examples of using Hadoop Streaming and PySpark to analyze data and determine word counts from documents.
Extending the R API for Spark with sparklyr and Microsoft R Server with Ali Z... - Databricks
There's a growing number of data scientists who use R as their primary language. While the SparkR API has made tremendous progress since release 1.6, with major advancements in Apache Spark 2.0 and 2.1, it can be difficult for traditional R programmers to embrace the Spark ecosystem.
In this session, Zaidi will discuss the sparklyr package, which is a feature-rich and tidy interface for data science with Spark, and will show how it can be coupled with Microsoft R Server and extended with its lower-level API to become a full, first-class citizen of Spark. Learn how easy it is to go from single-threaded, memory-bound R functions to multi-threaded, multi-node, out-of-memory applications that can be deployed in a distributed cluster environment with a minimal amount of code changes. You'll also get best practices for reproducibility and performance by looking at a real-world case study of default risk classification and prediction entirely through R and Spark.
Microsoft R enables enterprise-wide, scalable experimental data science and operational machine learning by providing a collection of servers and tools that extend the capabilities of open-source R. In these slides, we give a quick introduction to the Microsoft R Server architecture and a comprehensive overview of ScaleR, the core library of Microsoft R that enables parallel execution and the use of external data frames (XDFs). A tutorial-like presentation covering how to: 1) set up the environments, 2) read data, 3) process & transform, 4) analyse, summarize, visualize, 5) learn & predict, and finally 6) deploy and consume (using mrsdeploy).
2. • Introduction to R
• Benefits and challenges
• R in Apache Spark: Distributed computing
• R in Databases: In-DB intelligence
3. What is R?
• The most popular statistical & ML programming language
• A data visualization tool
• Open source
Language | Platform | Community | Ecosystem
• 3+M users
• Taught in most universities
• Thriving user groups worldwide
• 5th in the 2016 IEEE Spectrum ranking
• ~40% of professional analysts prefer R (highest among R, SAS, Python)
• 10,000+ contributed packages
• Many common use cases across industry
• Rich application & platform integration
4. R Adoption is on a tear
• 76% of analytic professionals use R
• 36% select R as their primary tool
[Charts: R usage growth, Rexer Data Miner Survey 2007-2015; 2016 IEEE Spectrum language ranking]
9. Data processing and modeling with SparkR
• MLlib: Apache Spark's scalable machine learning library
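To make the SparkR workflow concrete, a minimal sketch, assuming a local Spark 2.x installation with the SparkR package on the library path (SparkR replaces dots in column names with underscores, hence Sepal_Length; spark.glm is SparkR's wrapper around MLlib's GLM):

library(SparkR)
sparkR.session(master = "local[*]")       # start a local Spark session

df <- createDataFrame(iris)               # distribute an R data frame as a SparkDataFrame
head(summarize(groupBy(df, df$Species), count = n(df$Species)))

# fit a Gaussian GLM with MLlib through the SparkR API
model <- spark.glm(df, Sepal_Length ~ Sepal_Width + Species, family = "gaussian")
summary(model)

sparkR.session.stop()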
10. sparklyr: R interface for Apache Spark
Source: http://spark.rstudio.com/
• Easy installation from CRAN
• Loads data into a Spark DataFrame from local R data frames, Hive tables, CSV, JSON, and Parquet files
• Connects to both local instances of Spark and remote Spark clusters
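As a quick illustration of these points, a minimal sparklyr sketch, assuming a local Spark installation (e.g. via sparklyr::spark_install()); the HDFS path in the commented line is hypothetical:

library(sparklyr)
sc <- spark_connect(master = "local")     # or a remote cluster

# load data into Spark from a local R data frame...
mtcars_tbl <- copy_to(sc, mtcars, "mtcars")

# ...or from files (CSV shown; JSON and Parquet readers work the same way)
# flights_tbl <- spark_read_csv(sc, "flights", "hdfs:///data/flights.csv")

spark_disconnect(sc)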
11. dplyr and ML in sparklyr
• Includes three families of functions for the machine learning pipeline:
  • ml_*: machine learning algorithms for analyzing data, provided by the spark.ml package (K-Means, GLM, LR, Survival Regression, DT, RF, GBT, PCA, Naive Bayes, Multilayer Perceptron, LDA)
  • ft_*: feature transformers for manipulating individual features
  • sdf_*: functions for manipulating SparkDataFrames
• Provides a complete dplyr backend (%>%) for data manipulation and analysis
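A short sketch of that pipeline style, under the same local-Spark assumption as above (exact ml_* signatures vary slightly across sparklyr versions):

library(sparklyr)
library(dplyr)
sc <- spark_connect(master = "local")
iris_tbl <- copy_to(sc, iris, "iris")

# dplyr verbs are translated to Spark SQL and executed inside Spark
iris_tbl %>%
  group_by(Species) %>%
  summarise(mean_petal_length = mean(Petal_Length, na.rm = TRUE)) %>%
  collect()

# ml_*: k-means on two features via spark.ml
km <- ml_kmeans(iris_tbl, ~ Petal_Length + Petal_Width, k = 3)

spark_disconnect(sc)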
12. h2o: prediction engine in R
http://www.h2o.ai/product/
• Open source ML platform, optimized for in-memory, distributed, parallel ML
• Data manipulation and modeling on H2OFrames: base R functions plus h2o-prefixed functions
  • Transformations: h2o.group_by(), h2o.impute()
  • Statistics: h2o.summary(), h2o.quantile(), h2o.mean()
  • Algorithms: h2o.glm(), h2o.naiveBayes(), h2o.deeplearning(), h2o.kmeans(), ...
• rsparkling package: h2o on Spark (https://github.com/h2oai/rsparkling)
  • Provides bindings to h2o's machine learning algorithms as an extension package for sparklyr
  • Simple data conversion: SparkDataFrame -> H2OFrame
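For flavor, a minimal h2o sketch in plain R, assuming the h2o package is installed and a local cluster can be started:

library(h2o)
h2o.init()                                # start or attach to a local H2O cluster

hf <- as.h2o(mtcars)                      # copy an R data frame into an H2OFrame

# fit a Gaussian GLM with h2o's distributed algorithm
fit <- h2o.glm(x = c("wt", "hp"), y = "mpg",
               training_frame = hf, family = "gaussian")
print(fit)

h2o.shutdown(prompt = FALSE)              # tear down the local cluster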
13. ML Server 9.x: Scale-out R
• 100% compatible with open source R: virtually any code/package that works today with R will work in ML Server
• Ability to parallelize any R function: ideal for parameter sweeps, simulation, and scoring
• Wide range of scalable and distributed rx-prefixed functions in the RevoScaleR package
  • Transformations: rxDataStep()
  • Statistics: rxSummary(), rxQuantile(), rxChiSquaredTest(), rxCrossTabs(), ...
  • Algorithms: rxLinMod(), rxLogit(), rxKmeans(), rxBTrees(), rxDForest(), ...
  • Parallelism: rxSetComputeContext()
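A minimal sketch of the "parallelize any R function" point, assuming an ML Server/RevoScaleR installation; the simulation function is hypothetical, and rxElemArg spreads the parameter values across parallel workers:

library(RevoScaleR)
rxSetComputeContext("localpar")           # parallel execution on local cores

simulate <- function(s) mean(rnorm(1e5, sd = s))   # hypothetical per-parameter job

# parameter sweep: one call per value of s, run in parallel
results <- rxExec(simulate, s = rxElemArg(seq(0.5, 4, by = 0.5)))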
15. ScaleR library: parallel and portable for Big Data
• Streams data into blocks from sources such as Hive tables, CSV, Parquet, XDF, ODBC and SQL Server
• ScaleR algorithms run across multiple cores and nodes in parallel at high speed
• Interim results are collected and combined analytically to produce the output over the entire data set
• The XDF file format is optimised for the ScaleR library and significantly speeds up iterative algorithm processing
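To illustrate the XDF point, a small sketch assuming RevoScaleR and a hypothetical CSV source file:

library(RevoScaleR)

# one-time conversion of a CSV source into the block-structured XDF format
rxImport(inData = "airline_20MM.csv", outFile = "airline_20MM.xdf", overwrite = TRUE)

# iterative ScaleR algorithms then stream the XDF blocks efficiently
rxGetInfo("airline_20MM.xdf", getVarInfo = TRUE)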
16. Write once - deploy anywhere (WODA)
ScaleR is portable across multiple platforms (local, Spark, SQL Server, etc.). Models can be trained in one environment and deployed in another: the compute-context part of the script sets where the model will run, while the functional modeling part does not need to change to run in Spark.

### IN SPARK: SET UP SPARK/HADOOP ENVIRONMENT ###
mySparkCC <- RxSpark()
rxSetComputeContext(mySparkCC)            # Spark compute context

### CREATE HDFS FILE SYSTEM AND FILE OBJECTS ###
hdfsFS <- RxHdfsFileSystem()
AirlineDataSet <- RxXdfData("airline_20MM.xdf", fileSystem = hdfsFS)

### LOCAL PARALLEL PROCESSING: LINUX OR WINDOWS ###
myLocalCC <- "localpar"
rxSetComputeContext(myLocalCC)            # local compute context

### CREATE NATIVE FILE SYSTEM AND FILE OBJECTS ###
linuxFS <- RxNativeFileSystem()
AirlineDataSet <- RxXdfData("airline_20MM.xdf", fileSystem = linuxFS)

### ANALYTICAL PROCESSING: IDENTICAL IN EITHER CONTEXT ###
### Statistical summary of the data
rxSummary(~ ArrDelay + DayOfWeek, data = AirlineDataSet, reportProgress = 1)

### Linear model and plot
hdfsXdfArrLateLinMod <- rxLinMod(ArrDelay ~ DayOfWeek + CRSDepTime, data = AirlineDataSet)
plot(hdfsXdfArrLateLinMod$coefficients)
17. Spark clusters in Azure HDInsight
• Provisions Azure compute resources with Spark 2.1 installed and configured
• Supports multiple versions (e.g. Spark 1.6)
• Stores data in Azure Blob storage (WASB), Azure Data Lake Store or local HDFS
18. ML Server Spark cluster architecture
[Diagram: a master R process runs with ML Server on the edge node; Apache YARN and Spark coordinate worker R processes on the data nodes, over data in distributed storage.]
19. Model deployment using ML Server operationalization services (mrsdeploy)
[Diagram: a data scientist uses Microsoft R Client (mrsdeploy package) for remote execution and publishService against a Microsoft ML Server configured for operationalizing R analytics; developers and data scientists then consume the published services.]
• Easy setup: in-cloud or on-prem, adding nodes to scale, high availability & load balancing, remote execution server
• Easy deployment, easy integration, easy consumption
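A sketch of the publish step with mrsdeploy, assuming an operationalization node is configured and reachable; the URL, credentials, model object (airlineLM) and scoring function are hypothetical:

library(mrsdeploy)
remoteLogin("http://localhost:12800", username = "admin",
            password = "<password>", session = FALSE)

# hypothetical scoring function wrapping a previously trained model `airlineLM`
scoreDelay <- function(dayOfWeek, crsDepTime) {
  rxPredict(airlineLM,
            data = data.frame(DayOfWeek = dayOfWeek, CRSDepTime = crsDepTime))
}

# publish it as a versioned web service consumable over REST/Swagger
api <- publishService(
  "delayService",
  code    = scoreDelay,
  model   = airlineLM,
  inputs  = list(dayOfWeek = "character", crsDepTime = "numeric"),
  outputs = list(answer = "data.frame"),
  v       = "1.0.0"
)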
24. ML Server on Hadoop/HDInsight scales to hundreds of nodes, billions of rows and terabytes of data
[Chart: elapsed time vs. billions of rows for logistic regression on the NYC Taxi dataset, up to 2.2 TB]
25. Base and scalable approaches comparison

Approach     Scalability                     Spark  Hadoop  SQL Server  Teradata  Support
CRAN R(1)    Single machines                 -      -       -           -         Community
SparkR       Single + distributed computing  X      -       -           -         Community
sparklyr     Single + distributed computing  X      -       -           -         Community
h2o          Single + distributed computing  X      X       -           -         Community
RevoScaleR   Single + distributed computing  X      X       X           X         Enterprise

(1) CRAN R indicates no additional R packages installed.
tinyurl.com/Strata2017R
https://aka.ms/kdd2017r
30. For Oracle in-database analytics, see: https://www.oracle.com/database/advanced-analytics/index.html
31. In-database machine learning
Develop -> Train -> Deploy -> Consume
• Develop, explore and experiment in your favorite IDE
• Train models with sp_execute_external_script and save the models in the database
• Deploy your ML scripts with sp_execute_external_script and predict using the models
• Make your apps/reports intelligent by consuming the predictions
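On the R side, the same train-and-predict pattern can be pushed into the database through RevoScaleR's SQL Server compute context; a minimal sketch with a hypothetical connection string, table, and columns:

library(RevoScaleR)

connStr <- "Driver=SQL Server;Server=myserver;Database=AirlineDB;Trusted_Connection=True"

# computations now run inside SQL Server, next to the data
rxSetComputeContext(RxInSqlServer(connectionString = connStr))

airlineTable <- RxSqlServerData(table = "AirlineDemoSmall",
                                connectionString = connStr)

rxSummary(~ ArrDelay + DayOfWeek, data = airlineTable)        # in-database summary
model <- rxLinMod(ArrDelay ~ DayOfWeek, data = airlineTable)  # in-database training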
32. • Eliminate data movement
• Operationalize ML scripts and models
• Enterprise-grade performance and scale
[Diagram: SQL transformations, relational data, analytics library]