The document summarizes an analysis of Meetup streaming data using various technologies. It discusses consuming the Meetup API data stream, running MapR services on an AWS VM, and implementing a Lambda architecture for batch and real-time processing. It also covers inserting streaming data from Kafka into HBase, moving data from HBase to Hive, visualizing trends by state, and performing sentiment analysis on event comments.
Spark Streaming, Machine Learning and meetup.com streaming API - Sergey Zelvenskiy
Spark Streaming allows processing of live data streams using the Spark framework. This document discusses using Spark Streaming to process event streams from Meetup.com, including RSVP data and event metadata. It describes extracting features from event descriptions, clustering events based on these features, and using the results to recommend connections between Meetup members with similar interests.
Spark (Structured) Streaming vs. Kafka Streams - two stream processing platfo... - Guido Schmutz
This talk compares Spark Structured Streaming and Kafka Streams. Spark Structured Streaming runs on a Spark cluster and lets you reuse existing Spark investments, while Kafka Streams is a Java library that provides low-latency continuous processing. Both platforms support stateful operations such as windows, aggregations and joins. Spark Structured Streaming supports multiple languages but has higher latency due to micro-batching, while Kafka Streams currently supports only Java but offers lower-latency continuous processing.
Closing the Loop in Extended Reality with Kafka Streams and Machine Learning ... - confluent
We’ve built a real-time streaming platform that enables prediction based on user behavior, with events occurring in virtual and augmented reality environments. The solution enables organizations to train people in an extended reality environment where real-life training would be costly and dangerous. Kafka Streams analyzes spatial and event data to detect gestural features and analyze user behavior in real time, so that we can predict mistakes the user might make. Kafka is the backbone of our real-time analytics and extended reality communication platform, with our cluster and applications deployed on Kubernetes.
In this talk, we will mainly focus on the following: 1. Why Extended Reality with Kafka is a step in the right direction. 2. Architecture and the power of Schema Registry in building a generic platform for pluggable XR apps and analytics models. 3. How KSQL and Kafka Streams fit into the Kafka ecosystem to help analyze human motion data and detect features for real-time prediction. 4. Demo of a VR application with real-time analytics feedback, which helps train people to work with chemical laboratory equipment.
Bellevue Big Data meetup: Dive Deep into Spark Streaming - Santosh Sahoo
A discussion of the code and architecture for building a real-time streaming application using Spark and Kafka. The demo presents use cases and patterns from different streaming frameworks.
Speaker: Matthias J. Sax, Software Engineer, Confluent
KSQL is the streaming SQL engine for Apache Kafka that allows for continuous data stream processing. While KSQL looks very similar to SQL, it provides quite different semantics. First, KSQL queries can be defined over data streams, not just tables. Second, queries over tables are not snapshot queries but run forever. And third, time is a core concept in KSQL and data stream processing in general. In this talk, we explore the nature of streaming SQL and its temporal semantics as they apply to both streams and tables. We will explain continuous query semantics and the relationship between streams and tables, and demystify the temporal nature of KSQL tables. Furthermore, we dig into filter, aggregation, and join operations over streams and tables, as well as stream-specific operators like windowing. At the end, you will be equipped to query streams and tables using KSQL and understand their temporal relationship to each other.
ksqlDB is a stream processing SQL engine that allows stream processing on top of Apache Kafka. ksqlDB is based on Kafka Streams and provides capabilities for consuming messages from Kafka, analysing them in near real time with a SQL-like language, and producing results back to a Kafka topic. Not a single line of Java code has to be written, and you can reuse your SQL know-how. This lowers the bar for starting with stream processing significantly.
ksqlDB offers powerful stream processing capabilities, such as joins, aggregations, time windows and support for event time. In this talk I will present how ksqlDB integrates with the Kafka ecosystem and demonstrate how easy it is to implement a solution using ksqlDB for the most part. This will be done in a live demo on a fictitious IoT sample.
Data processing platforms architectures with Spark, Mesos, Akka, Cassandra an... - Anton Kirillov
This talk is about architecture designs for data processing platforms based on the SMACK stack, which stands for Spark, Mesos, Akka, Cassandra and Kafka. The main topics of the talk are:
- SMACK stack overview
- storage layer layout
- fixing NoSQL limitations (joins and group by)
- cluster resource management and dynamic allocation
- reliable scheduling and execution at scale
- different options for getting the data into your system
- preparing for failures with proper backup and patching strategies
A Deep Dive into Stateful Stream Processing in Structured Streaming with Tath... - Databricks
Stateful processing is one of the most challenging aspects of distributed, fault-tolerant stream processing. The DataFrame APIs in Structured Streaming make it very easy for the developer to express their stateful logic, either implicitly (streaming aggregations) or explicitly (mapGroupsWithState). However, there are a number of moving parts under the hood which make all the magic possible. In this talk, I am going to dive deeper into how stateful processing works in Structured Streaming.
In particular, I’m going to discuss the following.
• Different stateful operations in Structured Streaming
• How state data is stored in a distributed, fault-tolerant manner using State Stores
• How you can write custom State Stores for saving state to external storage systems.
Arbitrary Stateful Aggregations using Structured Streaming in Apache Spark - Databricks
In this talk, we will introduce some of the new available APIs around stateful aggregation in Structured Streaming, namely flatMapGroupsWithState. We will show how this API can be used to power many complex real-time workflows, including stream-to-stream joins, through live demos using Databricks and Apache Kafka.
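To make the shape of that API concrete, here is a minimal, hypothetical sketch of flatMapGroupsWithState in Scala: it keeps a running per-user event count over a socket source. The source, case class and counting logic are illustrative assumptions, not code from the talk.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.{GroupState, GroupStateTimeout, OutputMode}

object RunningCounts {
  case class UserCount(user: String, count: Long)

  // For each key, fold the new events into the stored count and emit the updated total.
  def update(user: String, events: Iterator[String], state: GroupState[Long]): Iterator[UserCount] = {
    val total = state.getOption.getOrElse(0L) + events.size
    state.update(total)
    Iterator(UserCount(user, total))
  }

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("flatMapGroupsWithState-demo").master("local[*]").getOrCreate()
    import spark.implicits._

    // One user name per line, e.g. fed with `nc -lk 9999`
    val users = spark.readStream.format("socket")
      .option("host", "localhost").option("port", "9999")
      .load().as[String]

    val counts = users
      .groupByKey(identity)
      .flatMapGroupsWithState(OutputMode.Update(), GroupStateTimeout.NoTimeout())(update)

    counts.writeStream.outputMode("update").format("console").start().awaitTermination()
  }
}
```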
Predictive Maintenance at the Dutch Railways with Ivo Everts - Databricks
At the Dutch Railways, we collect tens of billions of sensor measurements from the train fleet and railroad every year. We use these data for predictive maintenance, such as predicting failure of the train axle bearings and detecting air leakage in the train braking pipes. This is extremely useful, as these failures are notoriously difficult to detect during regular maintenance, while they occur frequently and cause severe delays, damage to material and reputation, and costs.
In this talk, we present how we use the compressor logs to detect the occurrence of air leakage in the train braking pipes. Compressor run and idle times are extracted from the logs and modelled by a logistic regressor to discriminate between the two classes in normal operational mode. Air leakage causes the idle times to become shorter as air pressure needs to be levelled more frequently, which can be detected with the logistic model. Then, with a density-based clustering technique, a sequence of such events can be identified while ignoring outliers due to circumstantial phenomena such as power outages. These clusters are associated with levels of severity, based on which a trend analysis can estimate the expected number of days the compressor will keep functioning before breaking down. This method was developed by Wan-Jui Lee of the Dutch Railways and published as “Anomaly Detection and Severity Prediction of Air Leakage in Train Braking Pipes” in the International Journal of Prognostics and Health Management in 2017. We have implemented the methods as described in the paper using Python and Spark in a production environment.
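As a rough illustration of the modelling step only (not the Dutch Railways implementation, and with invented column names and paths), a logistic regression over compressor run/idle times in Spark MLlib could look like this:

```scala
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("air-leakage-sketch").getOrCreate()

// Hypothetical training data: one row per compressor cycle extracted from the logs,
// labelled 1.0 when the cycle belongs to a known air-leakage episode.
val cycles = spark.read.parquet("/data/compressor_cycles")   // columns: run_secs, idle_secs, label

val assembler = new VectorAssembler()
  .setInputCols(Array("run_secs", "idle_secs"))
  .setOutputCol("features")

val model = new LogisticRegression()
  .setLabelCol("label")
  .setFeaturesCol("features")
  .fit(assembler.transform(cycles))

// Score new cycles; unusually short idle times push the leakage probability up.
val scored = model.transform(assembler.transform(spark.read.parquet("/data/new_cycles")))
scored.select("run_secs", "idle_secs", "probability", "prediction").show()
```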
Apache Kafka and KSQL in Action: Let's Build a Streaming Data Pipeline! - confluent
This document provides an overview and introduction to Apache Kafka and KSQL for building streaming data pipelines. It discusses how Kafka is an event streaming platform that can be used for messaging, streaming data, and stream processing. It then introduces KSQL, which is a streaming SQL engine for Apache Kafka that allows users to perform stream processing by writing SQL-like queries against Kafka topics. The document uses diagrams and examples to illustrate how to build a streaming data pipeline using Kafka Connect to ingest data, Kafka to store and transport streams, and KSQL to perform stream processing, enrichment, and analytics.
Deep dive into stateful stream processing in structured streaming by Tathaga... - Databricks
Stateful processing is one of the most challenging aspects of distributed, fault-tolerant stream processing. The DataFrame APIs in Structured Streaming make it very easy for the developer to express their stateful logic, either implicitly (streaming aggregations) or explicitly (mapGroupsWithState). However, there are a number of moving parts under the hood which make all the magic possible. In this talk, I am going to dive deeper into how stateful processing works in Structured Streaming. In particular, I am going to discuss the following. – Different stateful operations in Structured Streaming – How state data is stored in a distributed, fault-tolerant manner using State Stores – How you can write custom State Stores for saving state to external storage systems.
Spark Streaming allows real-time processing of live data streams. It works by dividing live data into micro-batches, exposed through the DStream (discretized stream) abstraction, which are then processed using Spark's batch API. Common sources of data include Kafka, files, and sockets. Transformations like map, reduce, join and window can be applied to DStreams. Stateful operations like updateStateByKey allow updating persistent state. Checkpointing to reliable storage like HDFS provides fault tolerance.
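A minimal Spark Streaming sketch of those ideas (socket source, map/flatMap transformations, updateStateByKey and checkpointing; host, port and the checkpoint path are placeholders):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StatefulWordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("stateful-wordcount").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(10))     // 10-second micro-batches
    ssc.checkpoint("/tmp/wordcount-checkpoint")           // required for stateful operations

    val lines = ssc.socketTextStream("localhost", 9999)
    val pairs = lines.flatMap(_.split("\\s+")).map(word => (word, 1))

    // Keep a running total per word across batches.
    val totals = pairs.updateStateByKey[Int] { (newCounts: Seq[Int], state: Option[Int]) =>
      Some(state.getOrElse(0) + newCounts.sum)
    }

    totals.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
```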
C*ollege Credit: CEP Distributed Processing on Cassandra with Storm - DataStax
Cassandra provides facilities to integrate with Hadoop. This is sufficient for distributed batch processing, but doesn’t address CEP distributed processing. This webinar will demonstrate use of Cassandra in Storm. Storm provides a data flow and processing layer that can be used to integrate Cassandra with other external persistence mechanisms (e.g. Elasticsearch) or calculate dimensional counts for reporting and dashboards. We’ll dive into a sample Storm topology that reads and writes from Cassandra using storm-cassandra bolts.
Apache Spark for Library Developers with William Benton and Erik Erlandson - Databricks
As a developer, data engineer, or data scientist, you’ve seen how Apache Spark is expressive enough to let you solve problems elegantly and efficient enough to let you scale out to handle more data. However, if you’re solving the same problems again and again, you probably want to capture and distribute your solutions so that you can focus on new problems and so other people can reuse and remix them: you want to develop a library that extends Spark.
You faced a learning curve when you first started using Spark, and you’ll face a different learning curve as you start to develop reusable abstractions atop Spark. In this talk, two experienced Spark library developers will give you the background and context you’ll need to turn your code into a library that you can share with the world. We’ll cover:
- Issues to consider when developing parallel algorithms with Spark
- Designing generic, robust functions that operate on data frames and datasets
- Extending data frames with user-defined functions (UDFs) and user-defined aggregates (UDAFs)
- Best practices around caching and broadcasting, and why these are especially important for library developers
- Integrating with ML pipelines
- Exposing key functionality in both Python and Scala
- How to test, build, and publish your library for the community
We’ll back up our advice with concrete examples from real packages built atop Spark. You’ll leave this talk informed and inspired to take your Spark proficiency to the next level and develop and publish an awesome library of your own.
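As a small illustration of the UDF point above, here is a hypothetical helper a library might expose as a reusable column function (the names and behaviour are invented, not taken from the talk):

```scala
import org.apache.spark.sql.{Column, SparkSession}
import org.apache.spark.sql.functions.udf

object TextFunctions {
  // A reusable column function: normalize free-text tags (trim + lower-case, null-safe).
  private val normalizeTag = udf((raw: String) => Option(raw).map(_.trim.toLowerCase).orNull)

  def normalized(c: Column): Column = normalizeTag(c)
}

object Demo extends App {
  val spark = SparkSession.builder.appName("udf-demo").master("local[*]").getOrCreate()
  import spark.implicits._

  val tags = Seq("  Spark ", "KAFKA", null).toDF("tag")
  tags.select(TextFunctions.normalized($"tag").as("tag_clean")).show()

  spark.stop()
}
```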
Performance Analysis and Optimizations for Kafka Streams Applications - Guozhang Wang
High-speed, low-footprint data stream processing is in high demand for Kafka Streams applications. However, how to write an efficient streaming application using the Streams DSL is a question many users have asked, since it requires some deep knowledge of Kafka Streams internals. In this talk, I will show how to analyze your Kafka Streams applications, pinpoint performance bottlenecks and unnecessary storage costs, and optimize your application code accordingly using the Streams DSL.
In addition, I will talk about the new optimization framework that has been developed inside Kafka Streams since the 2.1 release, which replaces the in-place translation of the Streams DSL with a comprehensive process composed of topology compilation and rewriting phases, with a focus on reducing various storage footprints of Streams applications, such as state stores and internal topics.
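For orientation, enabling that optimization pass amounts to one configuration property plus passing the config into StreamsBuilder#build. A minimal sketch (topic names are placeholders; the raw config key is used because the constant name has shifted between Kafka versions):

```scala
import java.util.Properties
import org.apache.kafka.common.serialization.Serdes
import org.apache.kafka.streams.{KafkaStreams, StreamsBuilder, StreamsConfig}

val props = new Properties()
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "orders-app")
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass)
props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass)
// Opt in to the DSL optimization pass (off by default): redundant repartition
// topics can be merged and unnecessary state-store footprints avoided.
props.put("topology.optimization", "all")

val builder = new StreamsBuilder()
builder.stream[String, String]("orders")      // stand-in for a real DSL topology
  .filter((_, value) => value != null)
  .to("orders-clean")

// Passing the properties into build() lets Kafka Streams compile and rewrite
// the logical topology before it is executed.
val streams = new KafkaStreams(builder.build(props), props)
streams.start()
```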
NoLambda: Combining Streaming, Ad-Hoc, Machine Learning and Batch Analysis - Helena Edelson
Slides from my talk with Evan Chan at Strata San Jose: NoLambda: Combining Streaming, Ad-Hoc, Machine Learning and Batch Analysis. Streaming analytics architecture in big data for fast streaming, ad hoc and batch, with Kafka, Spark Streaming, Akka, Mesos, Cassandra and FiloDB. Simplifying to a unified architecture.
The document discusses the SMACK stack 1.1, which includes tools for streaming, Mesos, analytics, Cassandra, and Kafka. It describes how SMACK stack 1.1 adds capabilities for dynamic compute, microservices, orchestration, and microsegmentation. It also provides examples of running Storm on Mesos and using Apache Kafka for decoupling data pipelines.
Meet Up - Spark Stream Processing + Kafka - Knoldus Inc.
This document provides an overview of Spark Streaming concepts including:
- Streams are sequences of data elements made available over time that can be accessed sequentially
- Stream processing involves continuously and concurrently processing live data streams in micro-batches
- Spark Streaming provides scalable and fault-tolerant stream processing using a micro-batch architecture where streams are divided into batches that are processed through transformations on resilient distributed datasets (RDDs)
- Transformations on DStreams apply operations like map, filter, reduce to the underlying RDDs of each batch
Last year, in Apache Spark 2.0, Databricks introduced Structured Streaming, a new stream processing engine built on Spark SQL, which revolutionized how developers could write stream processing applications. Structured Streaming enables users to express their computations the same way they would express a batch query on static data. Developers can express queries using powerful high-level APIs including DataFrames, Datasets and SQL. Then, the Spark SQL engine is capable of converting these batch-like transformations into an incremental execution plan that can process streaming data, while automatically handling late, out-of-order data and ensuring end-to-end exactly-once fault-tolerance guarantees.
Since Spark 2.0, Databricks has been hard at work building first-class integration with Kafka. With this new connectivity, performing complex, low-latency analytics is now as easy as writing a standard SQL query. This functionality, in addition to the existing connectivity of Spark SQL, makes it easy to analyze data using one unified framework. Users can now seamlessly extract insights from data, independent of whether it is coming from messy / unstructured files, a structured / columnar historical data warehouse, or arriving in real-time from Kafka/Kinesis.
In this session, Das will walk through a concrete example where – in less than 10 lines – you read Kafka, parse JSON payload data into separate columns, transform it, enrich it by joining with static data and write it out as a table ready for batch and ad-hoc queries on up-to-the-last-minute data. He’ll use techniques including event-time based aggregations, arbitrary stateful operations, and automatic state management using event-time watermarks.
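A hedged sketch of that style of pipeline, assuming the spark-sql-kafka connector is on the classpath (the topic, JSON schema, paths and reference dataset are invented for illustration and are not the speaker's code):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

val spark = SparkSession.builder.appName("kafka-json-pipeline").getOrCreate()
import spark.implicits._

// Expected JSON payload, e.g. {"deviceId":"d1","temp":21.5,"ts":"2019-05-01T12:00:00Z"}
val schema = new StructType()
  .add("deviceId", StringType).add("temp", DoubleType).add("ts", TimestampType)

val devices = spark.read.json("/data/devices.json")            // static reference data

val readings = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "readings")
  .load()
  .select(from_json($"value".cast("string"), schema).as("r"))  // parse JSON into columns
  .select("r.*")
  .withWatermark("ts", "10 minutes")                           // tolerate 10 min of lateness

val enriched = readings
  .join(devices, "deviceId")                                   // enrich with static data
  .groupBy($"deviceId", window($"ts", "5 minutes"))
  .agg(avg($"temp").as("avg_temp"))

enriched.writeStream
  .outputMode("append")                                        // emit only finalized windows
  .format("parquet")
  .option("path", "/data/device_temps")
  .option("checkpointLocation", "/data/device_temps_chk")
  .start()
```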
Re-envisioning the Lambda Architecture : Web Services & Real-time Analytics ... - Brian O'Neill
This document summarizes Brian O'Neill's talk on re-envisioning the Lambda architecture using Storm and Cassandra for real-time analytics of web services data. The talk covered using polyglot persistence with technologies like Kafka, Cassandra, Elasticsearch and Titan to build scalable data pipelines. It also discussed using Storm and Trident to build real-time analytics topologies to compute metrics like averages across partitions in Cassandra using conditional updates. The talk concluded by proposing embedding the batch computation layer within the stream processing layer to enable code and logic reuse across layers.
Crossing the Streams: Rethinking Stream Processing with Kafka Streams and KSQL - confluent
The document discusses stream processing with Apache Kafka and KSQL. It begins with an overview of Kafka Streams and KSQL, describing them as Java APIs and a SQL-like language for building real-time applications. It then highlights several key features of Kafka Streams, including its ability to elastically scale applications across instances and deploy them anywhere, as well as its support for exactly-once processing, event-time processing, and powerful stream operations. The document concludes by discussing how KSQL lowers the barrier to entry for stream processing and enables different users like developers and analysts to interact with streaming data.
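For flavour, a minimal Kafka Streams word count in the Scala DSL (topic names are placeholders; the serde import path shown is the one used in recent 2.x releases, so it may differ on older versions):

```scala
import java.util.Properties
import org.apache.kafka.streams.{KafkaStreams, StreamsConfig}
import org.apache.kafka.streams.scala.StreamsBuilder
import org.apache.kafka.streams.scala.ImplicitConversions._
import org.apache.kafka.streams.scala.serialization.Serdes._

object WordCountApp extends App {
  val props = new Properties()
  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount-demo")
  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")

  val builder = new StreamsBuilder()
  builder.stream[String, String]("text-input")
    .flatMapValues(_.toLowerCase.split("\\W+"))
    .groupBy((_, word) => word)
    .count()                 // materialized in a local, fault-tolerant state store
    .toStream
    .to("word-counts")

  val streams = new KafkaStreams(builder.build(), props)
  streams.start()
  sys.addShutdownHook(streams.close())
}
```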
The document introduces the Kafka Streams Processor API. It provides more fine-grained control over event processing compared to the Kafka Streams DSL. The Processor API allows access to state stores, record metadata, and scheduled processing via punctuators. It can be used to augment applications built with the Kafka Streams DSL by providing capabilities like random access to state stores and time-based processing.
A Tale of Two APIs: Using Spark Streaming In Production - Lightbend
Fast Data architectures are the answer to the increasing need for the enterprise to process and analyze continuous streams of data to accelerate decision making and become reactive to the particular characteristics of their market.
Apache Spark is a popular framework for data analytics. Its capabilities include SQL-based analytics, dataflow processing, graph analytics and a rich library of built-in machine learning algorithms. These libraries can be combined to address a wide range of requirements for large-scale data analytics.
To address Fast Data flows, Spark offers two APIs: the mature Spark Streaming and its younger sibling, Structured Streaming. In this talk, we are going to introduce both APIs. Using practical examples, you will get a taste of each one and obtain guidance on how to choose the right one for your application.
Spark Streaming Programming Techniques You Should Know with Gerard Maas - Spark Summit
At its heart, Spark Streaming is a scheduling framework, able to efficiently collect and deliver data to Spark for further processing. While the DStream abstraction provides high-level functions to process streams, several operations also grant us access to deeper levels of the API, where we can directly operate on RDDs, transform them to Datasets to make use of that abstraction or store the data for later processing. Between these API layers lie many hooks that we can manipulate to enrich our Spark Streaming jobs. In this presentation we will demonstrate how to tap into the Spark Streaming scheduler to run arbitrary data workloads, we will show practical uses of the forgotten ‘ConstantInputDStream’ and will explain how to combine Spark Streaming with probabilistic data structures to optimize the use of memory in order to improve the resource usage of long-running streaming jobs. Attendees of this session will come out with a richer toolbox of techniques to widen the use of Spark Streaming and improve the robustness of new or existing jobs.
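For instance, the "forgotten" ConstantInputDStream mentioned above can be used to drive periodic work from the streaming scheduler. A purely illustrative sketch (batch interval and the work done per tick are assumptions):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.dstream.ConstantInputDStream

val conf = new SparkConf().setAppName("constant-dstream-demo").setMaster("local[2]")
val ssc = new StreamingContext(conf, Seconds(30))

// The same RDD is re-delivered on every batch interval, which gives us a
// "tick" we can use to schedule arbitrary Spark work (e.g. polling a store).
val tick = ssc.sparkContext.parallelize(Seq(1))
new ConstantInputDStream(ssc, tick).foreachRDD { (rdd, time) =>
  println(s"Batch at $time, partitions = ${rdd.getNumPartitions}")
}

ssc.start()
ssc.awaitTermination()
```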
The document discusses Apache Spark, an open-source cluster computing framework. It describes Spark's core components like Spark SQL, MLlib, and GraphX. It provides examples of using Spark from Python and Scala for word count tasks and joining datasets. It also demonstrates running Spark interactively on a Spark REPL and deploying Spark on Amazon EMR. Key points are that Spark can handle batch, interactive, and real-time processing and integrates with Python, Scala, and Java while programming at a higher level of abstraction than MapReduce.
Big Data Analytics with Scala at SCALA.IO 2013 - Samir Bessalah
This document provides an overview of big data analytics with Scala, including common frameworks and techniques. It discusses Lambda architecture, MapReduce, word counting examples, Scalding for batch and streaming jobs, Apache Storm, Trident, SummingBird for unified batch and streaming, and Apache Spark for fast cluster computing with resilient distributed datasets. It also covers clustering with Mahout, streaming word counting, and analytics platforms that combine batch and stream processing.
Knoldus organized a Meetup on 1 April 2015. In this Meetup, we introduced Spark with Scala. Apache Spark is a fast and general engine for large-scale data processing. Spark is used at a wide range of organizations to process large datasets.
No more struggles with Apache Spark workloads in production - Chetan Khatri
Paris Scala Group Event May 2019, No more struggles with Apache Spark workloads in production.
- Apache Spark
- Primary data structures (RDD, DataSet, DataFrame)
- Pragmatic explanation of executors, cores, containers, stages, jobs and tasks in Spark
- Parallel read from JDBC: challenges and best practices
- Bulk Load API vs JDBC write
- An optimization strategy for joins: SortMergeJoin vs BroadcastHashJoin (see the join sketch after this list)
- Avoiding unnecessary shuffles
- Alternatives to Spark's default sort
- Why dropDuplicates() doesn’t give consistent results, and what the alternative is
- Optimizing the Spark stage generation plan
- Predicate pushdown with partitioning and bucketing
- Why not to use Scala's concurrent Future explicitly!
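On the join point above, the usual way to force a BroadcastHashJoin instead of a SortMergeJoin when one side is small is the broadcast hint. A short sketch (table names and paths are illustrative, not from the talk):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.broadcast

val spark = SparkSession.builder.appName("join-strategies").getOrCreate()

val orders    = spark.read.parquet("/data/orders")      // large fact table
val countries = spark.read.parquet("/data/countries")   // small dimension table

// Without a hint, Spark typically picks SortMergeJoin (shuffle + sort on both sides).
val merged = orders.join(countries, Seq("country_code"))

// The broadcast hint ships the small side to every executor and avoids the shuffle.
val broadcasted = orders.join(broadcast(countries), Seq("country_code"))

broadcasted.explain()   // the physical plan should show BroadcastHashJoin
```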
Spark (Structured) Streaming vs. Kafka Streams - Guido Schmutz
Independent of the source of data, the integration and analysis of event streams becomes more important in a world of sensors, social media streams and the Internet of Things. Events have to be accepted quickly and reliably, and they have to be distributed and analyzed, often with many consumers or systems interested in all or part of the events. In this session we compare two popular streaming analytics solutions: Spark Streaming and Kafka Streams.
Spark is a fast and general engine for large-scale data processing and has been designed to provide a more efficient alternative to Hadoop MapReduce. Spark Streaming brings Spark's language-integrated API to stream processing, letting you write streaming applications the same way you write batch jobs. It supports both Java and Scala.
Kafka Streams is the stream processing solution which is part of Kafka. It is provided as a Java library and by that can be easily integrated with any Java application.
This presentation shows how you can implement stream processing solutions with each of the two frameworks, discusses how they compare and highlights the differences and similarities.
Simplifying Big Data Analytics with Apache Spark - Databricks
Apache Spark is a fast and general-purpose cluster computing system for large-scale data processing. It improves on MapReduce by allowing data to be kept in memory across jobs, enabling faster iterative jobs. Spark consists of a core engine along with libraries for SQL, streaming, machine learning, and graph processing. The document discusses new APIs in Spark including DataFrames, which provide a tabular interface like in R/Python, and data sources, which allow plugging external data systems into Spark. These changes aim to make Spark easier for data scientists to use at scale.
ScalaTo July 2019 - No more struggles with Apache Spark workloads in production - Chetan Khatri
Scala Toronto July 2019 event at 500px.
- Pure functional API integration
- Apache Spark internals tuning
- Performance tuning
- Query execution plan optimisation
- Cats Effect for switching the execution model at runtime
- Discovery / experience with Monix and Scala Future
Apache Spark is a fast and general cluster computing system that improves efficiency through in-memory computing and usability through rich APIs. Spark SQL provides a way to work with structured data and transform RDDs using SQL. It can read data from sources like Parquet and JSON files, Hive, and write query results to Parquet for efficient querying. Spark SQL also allows machine learning pipelines to be built by connecting SQL queries to MLlib algorithms.
Lightning fast analytics with Spark and Cassandra - nickmbailey
Spark is a fast and general engine for large-scale data processing. It provides APIs for Java, Scala, and Python that allow users to load data into a distributed cluster as resilient distributed datasets (RDDs) and then perform operations like map, filter, reduce, join and save. The Cassandra Spark driver allows accessing Cassandra tables as RDDs to perform analytics and run Spark SQL queries across Cassandra data. It provides server-side data selection and mapping of rows to Scala case classes or other objects.
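A short sketch of that RDD-level access with the DataStax Spark Cassandra Connector (the keyspace, table and column names are made up for illustration):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import com.datastax.spark.connector._   // adds cassandraTable / saveToCassandra

case class PageView(userId: String, url: String, visits: Long)

val conf = new SparkConf()
  .setAppName("cassandra-analytics")
  .set("spark.cassandra.connection.host", "127.0.0.1")
val sc = new SparkContext(conf)

// Server-side column selection, with rows mapped straight onto the case class.
val views = sc.cassandraTable[PageView]("analytics", "page_views")
  .select("user_id", "url", "visits")

// A simple aggregation, written back to another Cassandra table.
views.map(v => (v.url, v.visits)).reduceByKey(_ + _)
  .saveToCassandra("analytics", "url_totals", SomeColumns("url", "total_visits"))
```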
This document provides a summary of existing big data tools. It outlines the layered architecture of these tools, including layers for resource management, file systems, data processing frameworks, machine learning libraries, NoSQL databases and more. It also describes several common data processing models (e.g. MapReduce, DAG, graph processing) and specific tools that use each model (e.g. Hadoop for MapReduce, Spark for DAG). Examples of code for PageRank and broadcasting data in the Harp framework are also provided.
Introduction to Big Data technologies and Apache Hadoop (Wprowadzenie do technologii Big Data i Apache Hadoop) - Sages
The document introduces concepts related to Big Data technology including volume, variety, and velocity of data. It discusses Hadoop architecture including HDFS, MapReduce, YARN, and the Hadoop ecosystem. Examples are provided of common Big Data problems and how they can be solved using Hadoop frameworks like Pig, Hive, and Ambari.
Writing Continuous Applications with Structured Streaming Python APIs in Apac... - Databricks
Abstract:
We are amidst the Big Data Zeitgeist era in which data comes at us fast, in myriad forms and formats at intermittent intervals or in a continuous stream, and we need to respond to streaming data immediately. This need has created a notion of writing a streaming application that’s continuous, reacts and interacts with data in real-time. We call this continuous application.
In this talk we will explore the concepts and motivations behind the continuous application, how Structured Streaming Python APIs in Apache Spark 2.x enables writing continuous applications, examine the programming model behind Structured Streaming, and look at the APIs that support them.
Through a short demo and code examples, I will demonstrate how to write an end-to-end Structured Streaming application that reacts and interacts with both real-time and historical data to perform advanced analytics using Spark SQL, DataFrames and Datasets APIs.
You’ll walk away with an understanding of what’s a continuous application, appreciate the easy-to-use Structured Streaming APIs, and why Structured Streaming in Apache Spark 2.x is a step forward in developing new kinds of streaming applications.
Writing Continuous Applications with Structured Streaming PySpark API - Databricks
We're amidst the Big Data Zeitgeist era in which data comes at us fast, in myriad forms and formats at intermittent intervals or in a continuous stream, and we need to respond to streaming data immediately. This need has created a notion of writing a streaming application that’s continuous, reacts and interacts with data in real-time. We call this continuous application.
In this tutorial we'll explore the concepts and motivations behind the continuous application, how Structured Streaming Python APIs in Apache Spark™ enable writing continuous applications, examine the programming model behind Structured Streaming, and look at the APIs that support them.
Through presentation, code examples, and notebooks, I will demonstrate how to write an end-to-end Structured Streaming application that reacts and interacts with both real-time and historical data to perform advanced analytics using Spark SQL, DataFrames and Datasets APIs.
You’ll walk away with an understanding of what’s a continuous application, appreciate the easy-to-use Structured Streaming APIs, and why Structured Streaming in Apache Spark is a step forward in developing new kinds of streaming applications.
This tutorial will be both instructor-led and hands-on interactive session. Instructions in how to get tutorial materials will be covered in class.
WHAT YOU’LL LEARN:
– Understand the concepts and motivations behind Structured Streaming
– How to use DataFrame APIs
– How to use Spark SQL and create tables on streaming data
– How to write a simple end-to-end continuous application
PREREQUISITES
– A fully-charged laptop (8-16GB memory) with Chrome or Firefox
– Pre-register for Databricks Community Edition
Speaker: Jules Damji
Using Spark 1.2 with Java 8 and Cassandra - Denis Dus
A brief introduction to Spark's data processing ideology and a comparison of Java 7 and Java 8 usage with Spark, with examples of loading and processing data with the Spark Cassandra Loader.
Lightning fast analytics with Spark and Cassandra - Rustam Aliyev
Spark is an open-source cluster computing framework that provides a fast and general engine for large-scale data processing. It is up to 100x faster than Hadoop for certain applications. The Cassandra Spark driver allows accessing Cassandra tables as resilient distributed datasets (RDDs) in Spark, enabling analytics like joins, aggregations, and machine learning on Cassandra data. It maps Cassandra data types to Scala types and rows to case classes. This allows querying, transforming, and saving data to and from Cassandra using Spark's APIs and optimizations for performance and fault tolerance.
Our product uses third generation Big Data technologies and Spark Structured Streaming to enable comprehensive Digital Transformation. It provides a unified streaming API that allows for continuous processing, interactive queries, joins with static data, continuous aggregations, stateful operations, and low latency. The presentation introduces Spark Structured Streaming's basic concepts including loading from stream sources like Kafka, writing to sinks, triggers, SQL integration, and mixing streaming with batch processing. It also covers continuous aggregations with windows, stateful operations with checkpointing, reading from and writing to Kafka, and benchmarks compared to other streaming frameworks.
Keeping Spark on Track: Productionizing Spark for ETL - Databricks
ETL is the first phase when building a big data processing platform. Data is available from various sources and formats, and transforming the data into a compact binary format (Parquet, ORC, etc.) allows Apache Spark to process it in the most efficient manner. This talk will discuss common issues and best practices for speeding up your ETL workflows, handling dirty data, and debugging tips for identifying errors.
Speakers: Kyle Pistor & Miklos Christine
This talk was originally presented at Spark Summit East 2017.
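In that spirit, a minimal ETL sketch that tolerates dirty rows and lands data in a compact binary format (the paths and options are placeholders, not the speakers' code):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.current_date

val spark = SparkSession.builder.appName("csv-to-parquet").getOrCreate()

val raw = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .option("mode", "DROPMALFORMED")       // skip unparseable rows instead of failing the job
  .csv("/data/raw/events/*.csv")

raw.withColumn("ingest_date", current_date())
  .write
  .mode("overwrite")
  .partitionBy("ingest_date")            // partition the compact binary output
  .parquet("/data/curated/events")
```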
Codepot - Pig i Hive: szybkie wprowadzenie / Pig and Hive crash course - Sages
A quick introduction to the Pig and Hive technologies from the Hadoop ecosystem. The presentation was given as part of the Codepot workshops on 29.08.2015 by Radosław Stankiewicz and Bartłomiej Tartanus.
Blueprint Series: Banking In The Cloud – Ultra-high Reliability Architectures - Matt Stubbs
This document discusses the challenges of building reliable banking architectures in the cloud and how Starling Bank addressed this issue. It introduces some key concepts like distributed architectures, self-contained systems, and the DITTO architecture which focuses on idempotency and eventual consistency. The benefits of this approach for Starling Bank included safe instance termination, continuous delivery of backend changes up to 5 times a day using chat-ops releases, and the ability to "chaos test" to ensure reliability.
Speed Up Your Apache Cassandra™ Applications: A Practical Guide to Reactive P... - Matt Stubbs
Speaker: Cedrick Lunven, Developer Advocate, DataStax
Speaker Bio: Cedrick is a Developer Advocate at DataStax, where he finds opportunities to share his passions by speaking about developing distributed architectures and implementing reference applications for developers. In 2013, he created FF4j, an open source framework for feature toggling which he still actively maintains. He is now a contributor on the JHipster team.
Talk Synopsis: We have all introduced more or less functional programming and asynchronous operations into our applications in order to speed up and distribute processing (e.g., multi-threading, Future, CompletableFuture, etc.). To build truly non-blocking components, optimize resource usage, and avoid "callback hell", you have to think reactive: everything is an event.
From the frontend UI to database communications, it’s now possible to develop Java applications as fully reactive with frameworks like Spring WebFlux and Reactor. With high throughput and tunable consistency, applications built on top of Apache Cassandra™ fit perfectly within this pattern.
DataStax has been developing Apache Cassandra drivers for years, and in the latest version of the enterprise driver we introduced reactive programming.
During this session we will migrate, step by step, a vanilla CRUD Java service (SpringBoot / SpringMVC) into reactive with both code review and live coding. Bring home a working project!
Filmed at Skills Matter/Code Node London on 9th May 2019 as part of the Big Data LDN Meetup Blueprint Series.
Meetup sponsored by DataStax.
Blueprint Series: Expedia Partner Solutions, Data Platform - Matt Stubbs
Join Anselmo for an engaging overview of the new end-to-end data architecture at Expedia Group, taking a journey through cloud and on-prem data lakes, real-time and batch processes and streamlined access for data producers and consumers. Find out how the new architecture unifies a complex mix of data sources and feeds the data science development cycle. Expedia might appear to be a market-leading travel company – in reality, it’s a highly successful technology and data science company.
Blueprint Series: Architecture Patterns for Implementing Serverless Microserv... - Matt Stubbs
Richard Freeman talks about how the data science team at JustGiving built KOALA, a fully serverless stack for real-time web analytics capture, stream processing, metrics API, and storage service, supporting live data at scale from over 26M users. He discusses recent advances in serverless computing, and how you can implement traditionally container-based microservice patterns using serverless-based architectures instead. Deploying Serverless in your organisation can dramatically increase the delivery speed, productivity and flexibility of the development team, while reducing the overall running, DevOps and maintenance costs.
Big Data LDN 2018: DATABASE FOR THE INSTANT EXPERIENCE - Matt Stubbs
Date: 14th November 2018
Location: Customer Experience Theatre
Time: 12:30 - 13:00
Speaker: David Maitland
Organisation: Redis Labs
About: This session will cover the software-infrastructure-level technology underpinnings required to deliver the instant experience to end users and enterprises alike. Use cases and the value derived by major brands will be shared in this insightful session, based on the world's most loved database, Redis.
Big Data LDN 2018: BIG DATA TOO SLOW? SPRINKLE IN SOME NOSQL - Matt Stubbs
Date: 14th November 2018
Location: Customer Experience Theatre
Time: 11:50 - 12:20
Speaker: Perry Krug
Organisation: Couchbase
About: Who wants to see an ad today for the shoes they bought last week? Everyone knows that customer experience is driven by data: don't waste an opportunity to get them the right data at the right time. Real-time results are critical, but raw speed isn't everything: you need power and flexibility to react to changes on the fly. Come learn how market-leading enterprises are using Couchbase as their speed layer for ingestion, incremental view and presentation layers alongside Kafka, Spark and Hadoop to liberate their data lakes.
Big Data LDN 2018: ENABLING DATA-DRIVEN DECISIONS WITH AUTOMATED INSIGHTS - Matt Stubbs
Date: 13th November 2018
Location: Customer Experience Theatre
Time: 11:50 - 12:20
Speaker: Charlotte Emms
Organisation: seenit
About: How do you get your colleagues interested in the power of data? We take you through Seenit's journey of using Couchbase's NoSQL database to create a regular, fully automated update in an easily digestible format.
Big Data LDN 2018: DATA MANAGEMENT AUTOMATION AND THE INFORMATION SUPPLY CHAI... - Matt Stubbs
Date: 14th November 2018
Location: Governance and MDM Theatre
Time: 10:30 - 11:00
Speaker: Mike Ferguson
Organisation: IBS
About: For most organisations today, data complexity has increased rapidly. In the area of operations, we now have cloud and on-premises OLTP systems with customers, partners and suppliers accessing these applications via APIs and mobile apps. In the area of analytics, we now have data warehouse, data marts, big data Hadoop systems, NoSQL databases, streaming data platforms, cloud storage, cloud data warehouses, and IoT-generated data being created at the edge. Also, the number of data sources is exploding as companies ingest more and more external data such as weather and open government data. Silos have also appeared everywhere as business users are buying in self-service data preparation tools without consideration for how these tools integrate with what IT is using to integrate data. Yet new regulations are demanding that we do a better job of governing data, and business executives are demanding more agility to remain competitive in a digital economy. So how can companies remain agile, reduce cost and reduce the time-to-value when data complexity is on the up?
In this session, Mike will discuss how companies can create an information supply chain to manufacture business-ready data and analytics to reduce time to value and improve agility while also getting data under control.
Date: 13th November 2018
Location: Governance and MDM Theatre
Time: 12:30 - 13:00
Organisation: Immuta
About: Artificial intelligence is rising in importance, but it’s also increasingly at loggerheads with data protection regimes like the GDPR—or so it seems. In this talk, Sophie will explain where and how AI and GDPR conflict with one another, and how to resolve these tensions.
Big Data LDN 2018: REALISING THE PROMISE OF SELF-SERVICE ANALYTICS WITH DATA ...Matt Stubbs
Date: 13th November 2018
Location: Governance and MDM Theatre
Time: 11:50 - 12:20
Speaker: Mark Pritchard
Organisation: Denodo
About: Self-service analytics promises to liberate business users to perform analytics without the assistance of IT, and this in turn promises to free IT to focus on enhancing the infrastructure.
Join us to learn how data virtualization will allow you to gain real-time access to enterprise-wide data and deliver self-service analytics. We will explore how you can seamlessly unify fragmented data; replace your high-maintenance, high-cost data integrations with a single, low-maintenance data virtualization layer; and preserve your data integrity while ensuring data lineage is fully traceable.
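As a hedged illustration of what querying such a virtualization layer can look like from an analyst's tool, the sketch below uses a generic ODBC connection to a hypothetical unified view called customer_360; the DSN, credentials and column names are assumptions, not Denodo specifics.

```python
import pyodbc  # generic ODBC client; the virtualization layer is assumed to expose an ODBC endpoint

# "customer_360" is a hypothetical virtual view that unifies CRM, billing and web data.
conn = pyodbc.connect("DSN=VirtualDataLayer;UID=analyst;PWD=secret")
cursor = conn.cursor()
cursor.execute(
    "SELECT customer_id, lifetime_value, last_web_visit "
    "FROM customer_360 WHERE region = ?",
    "EMEA",
)
for row in cursor.fetchall():
    print(row.customer_id, row.lifetime_value, row.last_web_visit)
```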
Big Data LDN 2018: TURNING MULTIPLE DATA LAKES INTO A UNIFIED ANALYTIC DATA L...Matt Stubbs
Date: 13th November 2018
Location: Governance and MDM Theatre
Time: 11:10 - 11:40
Organisation: TIBCO
About: The big data phenomenon continues to accelerate, resulting in multiple data lakes at most organisations. However, according to Gartner, “Through 2019, 90% of the information assets from big data analytic efforts will be siloed and unusable across multiple business processes.”
Are you ready to unleash this data from these silos and deliver the insights your organisation needs to drive compelling customer experiences, innovative new products and optimized operations? In this session you will learn how to apply data virtualisation to:
• Access, transform and deliver data from across your lakes, clouds and other data sources
• Empower a range of analytic users and tools with all the data they need
• Move rapidly to a modern and flexible data architecture for the long run
In addition, you will see a demonstration of data virtualisation in action.
Big Data LDN 2018: MICROSOFT AZURE AND CLOUDERA – FLEXIBLE CLOUD, WHATEVER TH...Matt Stubbs
Microsoft and Cloudera have partnered to help customers realize insights from big data using cloud services. With Cloudera Enterprise deployed on Azure, customers can visualize data with Power BI and gain insights within minutes. Cloudera provides solutions for data warehousing, data science, and hybrid deployments that fulfill enterprise requirements around flexibility, manageability, and security on Azure.
Big Data LDN 2018: CONSISTENT SECURITY, GOVERNANCE AND FLEXIBILITY FOR ALL WO...Matt Stubbs
The document discusses Cloudera's Shared Data Experience (SDX) which provides consistent security, governance and flexibility for workloads both on-premises and in the cloud. SDX offers a common set of services including security, governance, lifecycle management and data cataloging that can be shared across different workloads regardless of deployment location. This addresses challenges of managing multiple isolated clusters and allows for easier data sharing and reuse across applications. SDX provides a single source of truth for data through its shared services.
Big Data LDN 2018: MICROLISE: USING BIG DATA AND AI IN TRANSPORT AND LOGISTICSMatt Stubbs
Date: 14th November 2018
Location: Data-Driven Ldn Theatre
Time: 11:10 - 11:40
Organisation: Microlise
About: Microlise are a leading provider of technology solutions to the transport and logistics industry worldwide. Discover how, with over 400,000 connected assets generating billions of messages a day, Microlise is evolving its platform to bring real-time analytics to its customers to improve safety, security and efficiency outcomes.
Big Data LDN 2018: EXPERIAN: MAXIMISE EVERY OPPORTUNITY IN THE BIG DATA UNIVERSEMatt Stubbs
Date: 14th November 2018
Location: Data-Driven Ldn Theatre
Time: 10:30 - 11:00
Speaker: Anna Matty
Organisation: Experian
About: Today there is a widespread focus on the 'how' in relation to problem solving. How can we gain better knowledge of what consumers want or need? How can we be more efficient, reduce the cost to serve, or grow the lifetime value of a customer? But how do you move to a place where you are not only solving a problem but redesigning the entire strategic potential of that problem, armed with insight into what the problem really is?
Data and innovation offer huge potential to revolutionise all markets. There is an opportunity to be one step ahead of the need, to redesign journeys and enhance enterprise strategies. To do this you need access not only to the most advanced analytics but also to the best-quality data, in all its variations and types, and then the technology to act on this insight. Data science presents a unique opportunity to uncover growth and accelerate your business through strategic innovation – fast. In this session you will hear more about how today's analytics can move from a single task to an ongoing strategic opportunity: one that helps you move at the speed of the market and maximise every opportunity.
Big Data LDN 2018: A LOOK INSIDE APPLIED MACHINE LEARNINGMatt Stubbs
Date: 13th November 2018
Location: Data-Driven Ldn Theatre
Time: 13:10 - 13:40
Speaker: Brian Goral
Organisation: Cloudera
About: The field of machine learning (ML) ranges from the very practical and pragmatic to the highly theoretical and abstract. This talk describes several of the challenges facing organisations that want to leverage more of their data through ML, including some examples of the applied algorithms that are already delivering value in business contexts.
Big Data LDN 2018: DEUTSCHE BANK: THE PATH TO AUTOMATION IN A HIGHLY REGULATE...Matt Stubbs
Date: 13th November 2018
Location: Data-Driven Ldn Theatre
Time: 12:30 - 13:00
Speaker: Paul Wilkinson, Naveen Gupta
Organisation: Cloudera
About: Investment banks are faced with some of the toughest regulatory requirements in the world. In a market where data is increasing and changing at extraordinary rates, the journey with data governance never ends.
In this session, Deutsche Bank will share their journey with big data and explain some of the processes and techniques they have employed to prepare the bank for today’s challenges and tomorrow’s opportunities.
Brought to you by Naveen Gupta, VP Software Engineering, Deutsche Bank and Paul Wilkinson, Principal Solutions Architect, Cloudera.
Big Data LDN 2018: FROM PROLIFERATION TO PRODUCTIVITY: MACHINE LEARNING DATA ...Matt Stubbs
Date: 14th November 2018
Location: Self-Service Analytics Theatre
Time: 13:50 - 14:20
Speaker: Stephanie McReynolds
Organisation: Alation
About: Raw data is proliferating at an enormous rate. But so are our derived data assets - hundreds of dashboards, thousands of reports, millions of transformed data sets. Self-service analytics have ensured that this noise is making it increasingly hard to understand and trust data for decision-making. This trust gap is holding your organisation back from business outcomes.
European analytics leaders have found a way to close the gap between data and decision-making. From MunichRe to Pfizer and Daimler, analytics teams are adopting data catalogues for thousands of self-service analytics users.
Join us in this session to hear how data catalogues that activate data by incorporating machine learning can:
• Increase analyst productivity by 20-40%
• Boost understanding of the nuances of data
• Establish trust in data-driven decisions with agile stewardship
Big Data LDN 2018: DATA APIS DON’T DISCRIMINATEMatt Stubbs
Date: 13th November 2018
Location: Self-Service Analytics Theatre
Time: 15:50 - 16:20
Speaker: Nishanth Kadiyala
Organisation: Progress
About: The exploding API economy, combined with an advanced analytics market projected to reach $30 billion by 2019, is forcing IT to expose more and more data through APIs. Business analysts, data engineers, and data scientists are still not happy because their needs never really made it into the existing API strategies. This is because most APIs are designed for application integration, but not for the data workers who are looking for APIs that facilitate direct data access to run complex analytics. Data APIs are specifically designed to provide that frictionless data access experience to support analytics across standard interoperable interfaces such as OData (REST) or ODBC/JDBC (SQL). Consider expanding your API strategy to service the developers with open analytics in this $30 billion market.
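A minimal sketch of the kind of frictionless access a Data API can offer, assuming a hypothetical OData (REST) endpoint that exposes an Orders entity set; the URL, entity and field names are illustrative.

```python
import requests

# Hypothetical Data API exposing an "Orders" entity set over OData (REST).
ORDERS_URL = "https://example.com/odata/Orders"

resp = requests.get(
    ORDERS_URL,
    params={
        "$filter": "status eq 'SHIPPED' and amount gt 100",  # standard OData query options
        "$select": "orderId,customerId,amount",
        "$top": "50",
    },
    timeout=30,
)
resp.raise_for_status()
for order in resp.json().get("value", []):   # OData responses wrap rows in a "value" array
    print(order["orderId"], order["amount"])
```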
Decision Trees in Artificial-Intelligence.pdfSaikat Basu
Have you heard of something called 'Decision Tree'? It's a simple concept which you can use in life to make decisions. Believe you me, AI also uses it.
Let's find out how it works in this short presentation. #AI #Decisionmaking #Decisions #Artificialintelligence #Data #Analysis
https://saikatbasu.me
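As a quick illustration of the decision-tree idea described above, here is a minimal scikit-learn sketch on toy data; the feature names and values are invented for illustration and are not from the presentation.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy "everyday decision": features are [hours_free, budget], label 1 = go out, 0 = stay in.
X = [[5, 50], [1, 10], [4, 5], [6, 80], [2, 30], [0, 100]]
y = [1, 0, 0, 1, 0, 0]

# Fit a shallow tree so the learned rules stay readable.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["hours_free", "budget"]))
print(tree.predict([[3, 60]]))  # decide for a new situation
```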
By James Francis, CEO of Paradigm Asset Management
In the landscape of urban safety innovation, Mt. Vernon is emerging as a compelling case study for neighboring Westchester County cities. The municipality’s recently launched Public Safety Camera Program not only represents a significant advancement in community protection but also offers valuable insights for New Rochelle and White Plains as they consider their own safety infrastructure enhancements.
Fast Data Platform Manager, for Managing Running Clusters
Features:
1. One-click component installations
2. Automatic dependency checks
3. One-click access to install logs
4. Real-time cluster visualization
5. Access to consolidated production logs
Benefits:
1. Easy to get started
2. Ready access to all components
3. Increased developer velocity