A tutorial on architecting a next-generation big data platform, by the authors of O'Reilly's Hadoop Application Architectures book. This tutorial discusses how to build a customer 360 (or entity 360) big data application.
Audience: Technical.
Architecting a next-generation data platform (hadooparchbook)
This document discusses a high-level architecture for analyzing taxi trip data in real-time and batch using Apache Hadoop and streaming technologies. The architecture includes ingesting data from multiple sources using Kafka, processing streaming data using stream processing engines, storing data in data stores like HDFS, and enabling real-time and batch querying and analytics. Key considerations discussed are choosing data transport and stream processing technologies, scaling and reliability, and processing both streaming and batch data.
This document discusses a case study on fraud detection using Hadoop. It begins with an overview of fraud detection requirements, including the need for real-time and near real-time processing of large volumes and varieties of data. It then covers considerations for the system architecture, including using HDFS and HBase for storage, Kafka for ingestion, and Spark and Storm for stream and batch processing. Data modeling with HBase and caching options are also discussed.
Building a fraud detection application using the tools in the Hadoop ecosystem. Presentation given by authors of O'Reilly's Hadoop Application Architectures book at Strata + Hadoop World in San Jose, CA 2016.
Hadoop application architectures - using Customer 360 as an example (hadooparchbook)
Hadoop application architectures - using Customer 360 (more generally, Entity 360) as an example. By Ted Malaska, Jonathan Seidman and Mark Grover at Strata + Hadoop World 2016 in NYC.
The document discusses best practices for streaming applications. It covers common streaming use cases like ingestion, transformations, and counting. It also discusses advanced streaming use cases that involve machine learning. The document provides an overview of streaming architectures and compares different streaming engines like Spark Streaming, Flink, Storm, and Kafka Streams. It discusses when to use different storage systems and message brokers like Kafka for ingestion pipelines. The goal is to understand common streaming use cases and their architectures.
What no one tells you about writing a streaming app (hadooparchbook)
This document discusses 5 things that are often not addressed when writing streaming applications:
1. Managing and monitoring long-running streaming jobs can be challenging as frameworks were not originally designed for streaming workloads. Options include using cluster mode to ensure jobs continue if clients disconnect and leveraging monitoring tools to track metrics.
2. Preventing data loss requires different approaches depending on the data source. File and receiver-based sources benefit from checkpointing while Kafka's commit log ensures data is not lost.
3. Spark Streaming is well-suited for tasks involving windowing, aggregations, and machine learning but may not be needed for all streaming use cases.
4. Achieving exactly-once semantics requires techniques
Architecting applications with Hadoop - using clickstream analytics as an example (hadooparchbook)
Delivered by Mark Grover at Northern CO Hadoop User Group:
http://www.meetup.com/Northern-Colorado-Big-Data-Meetup/events/224717963/
Top 5 mistakes when writing Streaming applications (hadooparchbook)
This document discusses 5 common mistakes when writing streaming applications and provides solutions. It covers: 1) Not shutting down apps gracefully by using thread hooks or external markers to stop processing after batches finish. 2) Assuming exactly-once semantics when things can fail at multiple points requiring offsets and idempotent operations. 3) Using streaming for everything when batch processing is better for some goals. 4) Not preventing data loss by enabling checkpointing and write-ahead logs. 5) Not monitoring jobs by using tools like Spark Streaming UI, Graphite and YARN cluster mode for automatic restarts.
Architecting applications with Hadoop - Fraud Detection (hadooparchbook)
This document discusses architectures for fraud detection applications using Hadoop. It provides an overview of requirements for such an application, including the need for real-time alerts and batch processing. It proposes using Kafka for ingestion due to its high throughput and partitioning. HBase and HDFS would be used for storage, with HBase better supporting random access for profiles. The document outlines using Flume, Spark Streaming, and HBase for near real-time processing and alerting on incoming events. Batch processing would use HDFS, Impala, and Spark. Caching profiles in memory is also suggested to improve performance.
Application architectures with Hadoop – Big Data TechCon 2014 (hadooparchbook)
Building applications using Apache Hadoop with a use-case of clickstream analysis. Presented by Mark Grover and Jonathan Seidman at Big Data TechCon, Boston in April 2014
The document discusses using Apache Kafka for event detection pipelines. It describes how Kafka can be used to decouple data pipelines and ingest events from various source systems in real-time. It then provides an example use case of using Kafka, Hadoop, and machine learning for fraud detection in consumer banking, describing the online and offline workflows. Finally, it covers some of the challenges of building such a system and considerations for deploying Kafka.
Solr + Hadoop: Interactive Search for Hadoop (gregchanan)
This document discusses Cloudera Search, which integrates Apache Solr with Cloudera's distribution of Apache Hadoop (CDH) to provide interactive search capabilities. It describes the architecture of Cloudera Search, including components like Solr, SolrCloud, and Morphlines for extraction and transformation. Methods for indexing data in real-time using Flume or batch using MapReduce are presented. The document also covers querying, security features like Kerberos authentication and collection-level authorization using Sentry, and concludes by describing how to obtain Cloudera Search.
This document summarizes a presentation about streaming data processing with Apache Flink. It discusses how Flink enables real-time analysis and continuous applications. Case studies are presented showing how companies like Bouygues Telecom, Zalando, King.com, and Netflix use Flink for applications like monitoring, analytics, and building a stream processing service. Flink performance is discussed through benchmarks, and features like consistent snapshots and dynamic scaling are mentioned.
Stateful Stream Processing at In-Memory Speed (Jamie Grier)
This presentation describes results from a real-world system where I used Apache Flink's stateful stream processing capabilities to eliminate the key-value store bottleneck and the burden of the Lambda Architecture while also improving accuracy and gaining huge improvements in hardware efficiency!
Top 5 mistakes when writing Spark applications (markgrover)
This document discusses 5 common mistakes people make when writing Spark applications.
The first mistake is improperly sizing Spark executors by not considering factors like the number of cores, amount of memory, and overhead needed. The second mistake is running into the 2GB limit on Spark shuffle blocks, which can cause jobs to fail. The third mistake is not addressing data skew during joins and shuffles, which can cause some tasks to be much slower than others. The fourth mistake is poorly managing the DAG by overusing shuffles, not using techniques like ReduceByKey instead of GroupByKey, and not using complex data types. The fifth mistake is classpath conflicts between the versions of libraries used by Spark and those added by the user.
Processing data from social media streams and sensors in real time is becoming increasingly prevalent, and there are plenty of open source solutions to choose from. To help practitioners decide what to use when, we compare three popular Apache projects for stream processing: Apache Storm, Apache Spark and Apache Samza.
This session will go into best practices and detail on how to architect a near real-time application on Hadoop using an end-to-end fraud detection case study as an example. It will discuss various options available for ingest, schema design, processing frameworks, storage handlers and others, available for architecting this fraud detection application and walk through each of the architectural decisions among those choices.
Uber has created a Data Science Workbench to improve the productivity of its data scientists by providing scalable tools, customization, and support. The Workbench provides Jupyter notebooks for interactive coding and visualization, RStudio for rapid prototyping, and Apache Spark for distributed processing. It aims to centralize infrastructure provisioning, leverage Uber's distributed backend, enable knowledge sharing and search, and integrate with Uber's data ecosystem tools. The Workbench manages Docker containers of tools like Jupyter and RStudio running on a Mesos cluster, with files stored in a shared file system. It addresses the problems of wasted time from separate infrastructures and lack of tool standardization across Uber's data science teams.
Open Source Lambda Architecture with Hadoop, Kafka, Samza and Druid (DataWorks Summit)
This document discusses using an open source Lambda architecture with Kafka, Hadoop, Samza, and Druid to handle event data streams. It describes the problem of interactively exploring large volumes of time series data. It outlines how Druid was developed as a fast query layer for Hadoop to enable low-latency queries over aggregated data. The architecture ingests raw data streams in real-time via Kafka and Samza, aggregates the data in Druid, and enables reprocessing via Hadoop for reliability.
Apache Storm vs. Spark Streaming – two Stream Processing Platforms compared (Guido Schmutz)
Storm as well as Spark Streaming are open-source frameworks supporting distributed stream processing. Storm has been developed by Twitter and is a free and open source distributed real-time computation system that can be used with any programming language. It is written primarily in Clojure and supports Java by default. Spark is a fast and general engine for large-scale data processing that has been designed to provide a more efficient alternative to Hadoop MapReduce. Spark Streaming brings Spark's language-integrated API to stream processing, letting you write streaming applications the same way you write batch jobs. It supports both Java and Scala. This presentation shows how you can implement stream processing solutions with the two frameworks, discusses how they compare and highlights the differences and similarities.
How to Boost 100x Performance for Real World Application with Apache Spark-(G... (Spark Summit)
This document summarizes work done by an Intel software team in China to improve Apache Spark performance for real-world applications. It describes benchmarking tools like HiBench and profiling tools like HiMeter that were developed. It also discusses several case studies where the team worked with customers to optimize joins, manage memory usage, and reduce network bandwidth. The overall goal was to help solve common issues around ease of use, reliability, and scalability for Spark in production environments.
It’s 2017, and big data challenges are as real as they get. Our customers have petabytes of data living in elastic and scalable commodity storage systems such as Azure Data Lake Store and Azure Blob storage.
One of the central questions today is finding insights from data in these storage systems in an interactive manner, at a fraction of the cost.
Interactive Query leverages [Hive on LLAP] in Apache Hive 2.1 and brings interactivity to your complex data warehouse-style queries on large datasets stored on commodity cloud storage.
In this session, you will learn how technologies such as Low Latency Analytical Processing [LLAP] and Hive 2.x are making it possible to analyze petabytes of data with sub second latency with common file formats such as csv, json etc. without converting to columnar file formats like ORC/Parquet. We will go deep into LLAP’s performance and architecture benefits and how it compares with Spark and Presto in Azure HDInsight. We also look at how business analysts can use familiar tools such as Microsoft Excel and Power BI, and do interactive query over their data lake without moving data outside the data lake.
Speaker
Ashish Thapliyal, Principal Program Manager, Microsoft Corp
The Next AMPLab: Real-Time, Intelligent, and Secure Computing (Spark Summit)
The document discusses the goals of establishing a new research lab called RISELab to develop a secure real-time decision stack that can enable real-time decisions on live data with strong security guarantees. It outlines some of the challenges in building such a system and presents early work on Drizzle, a low-latency streaming engine, and Opaque, which leverages hardware enclaves to provide encryption and hide data access patterns. The goal is to build an open source platform and tools over the next 5 years to enable applications requiring sophisticated, accurate, and robust real-time decisions on private data.
Big Data Anti-Patterns: Lessons From the Front Line (Douglas Moore)
This document summarizes common anti-patterns in big data projects based on lessons learned from working with over 50 clients. It identifies anti-patterns in hardware and infrastructure, tooling, and big data warehousing. Specifically, it discusses issues with referencing outdated architectures, using tools improperly for the workload, and de-normalizing schemas without understanding the implications. The document provides recommendations to instead co-locate data and computing, choose the right tools for each job, and deploy solutions matching the intended workload.
Design Patterns For Real Time Streaming Data Analytics (DataWorks Summit)
This document provides an overview of design patterns for real-time streaming data analytics. It discusses architectural patterns like real-time streaming and lambda architecture. It also covers functional patterns like stream joins and top N trends. Additionally, it describes data management patterns such as external lookup, responsive shuffling, and handling out-of-sequence events. The presentation includes examples and benefits of these patterns for building scalable streaming applications.
Architecting a Next Generation Data Platform (hadooparchbook)
This document discusses a presentation on architecting Hadoop application architectures for a next generation data platform. It provides an overview of the presentation topics which include a case study on using Hadoop for an Internet of Things and entity 360 application. It introduces the key components of the proposed high level architecture including ingesting streaming and batch data using Kafka and Flume, stream processing with Kafka streams and storage in Hadoop.
Top 5 mistakes when writing Spark applications (hadooparchbook)
This document discusses common mistakes made when writing Spark applications and provides recommendations to address them. It covers issues like having executors that are too small or large, shuffle blocks exceeding size limits, data skew slowing jobs, and excessive stages. The key recommendations are to optimize executor and partition sizes, increase partitions to reduce skew, use techniques like salting to address skew, and favor transformations like ReduceByKey over GroupByKey to minimize shuffles and memory usage.
This document discusses a presentation on fraud detection application architectures using Hadoop. It provides an overview of different fraud use cases and challenges in implementing Hadoop-based solutions. Requirements for the applications include handling high volumes, velocities and varieties of data, generating real-time alerts with low latency, and performing both stream and batch processing. A high-level architecture is proposed using Hadoop, HBase, HDFS, Kafka and Spark to meet the requirements. Storage layer choices and considerations are also discussed.
The document provides an agenda and slides for a presentation on architectural considerations for data warehousing with Hadoop. The presentation discusses typical data warehouse architectures and challenges, how Hadoop can complement existing architectures, and provides an example use case of implementing a data warehouse with Hadoop using the Movielens dataset. Key aspects covered include ingestion of data from various sources using tools like Flume and Sqoop, data modeling and storage formats in Hadoop, processing the data using tools like Hive and Spark, and exporting results to a data warehouse.
Strata San Jose 2017 - Ben Sharma Presentation (Zaloni)
The document discusses creating a modern data architecture using a data lake. It describes Zaloni as a provider of data lake management solutions, including a data lake management and governance platform and self-service data platform. It outlines key features of a data lake such as storing different types of data, creating standardized datasets, and providing shorter time to insights. The document also discusses Zaloni's data lake maturity model and reference architecture.
The document discusses application architectures using Hadoop. It provides an example case study of clickstream analysis of web logs. It discusses challenges of Hadoop implementation and various architectural considerations for data storage, modeling, ingestion, processing and what specific processing needs to happen for the case study. These include sessionization, filtering, and business intelligence/discovery. Storage options, file formats, schema design, and processing engines like MapReduce, Spark and Impala are also covered.
This document discusses application architectures using Hadoop. It provides an example case study of clickstream analysis. It covers challenges of Hadoop implementation and various architectural considerations for data storage and modeling, data ingestion, and data processing. For data processing, it discusses different processing engines like MapReduce, Pig, Hive, Spark and Impala. It also discusses what specific processing needs to be done for the clickstream data like sessionization and filtering.
Impala Architecture Presentation at Toronto Hadoop User Group, in January 2014 by Mark Grover.
Event details:
http://www.meetup.com/TorontoHUG/events/150328602/
Top 5 mistakes when writing Spark applications (hadooparchbook)
This document discusses common mistakes people make when writing Spark applications and provides recommendations to address them. It covers issues related to executor configuration, application failures due to shuffle block sizes exceeding limits, slow jobs caused by data skew, and managing the DAG to avoid excessive shuffles and stages. Recommendations include using smaller executors, increasing the number of partitions, addressing skew through techniques like salting, and preferring ReduceByKey over GroupByKey and TreeReduce over Reduce to improve performance and resource usage.
The document discusses architectural considerations for Hadoop applications based on a case study of clickstream analysis. It covers requirements for data ingestion, storage, processing, and orchestration. For data storage, it recommends storing raw clickstream data in HDFS using the Avro file format with Snappy compression. For processed data, it recommends using the Parquet columnar storage format to enable efficient analytical queries. The document also discusses partitioning strategies and HDFS directory layout design.
Hadoop Application Architectures tutorial at Big DataService 2015 (hadooparchbook)
This document outlines a presentation on architectural considerations for Hadoop applications. It introduces the presenters who are experts from Cloudera and contributors to Apache Hadoop projects. It then discusses a case study on clickstream analysis, how this was challenging before Hadoop due to data storage limitations, and how Hadoop provides a better solution by enabling active archiving of large volumes and varieties of data at scale. Finally, it covers some of the challenges in implementing Hadoop, such as choices around storage managers, data modeling and file formats, data movement workflows, metadata management, and data access and processing frameworks.
This document discusses how Hadoop can be used in data warehousing and analytics. It begins with an overview of data warehousing and analytical databases. It then describes how organizations traditionally separate transactional and analytical systems and use extract, transform, load processes to move data between them. The document proposes using Hadoop as an alternative to traditional data warehousing architectures by using it for extraction, transformation, loading, and even serving analytical queries.
Best Practices for the Hadoop Data Warehouse: EDW 101 for Hadoop Professionals (Cloudera, Inc.)
The enormous legacy of EDW experience and best practices can be adapted to the unique capabilities of the Hadoop environment. In this webinar, in a point-counterpoint format, Dr. Kimball will describe standard data warehouse best practices including the identification of dimensions and facts, managing primary keys, and handling slowly changing dimensions (SCDs) and conformed dimensions. Eli Collins, Chief Technologist at Cloudera, will describe how each of these practices actually can be implemented in Hadoop.
Organizations across diverse industries are in pursuit of Customer 360, by integrating customer information across multiple channels, systems, devices and products. Having a 360-degree view of the customer enables enterprises to improve the interaction experience, drive customer loyalty and improve retention. However delivering a true Customer 360 can be very challenging.
Big data architectures and the data lake (James Serra)
The document provides an overview of big data architectures and the data lake concept. It discusses why organizations are adopting data lakes to handle increasing data volumes and varieties. The key aspects covered include:
- Defining top-down and bottom-up approaches to data management
- Explaining what a data lake is and how Hadoop can function as the data lake
- Describing how a modern data warehouse combines features of a traditional data warehouse and data lake
- Discussing how federated querying allows data to be accessed across multiple sources
- Highlighting benefits of implementing big data solutions in the cloud
- Comparing shared-nothing, massively parallel processing (MPP) architectures to symmetric multi-processing (SMP) architectures
This document discusses how Hadoop can be used to power a data lake and enhance traditional data warehousing approaches. It proposes a holistic data strategy with multiple layers: a landing area to store raw source data, a data lake to enrich and integrate data with light governance, a data science workspace for experimenting with new data, and a big data warehouse at the top level with fully governed and trusted data. Hadoop provides distributed storage and processing capabilities to support these layers. The document advocates a "polyglot" approach, using the right tools like Hadoop, relational databases, and cloud platforms depending on the specific workload and data type.
Turning Data into Business Value with a Modern Data Platform (Cloudera, Inc.)
The document discusses how data has become a strategic asset for businesses and how a modern data platform can help organizations drive customer insights, improve products and services, lower business risks, and modernize IT. It provides examples of companies using analytics to personalize customer solutions, detect sepsis early to save lives, and protect the global finance system. The document also outlines the evolution of Hadoop platforms and how Cloudera Enterprise provides a common workload pattern to store, process, and analyze data across different workloads and databases in a fast, easy, and secure manner.
1. HCFS stands for Hadoop Compatible File System. It allows Hadoop to access cloud storage systems like AWS S3, Azure Blob Storage, and Ceph.
2. AWS S3 supports three implementations - s3:, s3n:, and s3a:. S3 cannot replace HDFS due to consistency issues but is commonly used with EMR.
3. Azure Blob Storage uses the wasbs:// scheme and hadoop-azure.jar. It supports multiple accounts and page/block blobs but lacks append and permissions.
4. CephFS can be used with Hadoop but has limited official support to Hadoop 1.1.x due to JNI issues with later versions
Architecting a Next Gen Data Platform – Strata London 2018 (Jonathan Seidman)
This document summarizes a presentation on architecting data platforms given at the Strata Data Conference in London 2018. The presentation discusses building a customer 360 view using streaming vehicle and other IoT data. It outlines the requirements to support real-time querying, batch processing, and analytics. The high-level architecture shown includes data sources, streaming pipelines, storage systems, and processing engines. Key challenges discussed are reliably ingesting multiple data types and scaling to support various workloads and access patterns.
Architecting a Next Generation Data Platform – Strata Singapore 2017 (Jonathan Seidman)
This document discusses the high-level architecture for a data platform to support a customer 360 view using data from connected vehicles (taxis). The architecture includes data sources, streaming data ingestion using Kafka, schema validation, stream processing for transformations and routing, and storage for analytics, search and long-term retention. The presentation covers design considerations for reliability, scalability and processing of both streaming and batch data to meet requirements like querying, visualization, and batch processing of historical data.
Architecting a Next Gen Data Platform – Strata New York 2018 (Jonathan Seidman)
Using Customer 360 and the internet of things as examples, this tutorial explains how to architect a modern, real-time big data platform leveraging recent advancements in the open source software world, including components like Kafka, Flink, Kudu, Spark Streaming, and Spark SQL and modern storage engines to enable new forms of data processing and analytics.
Lessons from Building Large-Scale, Multi-Cloud, SaaS Software at Databricks (Databricks)
The cloud has become one of the most attractive ways for enterprises to purchase software, but it requires building products in a very different way from traditional software
Databricks Meetup @ Los Angeles Apache Spark User Group (Paco Nathan)
This document summarizes a presentation on Apache Spark and Spark Streaming. It provides an overview of Spark, describing it as an in-memory cluster computing framework. It then discusses Spark Streaming, explaining that it runs streaming computations as small batch jobs to provide low latency processing. Several use cases for Spark Streaming are presented, including from companies like Stratio, Pearson, Ooyala, and Sharethrough. The presentation concludes with a demonstration of Python Spark Streaming code.
First in Class: Optimizing the Data Lake for Tighter Integration (Inside Analysis)
The Briefing Room with Dr. Robin Bloor and Teradata RainStor
Live Webcast October 13, 2015
Watch the archive: https://bloorgroup.webex.com/bloorgroup/lsr.php?RCID=012bb2c290097165911872b1f241531d
Hadoop data lakes are emerging as peers to corporate data warehouses. However, successful data management solutions require a fusion of all relevant data, new and old, which has proven challenging for many companies. With a data lake that’s been optimized for fast queries, solid governance and lifecycle management, users can take data management to a whole new level.
Register for this episode of The Briefing Room to learn from veteran Analyst Dr. Robin Bloor as he discusses the relevance of data lakes in today’s information landscape. He’ll be briefed by Mark Cusack of Teradata, who will explain how his company’s archiving solution has developed into a storage point for raw data. He’ll show how the proven compression, scalability and governance of Teradata RainStor combined with Hadoop can enable an optimized data lake that serves as both reservoir for historical data and as a "system of record” for the enterprise.
Visit InsideAnalysis.com for more information.
Adding structure to your streaming pipelines: moving from Spark streaming to ... (DataWorks Summit)
How do you go from a strictly typed object-based streaming pipeline with simple operations to a structured streaming pipeline with higher order complex relational operations? This is what the Data Engineering team did at GoPro to scale up the development of streaming pipelines for the rapidly growing number of devices and applications.
When big data frameworks such as Hadoop first came to exist, developers were happy because we could finally process large amounts of data without writing complex multi-threaded code or worse yet writing complicated distributed code. Unfortunately, only very simple operations were available such as map and reduce. Almost immediately, higher level operations were desired similar to relational operations. And so Hive and dozens (hundreds?) of SQL-based big data tools became available for more developer-efficient batch processing of massive amounts of data.
In recent years, big data has moved from batch processing to stream-based processing since no one wants to wait hours or days to gain insights. Dozens of stream processing frameworks exist today and the same trend that occurred in the batch-based big data processing realm has taken place in the streaming world, so that nearly every streaming framework now supports higher level relational operations.
In this talk, we will discuss in a very hands-on manner how the streaming data pipelines for GoPro devices and apps have moved from the original Spark streaming with its simple RDD-based operations in Spark 1.x to Spark's structured streaming with its higher level relational operations in Spark 2.x. We will talk about the differences, advantages, and necessary pain points that must be addressed in order to scale relational-based streaming pipelines for massive IoT streams. We will also talk about moving from “hand built” Hadoop/Spark clusters running in the cloud to using a Spark-based cloud service.
Speakers: David Winters, Big Data Architect, GoPro, and Hao Zou, Senior Software Engineer, GoPro
In Memory Data Pipeline And Warehouse At Scale - BerlinBuzzwords 2015 (Iulia Emanuela Iancuta)
The document describes an in-memory data pipeline and warehouse using Spark, Spark SQL, Tachyon and Parquet. It involves ingesting financial transaction data from S3, transforming the data through cleaning and joining steps, and building a data warehouse using Spark SQL and Parquet for querying. Key aspects covered include distributing metadata lookups, balancing data partitions, broadcasting joins to avoid skew, caching data in Tachyon and Jaws for a RESTful interface to Spark SQL.
Five Fabulous Sinks for Your Kafka Data. #3 will surprise you! (Rachel Pedres...) (confluent)
Apache Kafka has become the modern central point for a fast and scalable streaming platform. Now, thanks to the open source explosion over the last decade, there are numerous data stores available as sinks for Kafka-brokered data, from search to document stores, columnar DBs, time series DBs and more. While many claim they are the swiss army knife, in reality each is designed for specific types of data and analytics approaches. In this talk, we will cover the taxonomy of various data sinks, delve into each category's pros, cons and ideal use cases, so you can select the right ones and tie them together with Kafka into a well-considered architecture.
Hybrid Transactional/Analytics Processing with Spark and IMDGs (Ali Hodroj)
This document discusses hybrid transactional/analytical processing (HTAP) with Apache Spark and in-memory data grids. It begins by introducing the speaker and GigaSpaces. It then discusses how modern applications require both online transaction processing and real-time operational intelligence. The document presents examples from retail and IoT and the goals of minimizing latency while maximizing data analytics locality. It provides an overview of in-memory computing options and describes how GigaSpaces uses an in-memory data grid combined with Spark to achieve HTAP. The document includes deployment diagrams and discusses data grid RDDs and pushing predicates to the data grid. It describes how this was productized as InsightEdge and provides additional innovations and reference architectures.
First presentation for Savi's sponsorship of the Washington DC Spark Interactive. Discusses tips and lessons learned using Spark Streaming (24x7) to ingest and analyze Industrial Internet of Things (IIoT) data as part of a Lambda Architecture
A Practical Guide to Selecting a Stream Processing Technology (confluent)
Presented by Michael Noll, Product Manager, Confluent.
Why are there so many stream processing frameworks that each define their own terminology? Are the components of each comparable? Why do you need to know about spouts or DStreams just to process a simple sequence of records? Depending on your application’s requirements, you may not need a full framework at all.
Processing and understanding your data to create business value is the ultimate goal of a stream data platform. In this talk we will survey the stream processing landscape, the dimensions along which to evaluate stream processing technologies, and how they integrate with Apache Kafka. Particularly, we will learn how Kafka Streams, the built-in stream processing engine of Apache Kafka, compares to other stream processing systems that require a separate processing infrastructure.
Pivotal - Advanced Analytics for Telecommunications (Hortonworks)
Innovative mobile operators need to mine the vast troves of unstructured data now available to them to help develop compelling customer experiences and uncover new revenue opportunities. In this webinar, you’ll learn how HDB’s in-database analytics enable advanced use cases in network operations, customer care, and marketing for better customer experience. Join us, and get started on your advanced analytics journey today!
Off-Label Data Mesh: A Prescription for Healthier Data (HostedbyConfluent)
"Data mesh is a relatively recent architectural innovation, espoused as one of the best ways to fix analytic data. We renegotiate aged social conventions by focusing on treating data as a product, with a clearly defined data product owner, akin to that of any other product. In addition, we focus on building out a self-service platform with integrated governance, letting consumers safely access and use the data they need to solve their business problems.
Data mesh is prescribed as a solution for _analytical data_, so that conventionally analytical results (think weekly sales or monthly revenue reports) can be more accurately and predictably computed. But what about non-analytical business operations? Would they not also benefit from data products backed by self-service capabilities and dedicated owners? If you've ever provided a customer with an analytical report that differed from their operational conclusions, then this talk is for you.
Adam discusses the resounding successes he has seen from applying data mesh _off-label_ to both analytical and operational domains. The key? Event streams. Well-defined, incrementally updating data products that can power both real-time and batch-based applications, providing a single source of data for a wide variety of application and analytical use cases. Adam digs into the common areas of success seen across numerous clients and customers and provides you with a set of practical guidelines for implementing your own minimally viable data mesh.
Finally, Adam covers the main social and technical hurdles that you'll encounter as you implement your own data mesh. Learn about important data use cases, data domain modeling techniques, self-service platforms, and building an iteratively successful data mesh."
Apache Spark 2.4 comes packed with a lot of new functionalities and improvements, including the new barrier execution mode, flexible streaming sink, the native AVRO data source, PySpark’s eager evaluation mode, Kubernetes support, higher-order functions, Scala 2.12 support, and more.
4. Questions? tiny.cloudera.com/app-arch-questions
About the presenters
▪ Technical Group Architect at Blizzard Entertainment
▪ Previously Principal Solutions Architect at Cloudera, lead architect at FINRA
▪ Contributor to Apache Hadoop, HBase, Flume, Avro, Pig, Spark, YARN, Sqoop, Kudu, Kafka
Ted Malaska
5. Questions? tiny.cloudera.com/app-arch-questions
About the presenters
▪ Partner Software Engineer at Cloudera
▪ Contributor to Apache Sqoop.
▪ Previously Technical Lead on the big data team at Orbitz, co-founder of the Chicago Hadoop User Group and Chicago Big Data
Jonathan Seidman
6. Questions? tiny.cloudera.com/app-arch-questions
About the presenters
▪ Software Engineer on Spark at Cloudera
▪ Committer on Apache Bigtop, PMC member on Apache Sentry (incubating)
▪ Contributor to Apache Spark, Hadoop, Hive, Sqoop, Pig, Flume
Mark Grover
20. Questions? tiny.cloudera.com/app-arch-questions
Requirements
▪ To support all this, we need:
- Reliable ingestion of streaming and batch data.
- Ability to perform transformations on streaming data in flight.
- Ability to perform sophisticated processing of historical data.
29. Questions? tiny.cloudera.com/app-arch-questions
High level architecture
(Slide diagram) Layers: Source → Transport → Stream Processing → Storage → Access. Source: a custom producer or a processing & ingestion engine. Storage: nested tables, indexed cube, relational tables, entity time series lookup. Access: batch processing, SQL, NRT REST, NRT dashboard.
30. Questions? tiny.cloudera.com/app-arch-questions
Buffering Data – Flume vs. Kafka
▪ Flume – well integrated with Hadoop.
- Great choice when ingesting data into HDFS.
- Can support simple transformations.
- Less coding.
▪ But…
- Interface between Kafka and the streaming layer is already well defined.
- Transformations are done in the stream processing layer.
- We need a more general purpose system at this layer.
33. Questions? tiny.cloudera.com/app-arch-questions
Topics and Partitions
▪ Messages are organized into topics, and each topic is split into partitions.
- Each partition is an immutable, time-sequenced log of messages on disk.
- Note that time ordering is guaranteed within, but not across, partitions.
(Slide diagram: a data source writes to a topic with three partitions; each partition is an ordered log of messages at offsets 0 through 8.)
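To make this concrete, here is a minimal Scala producer sketch (not from the deck; the broker address, topic name, and vehicle-id key are illustrative assumptions). Keying each record by entity id sends all of that entity's events to the same partition, which is what preserves per-entity time ordering:

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object KeyedProducerSketch {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "broker1:9092") // illustrative broker address
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    val producer = new KafkaProducer[String, String](props)
    // All events keyed by "vehicle-42" land in the same partition, in send order.
    val record = new ProducerRecord[String, String]("vehicle-events", "vehicle-42", "{\"fare\":12.5}")
    producer.send(record)
    producer.close()
  }
}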
52. Questions? tiny.cloudera.com/app-arch-questions
Guarding Against Message Loss
▪ Producer – What happens if the producer loses connection to Kafka and the buffer overflows?
- Consider a producer side buffer (e.g. Flume).
▪ Source – What happens if events are lost before getting sent to producer?
- Once again use some kind of buffer to provide sufficient retention of data.
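One hedged way to harden the producer side before adding an external buffer is to lean on the standard Kafka producer settings; the values below are illustrative, not recommendations from the deck:

import java.util.Properties

object DurableProducerProps {
  // Producer settings that favor durability over latency.
  def apply(): Properties = {
    val props = new Properties()
    props.put("bootstrap.servers", "broker1:9092,broker2:9092")
    props.put("acks", "all")   // wait for all in-sync replicas before acknowledging
    props.put("retries", "10") // ride out transient broker or connection failures
    props.put("max.in.flight.requests.per.connection", "1") // keep ordering while retrying
    props.put("buffer.memory", String.valueOf(64L * 1024 * 1024)) // more local buffering while disconnected
    props
  }
}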
57. Questions? tiny.cloudera.com/app-arch-questions
What do we mean by streaming?
Real-time: constant low milliseconds and under. Near real-time: low milliseconds to seconds, with a delay in case of failures. Batch: tens of seconds or more, re-run in case of failures.
58. Questions? tiny.cloudera.com/app-arch-questions
What do we mean by streaming?
Real-time: constant low milliseconds and under. Near real-time: low milliseconds to seconds, with a delay in case of failures. Batch: tens of seconds or more, re-run in case of failures.
59. Questions? tiny.cloudera.com/app-arch-questions
But, there’s no free lunch
Real-time: constant low milliseconds and under. Near real-time: low milliseconds to seconds, with a delay in case of failures. Batch: tens of seconds or more, re-run in case of failures.
Real-time sits at the “difficult” architectures, lower latency end of the spectrum; near real-time and batch sit at the “easier” architectures, higher latency end.
66. Questions? tiny.cloudera.com/app-arch-questions
#1 – Simple Ingestion
1. Zero transformation
- No transformation, plain ingest
- Keep the original format – SequenceFile, Text, etc.
- Allows storing data that may have errors in the schema
2. Format transformation
- Simply change the format of the field
- To a structured format, say, Avro, for example
- Can do schema validation
3. Atomic transformation
- Mask a credit card number, for example (see the sketch below)
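A minimal Scala sketch of such an atomic transformation; the JSON layout and the keep-the-last-four masking rule are assumptions, not the deck's implementation:

object MaskingSketch {
  // Matches a 16-digit card number and keeps only the last four digits.
  private val CardPattern = "\\b(\\d{12})(\\d{4})\\b".r

  def maskCard(raw: String): String =
    CardPattern.replaceAllIn(raw, m => "*" * 12 + m.group(2))

  def main(args: Array[String]): Unit = {
    val event = """{"rider":"a1","card":"4111111111111111","fare":12.5}"""
    println(maskCard(event)) // card becomes ************1111
  }
}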
68. Questions? tiny.cloudera.com/app-arch-questions
Where to store the context?
1. Locally Broadcast Cached Dim Data (see the sketch after this list)
- Local to Process (On Heap, Off Heap)
- Local to Node (Off Process)
2. Partitioned Cache
- Shuffle to move new data to partitioned cache
3. External Fetch Data (e.g. HBase, Memcached)
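A small Spark sketch of option 1, broadcasting a dimension map once per executor so enrichment happens map-side with no shuffle; the vendor lookup and event fields are illustrative assumptions:

import org.apache.spark.sql.SparkSession

object BroadcastDimSketch {
  case class Event(vehicleId: String, fare: Double)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("broadcast-dim").getOrCreate()
    val sc = spark.sparkContext

    // Small dimension table (vehicle id -> vendor), assumed to fit in memory.
    val vendorById = Map("vehicle-42" -> "Vendor A", "vehicle-43" -> "Vendor B")
    val vendorBc = sc.broadcast(vendorById) // shipped once per executor, cached locally

    val events = sc.parallelize(Seq(Event("vehicle-42", 12.5), Event("vehicle-43", 7.0)))

    // Map-side enrichment: each task reads its local copy of the broadcast.
    val enriched = events.map(e => (e.vehicleId, vendorBc.value.getOrElse(e.vehicleId, "unknown"), e.fare))
    enriched.collect().foreach(println)
    spark.stop()
  }
}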
84. Questions? tiny.cloudera.com/app-arch-questions
Flink
▪ True “streaming” system, but not as feature rich as Spark
▪ Much better event time handling
▪ Good built-in backpressure support
▪ Allows stateful transformations
▪ Lower Latency
- No Micro Batching
- Asynchronous Barrier Snapshotting (ABS)
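As a rough illustration of those stateful transformations, here is a minimal Flink Scala sketch (not from the deck) that keeps a keyed running total per vehicle; the socket source and field layout are assumptions standing in for Kafka:

import org.apache.flink.streaming.api.scala._

object FlinkStatefulSketch {
  case class TripEvent(vehicleId: String, fare: Double)

  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    val events: DataStream[TripEvent] = env
      .socketTextStream("localhost", 9999) // stand-in source; Kafka would be used in practice
      .map { line =>
        val parts = line.split(",")
        TripEvent(parts(0), parts(1).toDouble)
      }

    // Keyed state: Flink keeps one running total per vehicle id.
    val runningTotals = events
      .keyBy(_.vehicleId)
      .mapWithState[(String, Double), Double] { (event, state) =>
        val total = state.getOrElse(0.0) + event.fare
        ((event.vehicleId, total), Some(total))
      }

    runningTotals.print()
    env.execute("stateful-running-totals")
  }
}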
91. Questions? tiny.cloudera.com/app-arch-questions
Flume
▪ Well integrated with the Hadoop ecosystem
▪ Allows interceptors (for simple transformations)
▪ Supports buffering
- Memory
- File
- Kafka
▪ But no real fault-tolerance
▪ No state management
95. Questions? tiny.cloudera.com/app-arch-questions
Spark Streaming
▪ We chose Spark Streaming because:
- Same execution engine for batch and streaming
- Similar code for batch and streaming
- Support for security, kafka integration
- Thriving community
- We don’t have low millisecond requirements
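A minimal Spark Streaming sketch in Scala of the Kafka integration and batch-like API mentioned above; the broker, group id, topic, and 5-second batch interval are assumptions:

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}

object StreamingIngestSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("taxi-stream")
    val ssc = new StreamingContext(conf, Seconds(5)) // micro-batch interval

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "broker1:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "taxi-ingest",
      "auto.offset.reset" -> "latest"
    )

    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](Seq("vehicle-events"), kafkaParams)
    )

    // Same API style as batch Spark: count events per micro-batch as a smoke test.
    stream.map(_.value).count().print()

    ssc.start()
    ssc.awaitTermination()
  }
}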
96. Questions? tiny.cloudera.com/app-arch-questions
High level architecture
(Slide diagram) Layers: Source → Transport → Stream Processing → Storage → Access. Source: custom producer. Storage: nested tables, indexed cube, relational tables, entity time series lookup. Access: batch processing, SQL, NRT REST, NRT dashboard.
100. Questions? tiny.cloudera.com/app-arch-questions
Structured Landing Zones
▪ Hive relational model (Kudu/HDFS) – traditional SQL
▪ Hive nested model (HDFS) – optimized for nested structures like JSON
▪ Aggregations (Kudu) – optimized for storing and mutating aggregates
▪ HBase entity time series – optimized for entity 360 and time-based access
▪ Solr – optimized for faceted charts and reverse-index lookups
102. Questions? tiny.cloudera.com/app-arch-questions
Kudu Data Models
▪ Entity Summary Tables
- Quick update and access of aggregate of Entity Stats
▪ Event Tables
- Number of Partitioning strategies
- Partition by Entity
- Partition by Hash on time
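A sketch of creating such an entity summary table with the Kudu Java client from Scala; the master address, table name, columns, and bucket count are assumptions rather than the deck's actual schema:

import org.apache.kudu.client.{CreateTableOptions, KuduClient}
import org.apache.kudu.{ColumnSchema, Schema, Type}
import scala.collection.JavaConverters._

object KuduEntitySummarySketch {
  def main(args: Array[String]): Unit = {
    val client = new KuduClient.KuduClientBuilder("kudu-master:7051").build()

    val columns = List(
      new ColumnSchema.ColumnSchemaBuilder("vehicle_id", Type.STRING).key(true).build(),
      new ColumnSchema.ColumnSchemaBuilder("trip_count", Type.INT64).build(),
      new ColumnSchema.ColumnSchemaBuilder("total_fare", Type.DOUBLE).build()
    ).asJava

    // Hash partitioning by entity spreads updates across 4 tablets.
    val options = new CreateTableOptions().addHashPartitions(List("vehicle_id").asJava, 4)

    client.createTable("vehicle_summary", new Schema(columns), options)
    client.shutdown()
  }
}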
106. Questions? tiny.cloudera.com/app-arch-questions
View Strategies
Models: Hive relational model and Hive nested model.
▪ Hive materialized table views – use in cases where the view requires a join that is done through a shuffle
▪ Hive normal views – use only for tables that filter records/columns, or for masking fields
107. Questions? tiny.cloudera.com/app-arch-questions
Nested
▪ Less Space than Denormalization
▪ Still have tables but the cost of joins is all but gone
▪ Also great for cartesian joins
- N x M vs N + M
▪ Not really supported yet with Kudu or HBase with SQL
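A small Spark SQL sketch in Scala of building such a nested model by pre-joining trips under their entity once, so later reads avoid the shuffle; column names and the output path are illustrative assumptions:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{collect_list, struct}

object NestedModelSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("nested-model").getOrCreate()
    import spark.implicits._

    // Flat, relational-style trip records.
    val trips = Seq(
      ("vehicle-42", "2017-06-01T09:00", 12.5),
      ("vehicle-42", "2017-06-01T11:30", 7.0),
      ("vehicle-43", "2017-06-01T10:15", 22.0)
    ).toDF("vehicle_id", "pickup_ts", "fare")

    // One row per entity, with its trips nested as an array of structs.
    val nested = trips
      .groupBy("vehicle_id")
      .agg(collect_list(struct($"pickup_ts", $"fare")).as("trips"))

    nested.write.mode("overwrite").parquet("/data/taxi/trips_nested") // hypothetical path
    spark.stop()
  }
}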
119. Questions? tiny.cloudera.com/app-arch-questions
Solr: Data Model
- Think of it like a cube on an object type
- In our case a taxi trip
- Allows for rollups and aggregations from object’s point of view
- Think of objects as immutable
- Try to find time based events
- May design more than one object type
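As a hedged illustration of rolling up trips from the object's point of view, here is a small SolrJ facet query in Scala; the collection name and field names are assumptions:

import org.apache.solr.client.solrj.SolrQuery
import org.apache.solr.client.solrj.impl.HttpSolrClient
import scala.collection.JavaConverters._

object SolrTripFacetSketch {
  def main(args: Array[String]): Unit = {
    val client = new HttpSolrClient.Builder("http://localhost:8983/solr/taxi_trips").build()

    val query = new SolrQuery("*:*")
    query.setRows(0)                                    // only the facet counts, no documents
    query.addFacetField("payment_type")                 // roll up trips by payment type
    query.addFilterQuery("pickup_ts:[NOW-7DAY TO NOW]") // time-based slice

    val response = client.query(query)
    response.getFacetField("payment_type").getValues.asScala.foreach { count =>
      println(s"${count.getName}: ${count.getCount}")
    }
    client.close()
  }
}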
122. Questions? tiny.cloudera.com/app-arch-questions
Multi Value Aggregations
▪ Ordered Merge Join
- Think like a zipper
- Scans
- No Lookups
▪ Top N from both sides
- Leaving the rest to other
▪ Indexes distributed
▪ No need to read document data
Example posting lists from the slide diagram (doc ids per facet value, each kept in sorted order): Cash: 1 4 5 7 8 9 10 14 16; Credit: 2 3 6 11 12 13 15 17 18; Vendor A: 1 2 3 6 7 8 10 15 18; Vendor B: 4 5 9 11 12 13 14 16 17.
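A tiny, self-contained Scala sketch of that zipper-style merge: walk two sorted id lists in lock-step and emit the ids present in both, using scans only and no lookups. The values mirror the slide's example; this illustrates the idea, not Solr's internals:

object OrderedMergeJoinSketch {
  def intersectSorted(a: List[Int], b: List[Int]): List[Int] = (a, b) match {
    case (x :: xs, y :: ys) if x == y => x :: intersectSorted(xs, ys)
    case (x :: xs, y :: _) if x < y   => intersectSorted(xs, b) // advance the smaller side
    case (_ :: _, _ :: ys)            => intersectSorted(a, ys)
    case _                            => Nil
  }

  def main(args: Array[String]): Unit = {
    val cash    = List(1, 4, 5, 7, 8, 9, 10, 14, 16)
    val vendorA = List(1, 2, 3, 6, 7, 8, 10, 15, 18)
    println(intersectSorted(cash, vendorA)) // trips that are both cash and Vendor A
  }
}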
126. Questions? tiny.cloudera.com/app-arch-questions
High level architecture
(Slide diagram) Layers: Source → Transport → Stream Processing → Storage → Access. Source: custom producer. Storage: nested tables, indexed cube, relational tables, entity time series lookup. Access: batch processing, SQL, NRT REST, NRT dashboard.
127. Questions? tiny.cloudera.com/app-arch-questions
Why have batch processing?
▪ When you need a larger context
- Say, to train a model
▪ Complex periodic job that does something
- Convert data to a nested structure for reduced number of shuffles
▪ In our use-case,
- Kudu -> HDFS Nested is batch processing
- The KMeans calculation is also done in batch (see the sketch below)
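A minimal Spark MLlib sketch in Scala of that kind of batch KMeans job over the full history; the input path, columns, k, and iteration count are assumptions:

import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.sql.SparkSession

object TripKMeansSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("trip-kmeans").getOrCreate()

    // Read the whole history - the "larger context" that batch provides.
    val points = spark.read.parquet("/data/taxi/trips") // hypothetical path
      .select("pickup_lon", "pickup_lat")
      .rdd
      .map(r => Vectors.dense(r.getDouble(0), r.getDouble(1)))
      .cache()

    val model = KMeans.train(points, 10, 20) // k = 10 clusters, 20 iterations
    model.clusterCenters.foreach(println)
    spark.stop()
  }
}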
137. Questions? tiny.cloudera.com/app-arch-questions
Why have REST server?
▪ Tired of business people telling us how to access data
▪ Serves as an interface between the data engineers and business folks
▪ Lets business folks decide access patterns
▪ Engineers to optimize those patterns
▪ Brownie points from your boss
▪ And, it’s not that difficult to write!
138. Questions? tiny.cloudera.com/app-arch-questions
Don’t believe me?
import org.mortbay.jetty.Server
import org.mortbay.jetty.servlet.{Context, ServletHolder}
…
// Embedded Jetty server hosting a Jersey (JAX-RS) servlet container.
val server = new Server(port)
val sh = new ServletHolder(classOf[ServletContainer])
// Tell Jersey which package to scan for annotated resource classes.
sh.setInitParameter("com.sun.jersey.config.property.resourceConfigClass",
"com.sun.jersey.api.core.PackagesResourceConfig")
sh.setInitParameter("com.sun.jersey.config.property.packages",
"com.hadooparchitecturebook.taxi360.server.hbase")
// Map POJOs to and from JSON automatically.
sh.setInitParameter("com.sun.jersey.api.json.POJOMappingFeature", "true")
// Mount the servlet at the root context and route all paths to it.
val context = new Context(server, "/", Context.SESSIONS)
context.addServlet(sh, "/*")
server.start()
server.join()
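For completeness, a hypothetical Jersey resource class of the kind that would live in the package registered above; the path, parameter, and response body are illustrative, not the deck's actual taxi360 implementation:

import javax.ws.rs.{GET, Path, PathParam, Produces}
import javax.ws.rs.core.MediaType

@Path("/vehicle")
class VehicleResource {

  @GET
  @Path("/{vehicleId}")
  @Produces(Array(MediaType.APPLICATION_JSON))
  def getSummary(@PathParam("vehicleId") vehicleId: String): String = {
    // In the real application this would read the entity time series from HBase or Kudu.
    s"""{"vehicleId":"$vehicleId","tripCount":0}"""
  }
}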
146. Questions? tiny.cloudera.com/app-arch-questions
SQL engine criteria
▪ Low latency SQL access
▪ Allows for high concurrency
▪ JDBC/ODBC integration
▪ Capable of large scale aggregation
▪ Optionally integrates with Kudu for real-time updates to SQL tables