Real time fraud detection at
1+M scale on hadoop stack
Ishan Chhabra
Nitin Aggarwal
Rocketfuel Inc
Agenda
• Rocketfuel & Advertising auction process
• Various kinds of frauds
• Problem statement
• Helios: Architecture
• Implementation in Hadoop Ecosystem
• Details about HDFS spout and datacube
• Key takeaways
Rocketfuel Inc
• AdTech firm that enables marketers using AI & Big Data
• Scores 120+ Billion Ad auctions in a day
• Handles 1-2 Million TPS at peak traffic
Auction Process
[Diagram: auction flow with (4b) Notification and (5) Record impression]
Exchange - Rocketfuel discrepancy
(4b) Notification vs. (5) Record impression
count(4b) != count(5)
Rocketfuel - Advertiser discrepancy
(5) Record impression
count(5) != count(6)
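Both discrepancy checks reduce to comparing per-key event counts across two streams. A minimal illustrative sketch (the function name, key shape, and the 5% threshold are assumptions, not from the deck):

```python
from collections import Counter

def find_discrepancies(notifications, impressions, threshold=0.05):
    """Flag keys where impression counts diverge from notification counts
    by more than `threshold` (as a fraction). Keys are illustrative, e.g.
    (exchange, site) pairs or just site ids."""
    served = Counter(notifications)    # count(4b): exchange-side events
    recorded = Counter(impressions)    # count(5): Rocketfuel-side events
    flagged = {}
    for key, n in served.items():
        m = recorded.get(key, 0)
        if n and abs(n - m) / n > threshold:
            flagged[key] = (n, m)
    return flagged
```

The same comparison applied to counts (5) and (6) catches the Rocketfuel - Advertiser discrepancy.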
Common causes
• Fraud
– Bot networks and malware
– Hidden ad slots
• Human error
– Ad JavaScript site- or browser-specific issues
– Bugs in Ad JavaScript
– 3rd-party JavaScript interactions in Ad or site
Need for real time
• Micro-patterns that change frequently
• Latency has a big business impact; delays in reacting lead to loss of money
• Discrepancies often arise from breakages and sudden, unexpected changes
Goal: Significantly reduce money loss on both ends by reacting to these micro-patterns in near real time
Data flow
[Diagram: Bidding Sites (x2) and Analytics Site]
Data flow
Bids & Notifications
(batched and delayed)
Impressions
(near real time)
Bidding Site, Analytics Site
Problem statement
• 3 streams with various delays (2 from HDFS, 1 from Kafka)
• Join and aggregate
• Filter among 2^n feature combinations to identify the top
culprits (OLAP cube)
• Feedback into bidding
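The "filter among 2^n feature combinations" step can be sketched as a brute-force enumeration over feature subsets, ranking each combination by the discrepancy volume it explains. A toy illustration only (field names and the `missing` metric are assumptions; the real pipeline does this over an OLAP cube rather than in memory):

```python
from itertools import combinations
from collections import defaultdict

def top_culprits(events, features, top_k=3):
    """Rank all 2^n feature-value combinations by total discrepancy.
    `events` is an iterable of dicts carrying the feature fields plus a
    `missing` count (impressions lost)."""
    totals = defaultdict(int)
    for ev in events:
        for r in range(1, len(features) + 1):
            for combo in combinations(features, r):
                # A combination fixes some features and wildcards the rest
                key = tuple((f, ev[f]) for f in combo)
                totals[key] += ev["missing"]
    return sorted(totals.items(), key=lambda kv: -kv[1])[:top_k]
```

The top-ranked combinations are the candidates to feed back into bidding.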
Lambda architecture
Logs
Storm & HBase on
YARN (Slider)
Serving Infra
(Bidders and Ad-
servers)
Near real-time pipeline
Batch pipeline
Helios: Abstraction for real time learning
• Real time processing of data streams from sources like Kafka
and HDFS, with efficient join
• Process joined event views to generate different analytics,
using HBase and MapReduce
• OLAP support
• Join with dimensional data; different use-cases
Logs
Storm Cluster
(Slider and YARN)
HBase Cluster
(Slider and YARN)
Serving Infra
(Bidders and Ad-servers)
Helios architecture
OLAP
Metrics
Step 1a: Ingesting events from Kafka
Logs
Storm Cluster
(Slider and YARN)
Serving Infra
(Bidders and Ad-servers)
Processing Kafka events in real-time
• Relies on log streams written to Kafka by Scribe
• Kafka topic with 200+ partitions
• Data produced and written via Scribe from more than 3K nodes
• Using upstream Kafka spout to read data
– Spout granularity is at record-level
– Uses Zookeeper extensively for book-keeping
Processing Kafka events in real-time
• Topology Statistics:
– Running on YARN as an application, so easily scalable
•Container: Memory: 2700m
– Running with 25 workers (5 executors/worker)
– Supervisor JVM opts:
•-Xms512m -Xmx512m -XX:PermSize=64m -XX:MaxPermSize=64m
– Worker JVM opts:
•-Xmx1800m -Xms1800m
– Processing nearly 100K events per second
Step 1b: Ingesting events from HDFS
Logs
Storm Cluster
(Slider and YARN)
Serving Infra
(Bidders and Ad-servers)
Processing HDFS events in real-time
• Relies on log streams written to HDFS by Scribe
• WAN limitations introduce high compression needs
• DistCp, rather than Kafka
• Using in-house Storm spout to read streams from HDFS
Processing Bid-logs in real-time
Storm Topology Statistics:
• Running on YARN as an application via Slider (easily scalable)
–Container: Memory: 2700m
• Currently running with 350 workers (~10 executors/worker).
• Supervisor JVM opts:
–-Xms512m -Xmx512m -XX:PermSize=64m -XX:MaxPermSize=64m
• Worker JVM opts:
–-Xmx1800m -Xms1800m
• Processing nearly 1.5-2.0 million events per second (~ 100+B
events per day)
HDFS Spout Architecture
• Master-slave architecture
• Spout granularity is at file level, with record-level offset bookkeeping
• Uses Zookeeper extensively for book-keeping
–Curator and its recipes make life a lot easier
• Heavily influenced by the Kafka spout
HDFS Spout Architecture
[Diagram: spout leader and spout workers coordinating via ZK nodes:
un-assigned, locked, checkpoint, done, offset, offset-lock]
HDFS Spout Architecture
• Assignment Manager (AM):
– Elected via a leader election algorithm
– Polls HDFS periodically to identify new files, based on timestamp
and partitioned paths
– Publishes files to be processed as work tasks in ZooKeeper (ZK)
– Manages time and path offsets, for cleaning up done nodes
– Creates periodic done-markers on HDFS
HDFS Spout Architecture
• Worker (W):
– Selects work-tasks from those available in ZK when done with
current work, using ephemeral-node locking
– Performs file checkpointing using the record offset in ZK, to save
work
– Creates a done node in ZK after processing the file
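The worker lifecycle above can be sketched against an in-memory stand-in for the ZK node hierarchy. This is an illustration only: the real spout uses Curator against a live ZooKeeper ensemble, where ephemeral nodes provide the lock semantics that `fail` simulates here.

```python
import threading

class FakeZk:
    """In-memory stand-in for the spout's ZK node hierarchy
    (unassigned / locked / checkpoint / processed)."""
    def __init__(self, files):
        self.unassigned = set(files)
        self.locked = {}        # file -> worker (ephemeral node in real ZK)
        self.checkpoint = {}    # file -> record offset
        self.processed = set()
        self._mu = threading.Lock()

    def claim(self, worker):
        # Atomically move one file from unassigned to locked.
        with self._mu:
            if not self.unassigned:
                return None
            f = self.unassigned.pop()
            self.locked[f] = worker
            return f

    def save_offset(self, f, offset):
        # Periodic record-offset checkpointing, so a restart resumes mid-file.
        self.checkpoint[f] = offset

    def complete(self, f):
        # Worker finished the file: release the lock, mark it processed.
        with self._mu:
            self.locked.pop(f, None)
            self.processed.add(f)

    def fail(self, worker):
        # Ephemeral locks vanish when a worker dies; the AM re-queues its files.
        with self._mu:
            for f, w in list(self.locked.items()):
                if w == worker:
                    del self.locked[f]
                    self.unassigned.add(f)
```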
HDFS Spout Architecture
Bookkeeping node hierarchy:
• Pluggable backend: current implementation uses ZK
• Work Life Cycle
– unassigned - file added here by AM
– locked - created by worker on selecting work
– checkpoint - timely checkpointing here
– processed - created by worker on completion
• Offset Management
– offset - stores path, time offset of HDFS
– offset-lock - ephemeral lock for offset update
HDFS Spout Architecture
• Spout Failures
– Slaves: work is made available again by the master
– Master: one of the slaves becomes master via leader election and
gives up its slave duties
• Spouts contend for work assignment via ZK ephemeral nodes
• Leverages the partitioned data directories and done-marker based
model used in the organization
Comparison with official HDFS spout
Storm-1199
• Uses HDFS for book-keeping
• Moves or renames source files
• All-slave architecture; all spouts
contend for failed work
• No leverage for partitioned data
• Kerberos support
In-house Implementation
• Uses ZK for book-keeping
• No changes to source files
• Master-slave architecture with
leader election
• Leverages partitioned data and
done-markers
• No Kerberos support
Step 2: Join via HBase
Logs
Storm Cluster
(Slider and YARN)
HBase Cluster
(Slider and YARN)
Donemarkers
HBase for joining streams of data
• Uses request-id as the row key to join different streams
• Different column qualifiers for different event streams
• HBase cluster configuration
–Running on YARN as a service via Slider
–Region-servers: 40 instances, with 4G memory each
–Optimized for writes, with a large MemStore
–Tuned compactions to avoid unnecessary merging of files,
as they expire quickly (low retention)
•Date-based compactions are available in HBase 2.0.
• Write throughput: 1M+ TPS
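The join itself can be sketched with a plain dictionary standing in for the HBase table: one row per request-id, one column qualifier per stream, and a row is a fully joined view once every stream has written to it. Qualifier names below are illustrative, not from the deck.

```python
from collections import defaultdict

def hbase_style_join(streams):
    """Sketch of the HBase join. `streams` maps a column-qualifier name
    (e.g. 'bid', 'notification', 'impression') to an iterable of
    (request_id, payload) pairs; each stream writes under the shared
    row key. Returns the full table plus the completely joined rows."""
    table = defaultdict(dict)           # row key -> {qualifier: payload}
    for qualifier, events in streams.items():
        for request_id, payload in events:
            table[request_id][qualifier] = payload
    complete = {rid: row for rid, row in table.items()
                if len(row) == len(streams)}
    return table, complete
```

Because each stream writes independently whenever its (possibly delayed) events arrive, no stream has to wait for another; the row simply fills in over time.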
Observations from running Storm at scale
• ZeroMQ more stable than Netty in version 0.9.x
– Many Netty Optimizations available in 0.10.x
• Local-shuffle mode helpful for large data volumes
• Need to tune heartbeat intervals
– (task|worker|supervisor).heartbeat.frequency.secs
– Pacemaker: Available in 1.0
• Need to tune code sync interval
– Distributed Cache: Available in 1.0
Step 3: Scan joined view and populate OLAP
OLAP
Metrics
Donemarkers
Event
Streams
Start MR Job
OLAP with multi-dimensional data
• Developed a MapReduce-backed workflow
– Cron-triggered hourly jobs based on donemarkers
– Scan data from HBase using snapshots
– Semantics for hour boundaries
– Event metric reporting
OLAP with multi-dimensional data
• Modular API for processing records
– Pluggable architecture for different use-cases
– OLAP implemented as a first-class use-case
• Uses the datacube library (Urban Airship) for generating OLAP
data
– Configurable metric reporting
OLAP with multi-dimensional data
Datacube for OLAP
• Library was developed at Urban Airship.
• About the API
– Need to define dimensions and rollups for the cube
– IO library for writing measures for cube
– Pluggable Databases: HBase, In-memory Map
– ID Service: Optimization for encoding values via ID substitution
– Support for bulk-loading and backfilling
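The dimensions-and-rollups model can be illustrated with a toy cube that materializes every subset of the declared dimensions, wildcarding the rest. This is an assumed simplification: the real library also handles ID substitution, batching, and HBase-backed IO.

```python
from itertools import combinations
from collections import Counter

def build_cube(events, dimensions):
    """Minimal datacube-style rollup: each event increments one counter
    per dimension subset, with '*' marking a rolled-up (wildcarded)
    dimension. Returns a Counter keyed by rollup tuples."""
    cube = Counter()
    for ev in events:
        for r in range(len(dimensions) + 1):
            for combo in combinations(dimensions, r):
                key = tuple(ev[d] if d in combo else "*" for d in dimensions)
                cube[key] += 1
    return cube
```

Querying any rollup is then a single key lookup, which is what makes the later "scan OLAP cube for top feature vectors" step cheap.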
OLAP with multi-dimensional data
New features (forked)
• Reverse lookups for scans
• New InputFormat for MR Jobs
• Prefix hashes (data and lookups) for load distribution.
• Optimized DB performance by using the AsyncHBase library for
efficient reads/writes
MR Job statistics
• Use HBase Snapshots
• MR job runs every hour (run time: 5-15 mins)
• Hour is closed with a delay of 30-60 minutes (on average), given
log rotation and shipping (Scribe) latencies
Step 4: Scan OLAP cube for top feature vectors
OLAP
Metrics
Donemarkers
Start MR Job
Feature
Vectors
OLAP with multi-dimensional data
Serialize OLAP View
• Customizable MapReduce job scans OLAP data (backed by HBase)
and writes it to HDFS
• Other jobs can use this easily accessible data from HDFS for
processing, and upload computed feedback stats to stores like MySQL
MR Job Statistics
• MR job runs every hour (Runtime: 2-5mins)
DevOps Automation
• Monitoring Service
• Topology submission service
Key Takeaways
• The Hadoop ecosystem offers a productive stack for high-velocity,
real-time learning problems
• YARN allows one to easily experiment with and tweak vertical
to horizontal scalability ratios
THANKS!
ANY QUESTIONS?
Reach us at
ichhabra@rocketfuel.com
naggarwal@rocketfuel.com
