7/16/2014
Map Reduce Introduction
Simon Shim
The problems…
• Performing database operations in parallel is
complicated, and scalability is challenging to
implement correctly
• How do we store 1,000x as much data?
• How do we process data where we don’t
know the schema in advance?
What does it mean to be reliable?
Ken Arnold, CORBA designer*:
“Failure is the defining difference between distributed and local
programming”
*(Serious Über-hacker)
What we need
• An efficient way to decompose problems into
parallel parts
• A way to read and write data in parallel
• A way to minimize bandwidth usage
• A reliable way to get computation done
A Radical Way Out…
• Nodes talk to each other as little as possible –
maybe never
– “Shared nothing” architecture
• Programmers should not be allowed to explicitly
communicate between nodes
• Data is spread throughout machines in
advance, computation happens where it’s
stored.
Locality
• Master program divides up tasks based on
location of data: tries to have map tasks on
same machine as physical file data, or at least
same rack
• Map task inputs are divided into 64-128 MB
blocks: same size as filesystem chunks
– Process components of a single file in parallel
Fault Tolerance
• Tasks designed for independence
• Master detects worker failures
• Master re-executes tasks that fail while in
progress
• Restarting one task does not require
communication with other tasks
• Data is replicated to increase availability,
durability
How MapReduce is Structured
• Functional programming meets distributed
computing
• A batch data processing system
• Factors out many reliability concerns from
application logic
Functional Programming Review
• Functional operations do not modify data
structures: They always create new ones
• Original data still exists in unmodified form
• Data flows are implicit in program design
• Order of operations does not matter
Functional Programming Review
fun foo(l: int list) =
sum(l) + mul(l) + length(l)
Order of sum() and mul(), etc does not matter
– they do not modify l
“Updates” Don’t Modify Structures
fun append(x, lst) =
let lst' = reverse lst in
reverse ( x :: lst' )
The append() function above reverses the list, adds the new
element to the front, and reverses the result: the net effect
is appending an item to the end.
But it never modifies lst!
Functions Can Be Used As Arguments
fun DoDouble(f, x) = f (f x)
It does not matter what f does to its
argument; DoDouble() will do it twice.
Map
Creates a new list by applying f to each element of the
input list; returns output in order.
Fold
Moves across a list, applying f to each element plus an
accumulator. f returns the next accumulator value,
which is combined with the next element of the list
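Both operations are easy to sketch in Python (a minimal sketch with hypothetical helper names; `functools.reduce` plays the role of foldl):

```python
from functools import reduce

def my_map(f, lst):
    # Apply f to every element, building a new list; the input is untouched.
    return [f(x) for x in lst]

def foldl(f, initial, lst):
    # Walk the list left to right, threading an accumulator through f.
    return reduce(f, lst, initial)

doubled = my_map(lambda x: 2 * x, [1, 2, 3])         # [2, 4, 6]
total = foldl(lambda acc, x: acc + x, 0, [1, 2, 3])  # 6
```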
Example
fun foo(l: int list) =
sum(l) + mul(l) + length(l)
How can we implement this?
Example (Solved)
fun foo(l: int list) =
sum(l) + mul(l) + length(l)
fun sum(lst) = foldl (fn (x,a) => x+a) 0 lst
fun mul(lst) = foldl (fn (x,a) => x*a) 1 lst
fun length(lst) = foldl (fn (x,a) => 1+a) 0 lst
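The same three folds, transliterated to Python as a sketch (`functools.reduce` with an explicit seed stands in for foldl):

```python
from functools import reduce

def sum_l(lst):
    return reduce(lambda a, x: a + x, lst, 0)

def mul_l(lst):
    return reduce(lambda a, x: a * x, lst, 1)

def length_l(lst):
    # Each element's value is ignored; every element just adds 1.
    return reduce(lambda a, x: a + 1, lst, 0)

def foo(lst):
    return sum_l(lst) + mul_l(lst) + length_l(lst)
```

For example, foo([1, 2, 3]) is 6 + 6 + 3 = 15.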
Implicit Parallelism In map
• In a purely functional setting, elements of a list being
computed by map cannot see the effects of the
computations on other elements
• If order of application of f to elements in list is
commutative, we can reorder or parallelize execution
• This is the “secret” that MapReduce exploits
Motivation: Large Scale Data
Processing
• Want to process lots of data ( > 1 TB)
• Want to parallelize across
hundreds/thousands of CPUs
• … Want to make this easy
MapReduce
• Automatic parallelization & distribution
• Fault-tolerant
• Provides status and monitoring tools
• Clean abstraction for programmers
Programming Model
• Borrows from functional programming
• Users implement interface of two functions:
– map (in_key, in_value) ->
(out_key, intermediate_value) list
– reduce (out_key, intermediate_value list) ->
out_value list
map (in_key, in_value) ->
(out_key, intermediate_value) list
reduce
• After the map phase is over, all the
intermediate values for a given output key are
combined together into a list
• reduce() combines those intermediate values
into one or more final values for that same
output key
• (in practice, usually only one final value per
key)
reduce
reduce (intermediate_key, intermediate_value list) ->
(out_key, out_value) list
Example: Filter Mapper
let map(k, v) =
if (isPrime(v)) then emit(k, v)
(“foo”, 7) → (“foo”, 7)
(“test”, 10) → (nothing)
Example: Sum Reducer
let reduce(k, vals) =
sum = 0
foreach int v in vals:
sum += v
emit(k, sum)
(“A”, [42, 100, 312]) → (“A”, 454)
(“B”, [12, 6, -2]) → (“B”, 16)
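A minimal in-memory simulation of the two examples above (a sketch with hypothetical names, not the real MapReduce library; the grouping step plays the role of the shuffle barrier):

```python
from collections import defaultdict

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def filter_mapper(k, v):
    # Emit the pair only when the value is prime.
    if is_prime(v):
        yield (k, v)

def sum_reducer(k, vals):
    yield (k, sum(vals))

def run_mapreduce(mapper, reducer, records):
    groups = defaultdict(list)          # the shuffle "barrier": group by key
    for k, v in records:
        for out_k, out_v in mapper(k, v):
            groups[out_k].append(out_v)
    results = []
    for k, vals in sorted(groups.items()):
        results.extend(reducer(k, vals))
    return results
```

For instance, run_mapreduce(filter_mapper, sum_reducer, [("foo", 7), ("test", 10)]) keeps only ("foo", 7).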
[Diagram: input key/value pairs from Data store 1 through Data store n
feed parallel map tasks, each emitting (key, values...) pairs for keys
1, 2, and 3. A barrier then aggregates intermediate values by output
key, and parallel reduce tasks produce the final values for each key.]
Parallelism
• map() functions run in parallel, creating
different intermediate values from different
input data sets
• reduce() functions also run in parallel, each
working on a different output key
• All values are processed independently
• Bottleneck: reduce phase can’t start until map
phase is completely finished.
Example: Count word occurrences
map(String input_key, String input_value):
// input_key: document name
// input_value: document contents
for each word w in input_value:
EmitIntermediate(w, "1");
reduce(String output_key, Iterator
intermediate_values):
// output_key: a word
// output_values: a list of counts
int result = 0;
for each v in intermediate_values:
result += ParseInt(v);
Emit(AsString(result));
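The same logic in plain Python, run through an in-memory group-by (a sketch of the pattern, not the actual library):

```python
from collections import defaultdict

def word_count(documents):
    # documents: {document_name: contents}
    intermediate = defaultdict(list)
    for name, contents in documents.items():     # map phase
        for word in contents.split():
            intermediate[word].append(1)         # EmitIntermediate(w, 1)
    # reduce phase: sum the per-word list of counts
    return {word: sum(counts) for word, counts in intermediate.items()}
```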
Example vs. Actual Source Code
• Example is written in pseudo-code
• Actual implementation is in C++, using a
MapReduce library
• Bindings for Python and Java exist via
interfaces
• True code is somewhat more involved (defines
how the input key/values are divided up and
accessed, etc.)
Locality
• Master program divvies up tasks based on
location of data: tries to have map() tasks on
same machine as physical file data, or at least
same rack
• map() task inputs are divided into 64 MB
blocks: same size as Google File System
chunks
Fault Tolerance
• Master detects worker failures
– Re-executes completed & in-progress map() tasks
– Re-executes in-progress reduce() tasks
• Master notices particular input key/values
cause crashes in map(), and skips those values
on re-execution.
– Effect: Can work around bugs in third-party
libraries!
Optimizations
• No reduce can start until map is complete:
– A single slow disk controller can rate-limit the
whole process
• Master redundantly executes “slow-moving”
map tasks; uses results of first copy to finish
Combining Phase
• Run on mapper nodes after map phase
• “Mini-reduce,” only on local map output
• Used to save bandwidth before sending data
to full reducer
• Reducer can be combiner if commutative &
associative
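The bandwidth saving is easy to see in a sketch (hypothetical helper names):

```python
from collections import Counter

def map_words(text):
    # Raw map output: one (word, 1) pair per token.
    return [(w, 1) for w in text.split()]

def combine(pairs):
    # "Mini-reduce" on one mapper's local output: pre-sum counts per word
    # so fewer pairs cross the network. Valid because + is commutative
    # and associative.
    acc = Counter()
    for w, n in pairs:
        acc[w] += n
    return sorted(acc.items())
```

On "to be or not to be", the raw map output is 6 pairs; the combined output is only 4.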
Combiner, graphically
[Diagram: on one mapper machine, the combiner replaces the raw map
output bound for the reducer with a locally reduced, smaller set of pairs.]
Word Count Example redux
map(String input_key, String input_value):
// input_key: document name
// input_value: document contents
for each word w in input_value:
EmitIntermediate(w, 1);
reduce(String output_key, Iterator<int>
intermediate_values):
// output_key: a word
// output_values: a list of counts
int result = 0;
for each v in intermediate_values:
result += v;
Emit(result);
wordcount
public void map(WritableComparable key, Writable value, OutputCollector output,
                Reporter reporter) throws IOException {
  // 'word' (a reusable Text) and 'one' (an IntWritable holding 1) are class fields
  String line = ((UTF8) value).toString();
  StringTokenizer itr = new StringTokenizer(line);
  while (itr.hasMoreTokens()) {
    word.set(itr.nextToken());
    output.collect(word, one);
  }
}

public void reduce(WritableComparable key, Iterator values,
                   OutputCollector output, Reporter reporter) throws IOException {
  int sum = 0;
  while (values.hasNext()) {
    sum += ((IntWritable) values.next()).get();
  }
  output.collect(key, new IntWritable(sum));
}
Distributed “Tail Recursion”
• MapReduce doesn’t make infinite scalability
automatic.
• Is word count infinitely scalable? Why (not)?
What About This?
UniqueValuesReducer(K key, iter<V> values) {
  Set<V> seen = new HashSet<V>();
  for (V val : values) {
    if (!seen.contains(val)) {
      seen.add(val);   // the set of seen values must fit in one reducer's memory
      emit(key, val);
    }
  }
}
A Scalable Implementation
KeyifyMapper(K key, V val) {
emit ((key, val), 1);
}
IgnoreValuesCombiner(K key, iter<V> values) {
emit (key, 1);
}
UnkeyifyReducer(K key, iter<V> values) {
let (k', v') = key;
emit (k', v');
}
MapReduce Conclusions
• MapReduce has proven to be a useful abstraction
• Greatly simplifies large-scale computations at Google
• Functional programming paradigm can be applied to
large-scale applications
• Fun to use: focus on problem, let library deal w/
messy details
Hadoop Introduction
MapReduce/Hadoop
Google calls it:   Hadoop equivalent:
MapReduce          Hadoop
GFS                HDFS
Bigtable           HBase
Chubby             ZooKeeper
Some MapReduce Terminology
• Job – A “full program” - an execution of a
Mapper and Reducer across a data set
• Task – An execution of a Mapper or a Reducer
on a slice of data
– a.k.a. Task-In-Progress (TIP)
• Task Attempt – A particular instance of an
attempt to execute a task on a machine
MapReduce: High Level
[Diagram: a MapReduce job submitted by a client computer goes to the
JobTracker on the master node; TaskTrackers on the slave nodes each
run their assigned task instances.]
Job Distribution
• MapReduce programs are contained in a Java “jar”
file + an XML file containing serialized program
configuration options
• Running a MapReduce job places these files into the
HDFS and notifies TaskTrackers where to retrieve the
relevant program code
• … Where’s the data distribution?
Data Distribution
• Implicit in design of MapReduce!
– All mappers are equivalent; so map whatever data
is local to a particular node in HDFS
• If lots of data does happen to pile up on the
same node, nearby nodes will map instead
– Data transfer is handled implicitly by HDFS
What Happens In MapReduce?
Depth First
Job Launch Process: Client
• Client program creates a JobConf
– Identify classes implementing Mapper and
Reducer interfaces
• JobConf.setMapperClass(), setReducerClass()
– Specify inputs, outputs
• FileInputFormat.addInputPath(),
• FileOutputFormat.setOutputPath()
– Optionally, other options too:
• JobConf.setNumReduceTasks(),
JobConf.setOutputFormat()…
Job Launch Process: JobClient
• Pass JobConf to JobClient.runJob() or
submitJob()
– runJob() blocks, submitJob() does not
• JobClient:
– Determines proper division of input into
InputSplits
– Sends job data to master JobTracker server
Job Launch Process: JobTracker
• JobTracker:
– Inserts jar and JobConf (serialized to XML) in
shared location
– Posts a JobInProgress to its run queue
Job Launch Process: TaskTracker
• TaskTrackers running on slave nodes
periodically query JobTracker for work
• Retrieve job-specific jar and config
• Launch task in separate instance of Java
Job Launch Process: TaskRunner
• TaskRunner, MapTaskRunner, MapRunner
work in a daisy-chain to launch your Mapper
– Task knows ahead of time which InputSplits it
should be mapping
– Calls Mapper once for each record retrieved from
the InputSplit
• Running the Reducer is much the same
Mapper
• void map(K1 key,
V1 value,
OutputCollector<K2, V2> output,
Reporter reporter)
• K types implement WritableComparable
• V types implement Writable
What is Writable?
• Hadoop defines its own classes for strings
(Text), integers (IntWritable), etc.
• All values are instances of Writable
• All keys are instances of WritableComparable
Getting Data To The Mapper
[Diagram: an InputFormat splits each input file into InputSplits; one
RecordReader per split extracts (key, value) records, which feed a
Mapper that emits intermediates.]
Data: Stream of keys and values

Input (byte offset → line):
  <0>   Hi how are you
  <100> I am good
  <0>   Hello Hello how are you
  <105> Not so good

Map output (intermediate results):
  Hi 1, how 1, are 1, you 1
  Hello 1, Hello 1, how 1, are 1, you 1

Sorted and merged:
  are [1 1]
  Hello [1 1]
  Hi [1]
  how [1 1]
  you [1 1]

Reduce output:
  are 2
  Hello 2
  Hi 1
  how 2
  you 2
wordcount
public void map(WritableComparable key, Writable value, OutputCollector output,
                Reporter reporter) throws IOException {
  // 'word' (a reusable Text) and 'one' (an IntWritable holding 1) are class fields
  String line = ((UTF8) value).toString();
  StringTokenizer itr = new StringTokenizer(line);
  while (itr.hasMoreTokens()) {
    word.set(itr.nextToken());
    output.collect(word, one);
  }
}

public void reduce(WritableComparable key, Iterator values,
                   OutputCollector output, Reporter reporter) throws IOException {
  int sum = 0;
  while (values.hasNext()) {
    sum += ((IntWritable) values.next()).get();
  }
  output.collect(key, new IntWritable(sum));
}
public static void main(String[] args) throws Exception {
  Configuration conf = new Configuration();
  System.out.println("Started Job");
  Job job = new Job(conf, "wordcount");
  job.setJarByClass(WordCount.class);
  job.setMapperClass(Map.class);
  job.setReducerClass(Reduce.class);
  job.setOutputKeyClass(Text.class);
  job.setOutputValueClass(IntWritable.class);
  job.setInputFormatClass(TextInputFormat.class);
  job.setOutputFormatClass(TextOutputFormat.class);
  FileInputFormat.addInputPath(job, new Path(args[0]));
  FileOutputFormat.setOutputPath(job, new Path(args[1]));
  System.exit(job.waitForCompletion(true) ? 0 : 1);
}
Reading Data
• Data sets are specified by InputFormats
– Defines input data (e.g., a directory)
– Identifies partitions of the data that form an
InputSplit
– Factory for RecordReader objects to extract (k, v)
records from the input source
FileInputFormat and Friends
• TextInputFormat – Treats each ‘\n’-terminated
line of a file as a value
• KeyValueTextInputFormat – Maps ‘\n’-
terminated text lines of “k SEP v”
• SequenceFileInputFormat – Binary file of (k, v)
pairs with some add’l metadata
• SequenceFileAsTextInputFormat – Same, but
maps (k.toString(), v.toString())
Filtering File Inputs
• FileInputFormat will read all files out of a
specified directory and send them to the
mapper
• Delegates filtering this file list to a method
subclasses may override
– e.g., Create your own “xyzFileInputFormat” to
read *.xyz from directory list
Record Readers
• Each InputFormat provides its own
RecordReader implementation
– Provides (unused?) capability multiplexing
• LineRecordReader – Reads a line from a text
file
• KeyValueRecordReader – Used by
KeyValueTextInputFormat
Input Split Size
• FileInputFormat will divide large files into
chunks
– Exact size controlled by mapred.min.split.size
• RecordReaders receive file, offset, and length
of chunk
• Custom InputFormat implementations may
override split size – e.g., “NeverChunkFile”
Sending Data To Reducers
• Map function receives OutputCollector object
– OutputCollector.collect() takes (k, v) elements
• Any (WritableComparable, Writable) can be
used
• By default, mapper output type assumed to
be same as reducer output type
WritableComparator
• Compares WritableComparable data
– Will call WritableComparable.compare()
– Can provide fast path for serialized data
• JobConf.setOutputValueGroupingComparator()
Partition And Shuffle
Mapper
(intermediates)
Mapper
(intermediates)
Mapper
(intermediates)
Mapper
(intermediates)
Reducer Reducer Reducer
(intermediates) (intermediates) (intermediates)
Partitioner Partitioner Partitioner Partitioner
shuffling
Partitioner
• int getPartition(key, val, numPartitions)
– Outputs the partition number for a given key
– One partition == values sent to one Reduce task
• HashPartitioner used by default
– Uses key.hashCode() to return partition num
• JobConf sets Partitioner implementation
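The default behavior can be sketched in a few lines of Python (mirroring HashPartitioner; Python's `hash` stands in for Java's `hashCode()`, and the sign-bit mask matches Hadoop's handling of negative hash codes):

```python
def get_partition(key, num_partitions):
    # Mask off the sign bit (hashCode() can be negative), then take the
    # remainder to pick which reduce task receives this key.
    return (hash(key) & 0x7FFFFFFF) % num_partitions
```

Every occurrence of the same key lands in the same partition, which is what guarantees one reducer sees all of a key's values.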
Reduction
• reduce( K2 key,
Iterator<V2> values,
OutputCollector<K3, V3> output,
Reporter reporter)
• Keys & values sent to one partition all go to
the same reduce task
• Calls are sorted by key – “earlier” keys are
reduced and output before “later” keys
Finally: Writing The Output
[Diagram: each Reducer writes through a RecordWriter, provided by the
OutputFormat, to its own output file.]
OutputFormat
• Analogous to InputFormat
• TextOutputFormat – Writes “key \t value\n” strings
to output file
• SequenceFileOutputFormat – Uses a binary
format to pack (k, v) pairs
• NullOutputFormat – Discards output
Pig Latin
Reference: Programming Pig by Alan Gates
Pig
 Too many lines of Hadoop code even for simple
logic
 How many lines do you have for word count?
 High-level dataflow language (Pig Latin)
 Much simpler than Java: ~10 lines of Pig Latin vs. ~200
lines of Java
 Simplifies the data processing
 Framework for analyzing large unstructured and
semi-structured data on top of Hadoop.
– The Pig engine parses and compiles Pig Latin scripts into MapReduce
jobs that run on top of Hadoop.
– Pig Latin is a SQL-like dataflow language: the high-level
language interface for Hadoop.
Pig runs over Hadoop
Who uses Pig?
• 70% of production jobs at Yahoo! (tens of thousands per day)
• Twitter, LinkedIn, eBay, AOL, …
• Used to
– Process web logs
– Build user behavior models
– Process images
– Build maps of the web
– Do research on large data sets
Word Count using MapReduce
Word Count using Pig
Lines = LOAD 'input/hadoop.log' AS (line: chararray);
Words = FOREACH Lines GENERATE FLATTEN(TOKENIZE(line)) AS word;
Groups = GROUP Words BY word;
Counts = FOREACH Groups GENERATE group AS word, COUNT(Words) AS count;
Results = ORDER Counts BY count DESC;
Top5 = LIMIT Results 5;
STORE Top5 INTO '/output/top5words';
Pig Join
 Join in Pig
– Various algorithms are already available.
– Some of them are generic to support multi-way join
– No need to consider integration into a Select-Project-Join-Aggregate
(SPJA) workflow.
A = LOAD 'input/join/A';
B = LOAD 'input/join/B';
C = JOIN A BY $0, B BY $1;
DUMP C;
Join
Motivation by Example
 Suppose we have
user data in one file,
website data in
another file.
 We need to find the
top 5 most visited
pages by users aged
18-25
In Pig Latin
Map Data
 How to map the data to records
– By default, one line → one record
– User can customize the loading process
 How to identify attributes and map them to
the schema
– Delimiter to separate different attributes
– By default, delimiter is tab. Customizable.
Pig Data Types
• Scalar types:
– int, long, float, double, boolean, null, chararray, bytearray
• Complex types: fields, tuples, bags, relations
– A field is a piece of data
– A tuple is an ordered set of fields
– A bag is a collection of tuples
– A relation is a bag
• Samples:
– Tuple → row in a database
• (0002576169, Tom, 20, 4.0)
– Bag → table or view in a database
{(0002576169, Tom, 20, 4.0),
(0002576170, Mike, 20, 3.6),
(0002576171, Lucy, 19, 4.0), …}
Pig Operations
• Loading data
– LOAD loads input data
– Lines = LOAD 'input/access.log' AS (line: chararray);
• Projection
– FOREACH … GENERATE … (similar to SELECT)
– takes a set of expressions and applies them to every record.
• Grouping
– GROUP collects together records with the same key
• Dump/Store
– DUMP displays results on screen; STORE saves results to the file system
• Aggregation
– AVG, COUNT, MAX, MIN, SUM
Pig Operations
• Pig Data Loader
– PigStorage: loads/stores relations using field-delimited
text format
– TextLoader: loads relations from a plain-text format
– BinStorage: loads/stores relations from or to binary
files
– HBaseStorage: loads/stores from HBase
– PigDump: stores relations by writing the toString()
representation of tuples, one per line
students = LOAD 'student.txt' USING PigStorage(',')
as (studentid: int, name: chararray, age: int, gpa: double);
(John, 18, 4.0F)
(Mary, 19, 3.8F)
(Bill, 20, 3.9F)
Schema
 User can optionally define the schema of the input data
 Once the schema of the source data is given, the schema
of the intermediate relation will be induced by Pig
 Schema 1
A = LOAD 'input/A' as (name:chararray, age:int);
B = FILTER A BY age == 20;
 Schema 2
A = LOAD 'input/A' as (name:chararray, age:chararray);
B = FILTER A BY age != '20';
Data Types
Pig Operations - Foreach
• Foreach ... Generate
– The Foreach … Generate statement iterates over
the members of a bag
– The result of a Foreach is another bag
– Elements are named as in the input bag
studentid = FOREACH students GENERATE studentid, name;
Pig Operations – Positional Reference
• Fields are referred to by positional notation or by name (alias).

              First field   Second field   Third field
Data type     chararray     int            float
Position      $0            $1             $2
Name (alias)  name          age            gpa
Field value   John          18             4.0

students = LOAD 'student.txt' USING PigStorage() AS (name:chararray, age:int, gpa:float);
DUMP students;
(John,18,4.0F)
(Mary,19,3.8F)
(Bill,20,3.9F)
studentname = FOREACH students GENERATE $0 AS studentname;
Pig Operations- Group
• Groups the data in one or more relations
– Collect records with the same key into a bag.
daily = LOAD 'NYSE_daily' AS (exchange, stock);
grpd = GROUP daily BY stock;
STORE grpd INTO 'by_group';
Pig Operations – Dump&Store
• DUMP Operator:
– display output results, will always trigger
execution
• STORE Operator:
– Pig will parse entire script prior to writing for
efficiency purposes
daily = LOAD 'NYSE_daily' AS (exchange, stock);
grpd = GROUP daily BY stock;
STORE grpd INTO 'by_group' USING PigStorage(',');
Pig Operations - Count
• Compute the number of elements in a bag
• Use the COUNT function to compute the
number of elements in a bag.
• COUNT requires a preceding GROUP ALL
statement for global counts and GROUP BY
statement for group counts.
X = FOREACH B GENERATE COUNT(A);
Pig Operation - Order
• Sorts a relation based on one or more fields
• In Pig, relations are unordered. If you order
relation A to produce relation X, relations A
and X still contain the same elements.
student = ORDER students BY gpa DESC;
Operators
 Diagnostic Operators
– Show the status/metadata of the relations
– Used for debugging
– Will not be integrated into execution plan
– DESCRIBE, EXPLAIN, ILLUSTRATE.
Functions
 Built-in Functions
– Hard-coded routines offered by Pig.
 User Defined Function (UDF)
– Supports customized functionalities
– Piggy Bank, a warehouse for UDFs
How to run Pig Latin scripts
• Local mode
– Local host and local file system is used
– Neither Hadoop nor HDFS is required
– Useful for prototyping and debugging
• MapReduce mode
– Run on a Hadoop cluster and HDFS
• Batch mode - run a script directly
– pig -x local my_pig_script.pig
– pig -x mapreduce my_pig_script.pig
• Interactive mode - use the Grunt shell to run scripts
– grunt> Lines = LOAD '/input/input.txt' AS (line:chararray);
– grunt> Unique = DISTINCT Lines;
– grunt> DUMP Unique;
Pig Execution Modes
• Local mode
– Launch single JVM
– Access local file system
– No MR job running
• Hadoop mode
– Execute a sequence of MR jobs
– Pig interacts with Hadoop master node
Compilation
Hive
• Hive: data warehousing application in Hadoop
– Query language is HQL, variant of SQL
– Tables stored on HDFS as flat files
– Developed by Facebook, now open source
Programming Hive by Edward Capriolo et al.
https://cwiki.apache.org/confluence/display/Hive/Tutorial
Hive Architecture
Hive - a warehouse solution over the MapReduce framework
Hive Database
Tables
○ Analogous to tables in relational database
○ Each table has a corresponding HDFS dir
○ Data is serialized and stored in files within dir
○ Support external tables on data stored in HDFS, NFS or
local directory.
Partitions
○ A table can have one or more (single-level) partitions, which
determine the distribution of data within subdirectories
of the table directory.
HIVE Database cont.
e.g.: Table T lives under /wh/T and is partitioned on columns
ds and ctry.
For ds=20140101 and ctry=US,
the data is stored within the directory
/wh/T/ds=20140101/ctry=US
– Buckets
• Data in each partition are divided into buckets based on the
hash of a column in the table. Each bucket is stored as a file
in the partition directory.
HiveQL
• Supports a SQL-like query language called HiveQL
for SELECT, JOIN, aggregate, UNION ALL, and
subqueries in the FROM clause
• Supports DDL statements such as CREATE TABLE with
serialization format, partitioning, and
bucketing columns
• Commands to load data from external sources and
INSERT into Hive tables
• Does not support UPDATE and DELETE
Data Model
• Tables
– Typed columns (int, float, string, boolean)
– Also, list: map (for JSON-like data)
• Partitions
– For example, range-partition tables by date
• Buckets
– Hash partitions within ranges (useful for sampling,
join optimization)
Source: cc-licensed slide by Cloudera
Examples – DDL Operations
CREATE TABLE sample (foo INT, bar STRING)
PARTITIONED BY (ds STRING);
DESCRIBE sample;
ALTER TABLE sample ADD COLUMNS (new_col
INT);
DROP TABLE sample;
CREATE TABLE page_view(viewTime INT, userid BIGINT,
page_url STRING, referrer_url STRING,
ip STRING COMMENT 'IP Address of the User')
COMMENT 'This is the page view table'
PARTITIONED BY(country STRING)
STORED AS SEQUENCEFILE;
Examples – DML Operations
LOAD DATA LOCAL INPATH './sample.txt'
OVERWRITE INTO TABLE sample PARTITION
(ds='2012-02-24');
LOAD DATA INPATH '/user/shim/hive/sample.txt'
OVERWRITE INTO TABLE sample PARTITION
(ds='2012-02-24');
SELECTS and FILTERS
SELECT foo FROM sample WHERE ds='2012-02-24';
INSERT OVERWRITE DIRECTORY '/tmp/hdfs_out'
SELECT * FROM sample WHERE ds='2012-02-24';
INSERT OVERWRITE LOCAL DIRECTORY '/tmp/hive-
sample-out' SELECT * FROM sample;
Aggregations and Groups
SELECT MAX(foo) FROM sample;
SELECT ds, COUNT(*), SUM(foo) FROM sample
GROUP BY ds;
Join
SELECT * FROM customer c JOIN order_cust o ON
(c.id=o.cus_id);
SELECT c.id,c.name,c.address,ce.exp FROM customer c JOIN
(SELECT cus_id,sum(price) AS exp FROM order_cust GROUP BY
cus_id) ce ON (c.id=ce.cus_id);
CREATE TABLE customer (id INT,name STRING,address STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '#';
CREATE TABLE order_cust (id INT, cus_id INT, prod_id INT, price INT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
Hive: Example
• Relational join on two tables:
– Table of word counts from Shakespeare and Bible
collection
SELECT s.word, s.freq, k.freq FROM shakespeare s
JOIN bible k ON (s.word = k.word) WHERE s.freq >= 1 AND k.freq >= 1
ORDER BY s.freq DESC LIMIT 10;
the 25848 62394
I 23031 8854
and 19671 38985
to 18038 13526
of 16700 34654
a 14170 8057
you 12702 2720
my 11297 4135
Hive: Behind the Scenes
SELECT s.word, s.freq, k.freq FROM shakespeare s
JOIN bible k ON (s.word = k.word) WHERE s.freq >= 1 AND k.freq >= 1
ORDER BY s.freq DESC LIMIT 10;
(TOK_QUERY (TOK_FROM (TOK_JOIN (TOK_TABREF shakespeare s) (TOK_TABREF bible k) (= (. (TOK_TABLE_OR_COL s) word) (.
(TOK_TABLE_OR_COL k) word)))) (TOK_INSERT (TOK_DESTINATION (TOK_DIR TOK_TMP_FILE)) (TOK_SELECT (TOK_SELEXPR (.
(TOK_TABLE_OR_COL s) word)) (TOK_SELEXPR (. (TOK_TABLE_OR_COL s) freq)) (TOK_SELEXPR (. (TOK_TABLE_OR_COL k) freq))) (TOK_WHERE
(AND (>= (. (TOK_TABLE_OR_COL s) freq) 1) (>= (. (TOK_TABLE_OR_COL k) freq) 1))) (TOK_ORDERBY (TOK_TABSORTCOLNAMEDESC (.
(TOK_TABLE_OR_COL s) freq))) (TOK_LIMIT 10)))
(one or more of MapReduce jobs)
(Abstract Syntax Tree)
References:
1. http://pig.apache.org (Pig official site)
2. http://hive.apache.org (Hive official site)
3. Pig Docs: http://pig.apache.org/docs/r0.9.0
4. Hive Docs: https://cwiki.apache.org/confluence/display/Hive/Home#Home-UserDocumentation
5. Hive Tutorial: https://cwiki.apache.org/confluence/display/Hive/Tutorial
6. Pig Papers: http://wiki.apache.org/pig/PigTalksPapers
7. http://en.wikipedia.org/wiki/Pig_Latin
   http://en.wikipedia.org/wiki/Apache_Hive
8. Hive - A petabyte scale data warehouse using Hadoop: infolab.stanford.edu/~ragho/hive-icde2010.pdf
Recommendation Engine
Simon Shim
Partial contents from K. Han’s presentation
Data Mining
• Mining patterns from Data
• Statistics?
• Machine learning?
Data Mining in Use
• The US Government uses Data Mining to track fraud
• A Supermarket becomes an information broker
• Basketball teams use it to track game strategy
• Recommend similar items
• Holding on to Good Customers
• Weeding out Bad Customers
Examples of data mining
Frequently bought together Movie recommendation
examples
Heart Monitoring
Genome Mining
Keyword search
Data Mining
• Frequent pattern mining
• Machine learning
– Supervised
– Unsupervised
• Recommendation system
• Graph mining
• Unstructured data
• Big data
• Stream mining
Frequent Pattern Mining
Process
Machine Learning
• Supervised
• Unsupervised (Clustering)
Binary Classification
Checking | Duration (years) | Savings ($k) | Current Loans | Loan Purpose | Risky?
Yes      | 1                | 10           | Yes           | TV           | 0
Yes      | 2                | 4            | No            | TV           | 1
No       | 5                | 75           | No            | Car          | 0
Yes      | 10               | 66           | No            | Car          | 1
Yes      | 5                | 83           | Yes           | Car          | 0
Yes      | 1                | 11           | No            | TV           | 0
Yes      | 4                | 99           | Yes           | Car          | 0
Decision Tree
Neural Network
• Perceptron, multi-layer NN
Support Vector Machine (SVM)
SVM Spam Filter
• Transmission
– IP address --167.12.24.555
– Sender URL -- spam.com
• Email header
– From --“spam@spam.com”
– To --“undisclosed”
• Email Body
– # of paragraphs
– # words
• Email structure
– # of attachments
– # of links
Regression
• Linear regression and non-linear regression
• Application
– Stock price prediction
– Credit scoring
– Employment forecast
Logistic regression
Supervised learning
Machine learning
Graph Analysis
• Friend Recommendation Drug discovery
Twitter analysis
• TwitterRank: PageRank approach
Facebook Graph Search
• Restaurants liked by my friend
Prediction
• Rating prediction
– Given how a user rated other items, predict the user's rating for a
given item
• Top-N recommendation
– Given the list of items liked by a user, recommend new items that the
user might like
Feedback data
• Explicit feedback
– Ratings, reviews
• Implicit feedback
– Purchase behavior: frequency, recency
– Browsing behavior: # of visits, time of visit, stay
duration, clicks
Data Analysis Example
• Carl Morris (Harvard statistics
professor) used Markov Chain
• Baseball: a discrete game
• Each state has four components: (out count, runner on
1st base, runner on 2nd base, runner on 3rd base)
• State (0, 0, 0, 0) means no outs and no runners on base
• With 3 out counts and 2 values per base, there are
3 x 2 x 2 x 2 = 24 states
• Each inning starts at (0,0,0,0) and ends at (3,0,0,0)
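The state count is easy to verify by enumeration (a sketch; bases modeled as empty/occupied flags, outs 0 through 2):

```python
from itertools import product

# (outs, runner on 1st, runner on 2nd, runner on 3rd): an inning is live
# with 0, 1, or 2 outs, and each base is either empty (0) or occupied (1).
states = list(product(range(3), (0, 1), (0, 1), (0, 1)))
num_states = len(states)   # 3 * 2 * 2 * 2 = 24
```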
MoneyBall
• Oakland Athletics
– Lowest team salary in 2002: $41 million
– Difficult to recruit good players
– Paul DePodesta: assistant to general manager Billy
Beane
• 1999 – 2003: advanced to post season 4 years
in a row
• Baseball management based on statistics and
scientific data
Case Study: Item Based
Recommendation
• Using meta data from item, compute similarity
between items
– Description, price, category
– Normalize these into a feature vector
– N-dimension
• Compute the distance between vectors
– Euclidean distance score
– Cosine similarity score
– Pearson correlation score
Architecture
Item Based Recommendation
• Collaborative Filtering
– Leverage users’ collective intelligence
– Similar users tend to like similar items
– Amazon’s product recommendation is a good
example
How to implement Collaborative
Filtering
• Construct co-occurrence matrix (item
similarity matrix)
– Increment S[i,j] and S[j,i] if item i and item j are
liked by the same user
– Repeat this for all users
• For item k, find the most co-occurred items
from matrix as recommendation
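The construction above can be sketched with nested dictionaries (the data shape, user → set of liked items, is an assumption for illustration):

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence(user_likes):
    # user_likes: {user: set of liked items}
    S = defaultdict(lambda: defaultdict(int))
    for items in user_likes.values():
        # Every pair of items liked by the same user co-occurs once:
        # increment S[i][j] and S[j][i], repeated for all users.
        for i, j in combinations(sorted(items), 2):
            S[i][j] += 1
            S[j][i] += 1
    return S

def recommend(item, S, top_n=5):
    # For item k, the most co-occurred items, best first.
    return sorted(S[item].items(), key=lambda kv: -kv[1])[:top_n]
```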
Item Based Similarity

       Out of Africa | Star Wars | Air Force One | Liar, Liar
John   4             | 4         | 5             | 1
Adam   1             | 1         | 2             | 5
Laura  ?             | 4         | 5             | 2
User Based Recommendation
• First group users into different clusters
– Represent users as feature vectors
– Information about users
• Geo-location, gender, age, …
– Items users liked (rated)
– K-nearest neighbors (KNN) is used
• From each cluster, find representative items
– Graph traversal
– Highest rated items
– Most liked items
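The neighbor-finding step above can be sketched with Euclidean distance on user feature vectors (plain Python; the vectors are illustrative, not from the slides):

```python
import math

# Toy feature vectors for users (e.g. their movie ratings).
users = {"John": [4, 4, 5, 1], "Adam": [1, 1, 2, 5], "Laura": [4, 4, 5, 2]}

def k_nearest(target, users, k):
    # Rank users by Euclidean distance to the target vector; closest first.
    def dist(vec):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(target, vec)))
    return sorted(users, key=lambda name: dist(users[name]))[:k]

print(k_nearest([4, 4, 5, 1], users, 2))  # ['John', 'Laura']
```

Once the nearest users are known, items they rated highly but the target user has not seen become the candidate recommendations.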
User-Based Similarity
• Users' movie ratings

           Out of Africa   Star Wars   Air Force One   Liar, Liar
John             4              4             5              1
Adam             1              1             2              5
Laura            ?              4             5              2
Challenges
• Cold start
– No information for new users/items
• Sparse data
– Most items have few or no ratings
• Scalability
– Big data means more computation
What is Mahout?
• Open source machine learning library
– Supports MapReduce
• Recommendation/collaborative filtering
• Classification: Supervised Learning
• Clustering: Unsupervised Learning
Personalized Recommendation
• Build a co-occurrence matrix S for items
• Build a preference vector P for the user
• Multiply them: R = S × P
• Sort the resulting vector R by score
Example
• N items and M users
• Create S (N × N)
• Create P (N × 1): P(i) = 1 for each item i liked by the user
• Compute S × P
• Sort the results by score
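The S × P scoring and sort can be sketched with plain lists (no Mahout; the 4-item matrix and preferences are illustrative toy data):

```python
def recommend(S, P, already_liked, top_n):
    # R[i] = sum_j S[i][j] * P[j]; drop items the user already liked.
    N = len(P)
    R = [sum(S[i][j] * P[j] for j in range(N)) for i in range(N)]
    candidates = [i for i in range(N) if i not in already_liked]
    return sorted(candidates, key=lambda i: -R[i])[:top_n]

# 4-item toy co-occurrence matrix and a user who liked items 0 and 2.
S = [[2, 1, 1, 0],
     [1, 2, 0, 1],
     [1, 0, 2, 1],
     [0, 1, 1, 2]]
P = [1, 0, 1, 0]
print(recommend(S, P, already_liked={0, 2}, top_n=2))  # [1, 3]
```

With 100,000 movies both S and the multiplication are far too large for one machine, which is why the slides pair this algorithm with MapReduce: each row-times-vector product is independent work.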
Example
• Co-occurrence matrix (N × N), preference vector (N × 1)
• With 100,000 movies, S is 100,000 × 100,000 and mostly empty;
multiplying S by the sparse preference vector scores every movie,
and sorting the scores yields one line of output per user:
Userid → (itemid, score), (itemid, score), …
(Slide figure: the matrix–vector multiplication and sort, values omitted.)
Similarity
• Co-occurrence
• Log likelihood
• Location based
• Gender, age
• Cosine similarity
• Euclidean distance
What is key?
• Understand the business domain
• Garbage in, garbage out
– Filtering, cleaning
• One size does not fit all
• Start simple and improve
• Automate experiments and
tweaks
Big data shim

  • 1. 7/16/2014 1 Map Reduce Introduction Simon Shim The problems… • Performing database operations in parallel is complicated, and scalability is challenging to implement correctly • How do we store 1,000x as much data? • How do we process data where we don’t know the schema in advance? What does it mean to be reliable? Ken Arnold, CORBA designer*: “Failure is the defining difference between distributed and local programming” *(Serious Über-hacker) What we need • An efficient way to decompose problems into parallel parts • A way to read and write data in parallel • A way to minimize bandwidth usage • A reliable way to get computation done
  • 2. 7/16/2014 2 A Radical Way Out… • Nodes talk to each other as little as possible – maybe never – “Shared nothing” architecture • Programmer should not explicitly be allowed to communicate between nodes • Data is spread throughout machines in advance, computation happens where it’s stored. Locality • Master program divides up tasks based on location of data: tries to have map tasks on same machine as physical file data, or at least same rack • Map task inputs are divided into 64—128 MB blocks: same size as filesystem chunks – Process components of a single file in parallel Fault Tolerance • Tasks designed for independence • Master detects worker failures • Master re-executes tasks that fail while in progress • Restarting one task does not require communication with other tasks • Data is replicated to increase availability, durability How MapReduce is Structured • Functional programming meets distributed computing • A batch data processing system • Factors out many reliability concerns from application logic
  • 3. 7/16/2014 3 Functional Programming Review • Functional operations do not modify data structures: They always create new ones • Original data still exists in unmodified form • Data flows are implicit in program design • Order of operations does not matter Functional Programming Review fun foo(l: int list) = sum(l) + mul(l) + length(l) Order of sum() and mul(), etc does not matter – they do not modify l “Updates” Don’t Modify Structures fun append(x, lst) = let lst' = reverse lst in reverse ( x :: lst' ) The append() function above reverses a list, adds a new element to the front, and returns all of that, reversed, which appends an item. But it never modifies lst! Functions Can Be Used As Arguments fun DoDouble(f, x) = f (f x) It does not matter what f does to its argument; DoDouble() will do it twice.
  • 4. 7/16/2014 4 Map Creates a new list by applying f to each element of the input list; returns output in order. f f f f f f Fold Moves across a list, applying f to each element plus an accumulator. f returns the next accumulator value, which is combined with the next element of the list f f f f f returned initial Example fun foo(l: int list) = sum(l) + mul(l) + length(l) How can we implement this? Example (Solved) fun foo(l: int list) = sum(l) + mul(l) + length(l) fun sum(lst) = foldl (fn (x,a)=>x+a) fun mul(lst) = foldl (fn (x,a)=>x*a) fun length(lst) = foldl (fn (x,a)=>1+a)
  • 5. 7/16/2014 5 Implicit Parallelism In map • In a purely functional setting, elements of a list being computed by map cannot see the effects of the computations on other elements • If order of application of f to elements in list is commutative, we can reorder or parallelize execution • This is the “secret” that MapReduce exploits Motivation: Large Scale Data Processing • Want to process lots of data ( > 1 TB) • Want to parallelize across hundreds/thousands of CPUs • … Want to make this easy MapReduce • Automatic parallelization & distribution • Fault-tolerant • Provides status and monitoring tools • Clean abstraction for programmers Programming Model • Borrows from functional programming • Users implement interface of two functions: – map (in_key, in_value) -> (out_key, intermediate_value) list – reduce (out_key, intermediate_value list) -> out_value list
  • 6. 7/16/2014 6 map (in_key, in_value) -> (out_key, intermediate_value) list map reduce • After the map phase is over, all the intermediate values for a given output key are combined together into a list • reduce() combines those intermediate values into one or more final values for that same output key • (in practice, usually only one final value per key) reduce reduce (intermediate_key, int_value list) -> (out_key, out_value) list returned initial Example: Filter Mapper let map(k, v) = if (isPrime(v)) then emit(k, v) (“foo”, 7)  (“foo”, 7) (“test”, 10)  (nothing)
  • 7. 7/16/2014 7 Example: Sum Reducer let reduce(k, vals) = sum = 0 foreach int v in vals: sum += v emit(k, sum) (“A”, [42, 100, 312])  (“A”, 454) (“B”, [12, 6, -2])  (“B”, 16) Data store 1 Data store n map (key 1, values...) (key 2, values...) (key 3, values...) map (key 1, values...) (key 2, values...) (key 3, values...) Input key*value pairs Input key*value pairs == Barrier == : Aggregates intermediate values by output key reduce reduce reduce key 1, intermediate values key 2, intermediate values key 3, intermediate values final key 1 values final key 2 values final key 3 values ... Parallelism • map() functions run in parallel, creating different intermediate values from different input data sets • reduce() functions also run in parallel, each working on a different output key • All values are processed independently • Bottleneck: reduce phase can’t start until map phase is completely finished. Example: Count word occurrences map(String input_key, String input_value): // input_key: document name // input_value: document contents for each word w in input_value: EmitIntermediate(w, "1"); reduce(String output_key, Iterator intermediate_values): // output_key: a word // output_values: a list of counts int result = 0; for each v in intermediate_values: result += ParseInt(v); Emit(AsString(result));
  • 8. 7/16/2014 8 Example vs. Actual Source Code • Example is written in pseudo-code • Actual implementation is in C++, using a MapReduce library • Bindings for Python and Java exist via interfaces • True code is somewhat more involved (defines how the input key/values are divided up and accessed, etc.) Locality • Master program divvies up tasks based on location of data: tries to have map() tasks on same machine as physical file data, or at least same rack • map() task inputs are divided into 64 MB blocks: same size as Google File System chunks Fault Tolerance • Master detects worker failures – Re-executes completed & in-progress map() tasks – Re-executes in-progress reduce() tasks • Master notices particular input key/values cause crashes in map(), and skips those values on re-execution. – Effect: Can work around bugs in third-party libraries! Optimizations • No reduce can start until map is complete: – A single slow disk controller can rate-limit the whole process • Master redundantly executes “slow-moving” map tasks; uses results of first copy to finish
  • 9. 7/16/2014 9 Combining Phase • Run on mapper nodes after map phase • “Mini-reduce,” only on local map output • Used to save bandwidth before sending data to full reducer • Reducer can be combiner if commutative & associative Combiner, graphically Combiner replaces with: Map output To reducer On one mapper machine: To reducer Word Count Example redux map(String input_key, String input_value): // input_key: document name // input_value: document contents for each word w in input_value: EmitIntermediate(w, 1); reduce(String output_key, Iterator<int> intermediate_values): // output_key: a word // output_values: a list of counts int result = 0; for each v in intermediate_values: result += v; Emit(result); wordcount public void map(WritableComparable key, Writable value, OutputCollector output, Reporter reporter) throws IOException { String line = ((UTF8)value).toString(); StringTokenizer itr = new StringTokenizer(line); while (itr.hasMoreTokens()) { word.set(itr.nextToken()); output.collect(word, one);} } public void reduce(WritableComparable key, Iterator values, OutputCollector output,Reporter reporter) throws IOException { int sum = 0; while (values.hasNext()) { sum += ((IntWritable) values.next()).get(); } output.collect(key, new IntWritable(sum)); }
  • 10. 7/16/2014 10 Distributed “Tail Recursion” • MapReduce doesn’t make infinite scalability automatic. • Is word count infinitely scalable? Why (not)? What About This? UniqueValuesReducer(K key, iter<V> values) { Set<V> seen = new HashSet<V>(); for (V val : values) { if (!seen.contains(val)) { seen.put(val); emit (key, val); } } } A Scalable Implementation KeyifyMapper(K key, V val) { emit ((key, val), 1); } IgnoreValuesCombiner(K key, iter<V> values) { emit (key, 1); } UnkeyifyReducer(K key, iter<V> values) { let (k', v') = key; emit (k', v'); } MapReduce Conclusions • MapReduce has proven to be a useful abstraction • Greatly simplifies large-scale computations at Google • Functional programming paradigm can be applied to large-scale applications • Fun to use: focus on problem, let library deal w/ messy details
  • 11. 7/16/2014 11 Hadoop Introduction MapReduce/Hadoop Google calls it: Hadoop equivalent: MapReduce Hadoop GFS HDFS Bigtable HBase Chubby Zookeeper Some MapReduce Terminology • Job – A “full program” - an execution of a Mapper and Reducer across a data set • Task – An execution of a Mapper or a Reducer on a slice of data – a.k.a. Task-In-Progress (TIP) • Task Attempt – A particular instance of an attempt to execute a task on a machine MapReduce: High Level JobTracker MapReduce job submitted by client computer Master node TaskTracker Slave node Task instance TaskTracker Slave node Task instance TaskTracker Slave node Task instance
  • 12. 7/16/2014 12 Job Distribution • MapReduce programs are contained in a Java “jar” file + an XML file containing serialized program configuration options • Running a MapReduce job places these files into the HDFS and notifies TaskTrackers where to retrieve the relevant program code • … Where’s the data distribution? Data Distribution • Implicit in design of MapReduce! – All mappers are equivalent; so map whatever data is local to a particular node in HDFS • If lots of data does happen to pile up on the same node, nearby nodes will map instead – Data transfer is handled implicitly by HDFS What Happens In MapReduce? Depth First Job Launch Process: Client • Client program creates a JobConf – Identify classes implementing Mapper and Reducer interfaces • JobConf.setMapperClass(), setReducerClass() – Specify inputs, outputs • FileInputFormat.addInputPath(), • FileOutputFormat.setOutputPath() – Optionally, other options too: • JobConf.setNumReduceTasks(), JobConf.setOutputFormat()…
  • 13. 7/16/2014 13 Job Launch Process: JobClient • Pass JobConf to JobClient.runJob() or submitJob() – runJob() blocks, submitJob() does not • JobClient: – Determines proper division of input into InputSplits – Sends job data to master JobTracker server Job Launch Process: JobTracker • JobTracker: – Inserts jar and JobConf (serialized to XML) in shared location – Posts a JobInProgress to its run queue Job Launch Process: TaskTracker • TaskTrackers running on slave nodes periodically query JobTracker for work • Retrieve job-specific jar and config • Launch task in separate instance of Java Job Launch Process: TaskRunner • TaskRunner, MapTaskRunner, MapRunner work in a daisy-chain to launch your Mapper – Task knows ahead of time which InputSplits it should be mapping – Calls Mapper once for each record retrieved from the InputSplit • Running the Reducer is much the same
  • 14. 7/16/2014 14 Mapper • void map(K1 key, V1 value, OutputCollector<K2, V2> output, Reporter reporter) • K types implement WritableComparable • V types implement Writable What is Writable? • Hadoop defines its own classes for strings (Text), integers (IntWritable), etc. • All values are instances of Writable • All keys are instances of WritableComparable Getting Data To The Mapper [Diagram: Input files → InputSplits → RecordReaders → Mappers (intermediates), all coordinated by the InputFormat] Data: a stream of keys and values. [Worked example: input records <0>Hi how are you, <100>I am good, <0>Hello Hello how are you, <105>Not so good → Map emits intermediate pairs Hi 1, how 1, are 1, You 1, Hello 1, Hello 1, how 1, are 1, You 1 → sorted and merged into are [1 1], Hello [1 1], Hi [1], how [1 1], you [1 1] → Reduce output: are 2, Hello 2, Hi 1, how 2, you 2]
  • 15. 7/16/2014 15 wordcount public void map(WritableComparable key, Writable value, OutputCollector output, Reporter reporter) throws IOException { String line = ((UTF8)value).toString(); StringTokenizer itr = new StringTokenizer(line); while (itr.hasMoreTokens()) { word.set(itr.nextToken()); output.collect(word, one); } } public void reduce(WritableComparable key, Iterator values, OutputCollector output, Reporter reporter) throws IOException { int sum = 0; while (values.hasNext()) { sum += ((IntWritable) values.next()).get(); } output.collect(key, new IntWritable(sum)); } • public static void main(String[] args) throws Exception { Configuration conf = new Configuration(); System.out.println("Started Job"); Job job = new Job(conf, "wordcount"); job.setJarByClass(WordCount.class); job.setMapperClass(Map.class); job.setReducerClass(Reduce.class); job.setOutputKeyClass(Text.class); job.setOutputValueClass(IntWritable.class); job.setInputFormatClass(TextInputFormat.class); job.setOutputFormatClass(TextOutputFormat.class); FileInputFormat.addInputPath(job, new Path(args[0])); FileOutputFormat.setOutputPath(job, new Path(args[1])); System.exit(job.waitForCompletion(true) ? 0 : 1); } Reading Data • Data sets are specified by InputFormats – Defines input data (e.g., a directory) – Identifies partitions of the data that form an InputSplit – Factory for RecordReader objects to extract (k, v) records from the input source FileInputFormat and Friends • TextInputFormat – Treats each ‘\n’-terminated line of a file as a value • KeyValueTextInputFormat – Maps ‘\n’-terminated text lines of “k SEP v” • SequenceFileInputFormat – Binary file of (k, v) pairs with some add’l metadata • SequenceFileAsTextInputFormat – Same, but maps (k.toString(), v.toString())
  • 16. 7/16/2014 16 Filtering File Inputs • FileInputFormat will read all files out of a specified directory and send them to the mapper • Delegates filtering this file list to a method subclasses may override – e.g., Create your own “xyzFileInputFormat” to read *.xyz from directory list Record Readers • Each InputFormat provides its own RecordReader implementation – Provides (unused?) capability multiplexing • LineRecordReader – Reads a line from a text file • KeyValueRecordReader – Used by KeyValueTextInputFormat Input Split Size • FileInputFormat will divide large files into chunks – Exact size controlled by mapred.min.split.size • RecordReaders receive file, offset, and length of chunk • Custom InputFormat implementations may override split size – e.g., “NeverChunkFile” Sending Data To Reducers • Map function receives OutputCollector object – OutputCollector.collect() takes (k, v) elements • Any (WritableComparable, Writable) can be used • By default, mapper output type assumed to be same as reducer output type
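The chunking that FileInputFormat performs can be sketched as dividing a file's byte range into fixed-size (path, offset, length) triples, exactly the three values each RecordReader receives. A minimal Python sketch (the function name and 64 MB default are illustrative, matching the block sizes mentioned earlier, not Hadoop's actual implementation):

```python
def compute_splits(path, file_len, split_size=64 * 1024 * 1024):
    # Divide a file into (path, offset, length) chunks, the way
    # FileInputFormat hands InputSplits to mappers; the last split
    # may be shorter than split_size.
    splits = []
    offset = 0
    while offset < file_len:
        length = min(split_size, file_len - offset)
        splits.append((path, offset, length))
        offset += length
    return splits

# A 150 MB file yields three splits: 64 MB, 64 MB, and a 22 MB tail.
print(compute_splits("/data/big.log", 150 * 1024 * 1024))
```

A custom InputFormat like the "NeverChunkFile" example would simply return one split covering the whole file.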
  • 17. 7/16/2014 17 WritableComparator • Compares WritableComparable data – Will call WritableComparable.compare() – Can provide fast path for serialized data • JobConf.setOutputValueGroupingComparator() Partition And Shuffle Mapper (intermediates) Mapper (intermediates) Mapper (intermediates) Mapper (intermediates) Reducer Reducer Reducer (intermediates) (intermediates) (intermediates) Partitioner Partitioner Partitioner Partitioner shuffling Partitioner • int getPartition(key, val, numPartitions) – Outputs the partition number for a given key – One partition == values sent to one Reduce task • HashPartitioner used by default – Uses key.hashCode() to return partition num • JobConf sets Partitioner implementation Reduction • reduce( K2 key, Iterator<V2> values, OutputCollector<K3, V3> output, Reporter reporter) • Keys & values sent to one partition all go to the same reduce task • Calls are sorted by key – “earlier” keys are reduced and output before “later” keys
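The getPartition contract above can be sketched in a few lines of Python. The rolling hash below is a deterministic stand-in for Java's key.hashCode() (Python's built-in hash() is salted per process); the essential property is the one the slide states: equal keys always land in the same reduce partition.

```python
def stable_hash(key):
    # Java-style 31x rolling hash, masked to stay non-negative,
    # as a deterministic stand-in for key.hashCode().
    h = 0
    for ch in str(key):
        h = (31 * h + ord(ch)) & 0x7FFFFFFF
    return h

def get_partition(key, num_partitions):
    # HashPartitioner's rule: one partition == one reduce task,
    # and equal keys always co-locate on the same reducer.
    return stable_hash(key) % num_partitions

keys = ["apple", "banana", "apple", "cherry"]
parts = [get_partition(k, 4) for k in keys]
print(parts)
```

Overriding this function (e.g., partitioning by a key prefix) is how custom Partitioners control which reducer sees which keys.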
  • 18. 7/16/2014 18 Finally: Writing The Output [Diagram: Reducers → RecordWriters → output files, coordinated by the OutputFormat] OutputFormat • Analogous to InputFormat • TextOutputFormat – Writes “key \t value \n” strings to output file • SequenceFileOutputFormat – Uses a binary format to pack (k, v) pairs • NullOutputFormat – Discards output Pig Latin Reference: Programming Pig by Alan Gates 72 Pig  Too many lines of Hadoop code even for simple logic  How many lines do you have for word count?  High level dataflow language (Pig Latin)  Much simpler than Java: 10 lines of Pig Latin vs. 200 lines in Java  Simplifies the data processing  Framework for analyzing large unstructured and semi-structured data on top of Hadoop. – The Pig Engine parses and compiles Pig Latin scripts into MapReduce jobs run on top of Hadoop. – Pig Latin is a SQL-like dataflow language; the high-level language interface for Hadoop.
  • 19. 7/16/2014 19 73 Pig runs over Hadoop Who uses Pig? • 70% of production jobs at Yahoo (10ks per day) • Twitter, LinkedIn, eBay, AOL, … • Used to – Process web logs – Build user behavior models – Process images – Build maps of the web – Do research on large data sets Word Count using MapReduce Word Count using Pig Lines = LOAD 'input/hadoop.log' AS (line: chararray); Words = FOREACH Lines GENERATE FLATTEN(TOKENIZE(line)) AS word; Groups = GROUP Words BY word; Counts = FOREACH Groups GENERATE group, COUNT(Words) AS cnt; Results = ORDER Counts BY cnt DESC; Top5 = LIMIT Results 5; STORE Top5 INTO 'output/top5words';
  • 20. 7/16/2014 20 77 Pig Join  Join in Pig – Various algorithms are already available. – Some of them are generic to support multi-way join – No need to consider integration into the Select-Project-Join-Aggregate (SPJA) workflow. A = LOAD 'input/join/A'; B = LOAD 'input/join/B'; C = JOIN A BY $0, B BY $1; DUMP C; Join 79 Motivation by Example  Suppose we have user data in one file, website data in another file.  We need to find the top 5 most visited pages by users aged 18-25 80 In Pig Latin
  • 21. 7/16/2014 21 81 Map Data  How to map the data to records – By default, one line → one record – User can customize the loading process  How to identify attributes and map them to the schema – Delimiter to separate different attributes – By default, delimiter is tab. Customizable. Pig Data Types • Scalar Types: – int, long, float, double, boolean, null, chararray, bytearray • Complex Types: fields, tuples, bags, relations – A Field is a piece of data – A Tuple is an ordered set of fields – A Bag is a collection of tuples – A Relation is a bag • Samples: – Tuple  Row in Database • (0002576169, Tom, 20, 4.0) – Bag  Table or View in Database {(0002576169, Tom, 20, 4.0), (0002576170, Mike, 20, 3.6), (0002576171, Lucy, 19, 4.0), …} Pig Operations • Loading data – LOAD loads input data – Lines = LOAD 'input/access.log' AS (line: chararray); • Projection – FOREACH … GENERATE … (similar to SELECT) – takes a set of expressions and applies them to every record. • Grouping – GROUP collects together records with the same key • Dump/Store – DUMP displays results to screen, STORE saves results to file system • Aggregation – AVG, COUNT, MAX, MIN, SUM Pig Operations • Pig Data Loader – PigStorage: loads/stores relations using field-delimited text format – TextLoader: loads relations from a plain-text format – BinStorage: loads/stores relations from or to binary files – HBaseStorage: loads/stores from HBase – PigDump: stores relations by writing the toString() representation of tuples, one per line students = load 'student.txt' using PigStorage(',') as (studentid: int, name:chararray, age:int, gpa:double); (John, 18, 4.0F) (Mary, 19, 3.8F) (Bill, 20, 3.9F)
  • 22. 7/16/2014 22 85 Schema  User can optionally define the schema of the input data  Once the schema of the source data is given, the schema of the intermediate relation will be induced by Pig  Schema 1 A = LOAD 'input/A' as (name:chararray, age:int); B = FILTER A BY age == 20;  Schema 2 A = LOAD 'input/A' as (name:chararray, age:chararray); B = FILTER A BY age != '20'; 86 Data Types Pig Operations - Foreach • Foreach ... Generate – The Foreach … Generate statement iterates over the members of a bag – The result of a Foreach is another bag – Elements are named as in the input bag studentid = FOREACH students GENERATE studentid, name; Pig Operations – Positional Reference • Fields are referred to by positional notation or by name (alias). First Field Second Field Third Field Data Type chararray int float Position notation $0 $1 $2 Name (variable) name age gpa Field value John 18 4.0 students = LOAD 'student.txt' USING PigStorage() AS (name:chararray, age:int, gpa:float); DUMP students; (John,18,4.0F) (Mary,19,3.8F) (Bill,20,3.9F) studentname = FOREACH students GENERATE $0 AS studentname;
  • 23. 7/16/2014 23 Pig Operations - Group • Groups the data in one or more relations – Collect records with the same key into a bag. daily = load 'NYSE_daily' as (exchange, stock); grpd = group daily by stock; store grpd into 'by_group'; Pig Operations – Dump&Store • DUMP Operator: – display output results, will always trigger execution • STORE Operator: – Pig will parse entire script prior to writing for efficiency purposes daily = load 'NYSE_daily' as (exchange, stock); grpd = group daily by stock; store grpd into 'by_group' using PigStorage(','); Pig Operations - Count • Use the COUNT function to compute the number of elements in a bag. • COUNT requires a preceding GROUP ALL statement for global counts and a GROUP BY statement for group counts. X = FOREACH B GENERATE COUNT(A); Pig Operations - Order • Sorts a relation based on one or more fields • In Pig, relations are unordered. If you order relation A to produce relation X, relations A and X still contain the same elements. student = ORDER students BY gpa DESC;
  • 24. 7/16/2014 24 93 Operators  Diagnostic Operators – Show the status/metadata of the relations – Used for debugging – Will not be integrated into execution plan – DESCRIBE, EXPLAIN, ILLUSTRATE. 94 Functions  Built-in Functions – Hard-coded routines offered by Pig.  User Defined Function (UDF) – Supports customized functionalities – Piggy Bank, a warehouse for UDFs How to run Pig Latin scripts • Local mode – Local host and local file system is used – Neither Hadoop nor HDFS is required – Useful for prototyping and debugging • MapReduce mode – Run on a Hadoop cluster and HDFS • Batch mode - run a script directly – pig -x local my_pig_script.pig – pig -x mapreduce my_pig_script.pig • Interactive mode - use the Pig shell to run scripts – grunt> Lines = LOAD '/input/input.txt' AS (line:chararray); – grunt> Unique = DISTINCT Lines; – grunt> DUMP Unique; 96 Pig Execution Modes • Local mode – Launch single JVM – Access local file system – No MR job running • Hadoop mode – Execute a sequence of MR jobs – Pig interacts with Hadoop master node
  • 25. 7/16/2014 25 97 Compilation Hive • Hive: data warehousing application in Hadoop – Query language is HQL, variant of SQL – Tables stored on HDFS as flat files – Developed by Facebook, now open source Programming Hive by Edward Capriolo et al. https://cwiki.apache.org/confluence/display/Hive/Tutorial HIVE Architecture 7/16/2014 99 HIVE - A warehouse solution over Map Reduce Framework Hive Database Tables ○ Analogous to tables in relational database ○ Each table has a corresponding HDFS dir ○ Data is serialized and stored in files within dir ○ Support external tables on data stored in HDFS, NFS or local directory. Partitions ○ A table can have 1 or more partitions (1-level) which determine the distribution of data within subdirectories of table directory. 100
  • 26. 7/16/2014 26 HIVE Database cont. e.g.: Table T lives under /wh/T and is partitioned on columns ds + ctry. For ds=20140101, ctry=US the data is stored within dir /wh/T/ds=20140101/ctry=US – Buckets • Data in each partition are divided into buckets based on the hash of a column in the table. Each bucket is stored as a file in the partition directory. 101 HiveQL • Supports a SQL-like query language called HiveQL for select, join, aggregate, union all and sub-query in the from clause • Supports DDL statements such as CREATE TABLE with serialization format, partitioning and bucketing columns • Commands to load data from external sources and INSERT into HIVE tables • Does NOT support UPDATE and DELETE 102 Data Model • Tables – Typed columns (int, float, string, boolean) – Also, list: map (for JSON-like data) • Partitions – For example, range-partition tables by date • Buckets – Hash partitions within ranges (useful for sampling, join optimization) Source: cc-licensed slide by Cloudera Examples – DDL Operations CREATE TABLE sample (foo INT, bar STRING) PARTITIONED BY (ds STRING); DESCRIBE sample; ALTER TABLE sample ADD COLUMNS (new_col INT); DROP TABLE sample;
  • 27. 7/16/2014 27 CREATE TABLE page_view(viewTime INT, userid BIGINT, page_url STRING, referrer_url STRING, ip STRING COMMENT 'IP Address of the User') COMMENT 'This is the page view table' PARTITIONED BY(country STRING) STORED AS SEQUENCEFILE; Examples – DML Operations LOAD DATA LOCAL INPATH './sample.txt' OVERWRITE INTO TABLE sample PARTITION (ds='2012-02-24'); LOAD DATA INPATH '/user/shim/hive/sample.txt' OVERWRITE INTO TABLE sample PARTITION (ds='2012-02-24'); SELECTS and FILTERS SELECT foo FROM sample WHERE ds='2012-02-24'; INSERT OVERWRITE DIRECTORY '/tmp/hdfs_out' SELECT * FROM sample WHERE ds='2012-02-24'; INSERT OVERWRITE LOCAL DIRECTORY '/tmp/hive-sample-out' SELECT * FROM sample; Aggregations and Groups SELECT MAX(foo) FROM sample; SELECT ds, COUNT(*), SUM(foo) FROM sample GROUP BY ds;
  • 28. 7/16/2014 28 Join SELECT * FROM customer c JOIN order_cust o ON (c.id=o.cus_id); SELECT c.id,c.name,c.address,ce.exp FROM customer c JOIN (SELECT cus_id,sum(price) AS exp FROM order_cust GROUP BY cus_id) ce ON (c.id=ce.cus_id); CREATE TABLE customer (id INT,name STRING,address STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '#'; CREATE TABLE order_cust (id INT,cus_id INT,prod_id INT,price INT) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'; Hive: Example • Relational join on two tables: – Table of word counts from Shakespeare and Bible collection SELECT s.word, s.freq, k.freq FROM shakespeare s JOIN bible k ON (s.word = k.word) WHERE s.freq >= 1 AND k.freq >= 1 ORDER BY s.freq DESC LIMIT 10; the 25848 62394 I 23031 8854 and 19671 38985 to 18038 13526 of 16700 34654 a 14170 8057 you 12702 2720 my 11297 4135 Hive: Behind the Scenes SELECT s.word, s.freq, k.freq FROM shakespeare s JOIN bible k ON (s.word = k.word) WHERE s.freq >= 1 AND k.freq >= 1 ORDER BY s.freq DESC LIMIT 10; (TOK_QUERY (TOK_FROM (TOK_JOIN (TOK_TABREF shakespeare s) (TOK_TABREF bible k) (= (. (TOK_TABLE_OR_COL s) word) (. (TOK_TABLE_OR_COL k) word)))) (TOK_INSERT (TOK_DESTINATION (TOK_DIR TOK_TMP_FILE)) (TOK_SELECT (TOK_SELEXPR (. (TOK_TABLE_OR_COL s) word)) (TOK_SELEXPR (. (TOK_TABLE_OR_COL s) freq)) (TOK_SELEXPR (. (TOK_TABLE_OR_COL k) freq))) (TOK_WHERE (AND (>= (. (TOK_TABLE_OR_COL s) freq) 1) (>= (. (TOK_TABLE_OR_COL k) freq) 1))) (TOK_ORDERBY (TOK_TABSORTCOLNAMEDESC (. (TOK_TABLE_OR_COL s) freq))) (TOK_LIMIT 10))) (one or more MapReduce jobs) (Abstract Syntax Tree) References: 1. http://pig.apache.org (Pig official site) 2. http://hive.apache.org (Hive official site) 3. Pig Docs: http://pig.apache.org/docs/r0.9.0 4. Hive Docs: https://cwiki.apache.org/confluence/display/Hive/Home#Home-UserDocumentation 5. 
Hive Tutorial: https://cwiki.apache.org/confluence/display/Hive/Tutorial 6. Pig Papers: http://wiki.apache.org/pig/PigTalksPapers 7. http://en.wikipedia.org/wiki/Pig_Latin http://en.wikipedia.org/wiki/Apache_Hive 8. Hive – A petabyte scale data warehouse using Hadoop: infolab.stanford.edu/~ragho/hive-icde2010.pdf
  • 29. 7/16/2014 29 Recommendation Engine Simon Shim Partial contents from K. Han’s presentation Data Mining • Mining patterns from Data • Statistics? • Machine learning? 115 Data Mining in Use • The US Government uses Data Mining to track fraud • A Supermarket becomes an information broker • Basketball teams use it to track game strategy • Recommend similar items • Holding on to Good Customers • Weeding out Bad Customers Examples of data mining Frequently bought together Movie recommendation
  • 30. 7/16/2014 30 examples Heart Monitoring Genome Mining Keyword search Data Mining • Frequent pattern mining • Machine learning – Supervised – Unsupervised • Recommendation system • Graph mining • Unstructured data • Big data • Stream mining Frequent Pattern Mining ? Process
  • 31. 7/16/2014 31 Machine Learning • Supervised • Unsupervised (Clustering) Binary Classification (sample loan data): columns are Checking, Duration (years), Savings ($k), Current Loans, Loan Purpose, Risky?. Rows: Yes, 1, 10, Yes, TV, 0; Yes, 2, 4, No, TV, 1; No, 5, 75, No, Car, 0; Yes, 10, 66, No, Car, 1; Yes, 5, 83, Yes, Car, 0; Yes, 1, 11, No, TV, 0; Yes, 4, 99, Yes, Car, 0. Decision Tree Neural Network • Perceptron, multi-layer NN
  • 32. 7/16/2014 32 Support Vector Machine (SVM) SVM Spam Filter • Transmission – IP address --167.12.24.555 – Sender URL -- spam.com • Email header – From --“[email protected]” – To --“undisclosed” • Email Body – # of paragraphs – # words • Email structure – # of attachments – # of links Regression • Linear regression Non-linear regression • Application – Stock price prediction – Credit scoring – Employment forecast Logistic regression
  • 33. 7/16/2014 33 Supervised learning machine learning Graph Analysis • Friend Recommendation Drug discovery Twitter analysis • TwitterRank: PageRank approach
  • 34. Facebook Graph Search • Restaurants liked by my friends Prediction • Rating prediction – Given how a user rated other items, predict the user’s rating for a given item • Top-N Recommendation – Given the list of items liked by a user, recommend new items that the user might like Feedback data • Explicit feedback – Ratings, reviews • Implicit feedback – Purchase behavior: frequency, recency – Browsing behavior: # of visits, time of visit, stay duration, clicks Data Analysis Example • Carl Morris (Harvard statistics professor) used a Markov chain • Baseball: a discrete game • Each state has four components • State (0, 0, 0, 0): (out count, runner on 1st base, runner on 2nd base, runner on 3rd base) • Then there are 3x2x2x2 = 24 states • Each inning starts as (0,0,0,0) and ends as (3,0,0,0)
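The 3x2x2x2 = 24 state count above can be verified by direct enumeration; a short Python sketch:

```python
from itertools import product

# A state is (outs, runner on 1st, runner on 2nd, runner on 3rd);
# outs run 0-2 while the inning is live, each base is empty (0) or
# occupied (1).
states = list(product(range(3), [0, 1], [0, 1], [0, 1]))
print(len(states))  # 3 x 2 x 2 x 2 = 24
# The inning starts in state (0, 0, 0, 0); the third out moves the
# Markov chain to the absorbing end state (3, 0, 0, 0), which lies
# outside these 24 live states.
```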
  • 35. NERV Moneyball • Oakland Athletics – Lowest team salary in 2002: $41 million – Difficult to recruit good players – Paul DePodesta: assistant to general manager Billy Beane • 1999 – 2003: advanced to the postseason 4 years in a row • Baseball management based on statistics and scientific data Case Study: Item Based Recommendation • Using metadata from items, compute similarity between items – Description, price, category – Normalize these into a feature vector – N-dimensional • Compute the distance between vectors – Euclidean distance score – Cosine similarity score – Pearson correlation score
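The three scores listed above can each be computed directly from a pair of feature vectors. A minimal Python sketch (plain lists, no library; the sample vectors are illustrative, echoing the movie-rating rows shown later):

```python
import math

def euclidean_score(a, b):
    # Closer vectors -> higher score; 1 / (1 + distance) maps
    # distances in [0, inf) to scores in (0, 1].
    dist = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return 1.0 / (1.0 + dist)

def cosine_similarity(a, b):
    # Angle between the vectors; 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def pearson_correlation(a, b):
    # Cosine similarity of the mean-centered vectors; insensitive
    # to each user's rating offset.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

item_a = [4.0, 4.0, 5.0, 1.0]  # e.g. one item's ratings across users
item_b = [1.0, 1.0, 2.0, 5.0]
print(cosine_similarity(item_a, item_b))
```

Which score to use is a modeling choice: Euclidean distance penalizes magnitude differences, cosine ignores them, and Pearson additionally removes per-vector bias.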
  • 36. Architecture Item Based Recommendation • Collaborative Filtering – Leverage users’ collective intelligence – Similar users tend to like similar items – Amazon’s product recommendation is a good example How to implement Collaborative Filtering • Construct co-occurrence matrix (item similarity matrix) – Increment S[i,j] and S[j,i] if item i and item j are liked by the same user – Repeat this for all users • For item k, find the most co-occurred items in the matrix as recommendations Item Based Similarity (user ratings): Out of Africa | Star Wars | Air Force One | Liar, Liar – John: 4 | 4 | 5 | 1 – Adam: 1 | 1 | 2 | 5 – Laura: ? | 4 | 5 | 2
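The two steps above (build S from per-user likes, then rank by co-occurrence) fit in a few lines of Python; a minimal sketch with hypothetical function names and made-up data, not a production recommender:

```python
from collections import defaultdict
from itertools import combinations

def build_cooccurrence(users_items):
    # S[i][j] counts how many users liked both item i and item j;
    # incrementing both S[i][j] and S[j][i] keeps the matrix symmetric.
    S = defaultdict(lambda: defaultdict(int))
    for items in users_items:
        for i, j in combinations(sorted(set(items)), 2):
            S[i][j] += 1
            S[j][i] += 1
    return S

def recommend(S, item, top_n=2):
    # For a given item, the most co-liked items, best first.
    return sorted(S[item].items(), key=lambda kv: -kv[1])[:top_n]

likes = [["star_wars", "air_force_one"],
         ["star_wars", "air_force_one", "out_of_africa"],
         ["liar_liar", "out_of_africa"]]
print(recommend(build_cooccurrence(likes), "star_wars"))
```

With these three users, "air_force_one" is co-liked with "star_wars" twice and ranks first; at scale the same pairwise counting is exactly what the MapReduce jobs described earlier distribute.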
  • 37. User Based Recommendation • First group users into different clusters – Represent users as feature vectors – Information about users • Geo-location, gender, age, … – Items users liked (rated) – K-nearest neighbors (KNN) is used • From each cluster, find representative items – Graph traversal – Highest rated items – Most liked items User Based Similarity • User’s movie ratings: Out of Africa | Star Wars | Air Force One | Liar, Liar – John: 4 | 4 | 5 | 1 – Adam: 1 | 1 | 2 | 5 – Laura: ? | 4 | 5 | 2 Latent Factors Challenges • Cold start – For new users/items, no information • Sparse Data – Most item ratings and reviews are missing • Scalability Issue – Big data means more computation
  • 38. What is Mahout? • Open source machine learning library – Supports MapReduce • Recommendation/collaborative filtering • Classification: Supervised Learning • Clustering: Unsupervised Learning Personalized Recommendation • Build co-occurrence (S) matrix for items • Build a preference vector (P) for the user • Multiply both matrices: R = S x P • Sort the final vector R Example • N items and M users • Create S (N x N) • Create P (N x 1) (all items P(i) liked by a user) • S x P • Sort results by the score Example [Diagram: an N x N co-occurrence matrix over 100,000 movies multiplied by the user’s preference vector; the resulting scores are sorted to yield, per userid, a ranked list of (itemid, score) pairs]
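The R = S x P step can be sketched on a tiny dense example. This is a local Python illustration of the idea, not Mahout's API; the 4x4 matrix and preference vector are made up, and already-preferred items are filtered out of the result:

```python
def recommend_scores(S, P):
    # R = S x P: score item i by summing its co-occurrence counts
    # with every item the user already prefers, then sort descending,
    # keeping only items the user has not yet marked as preferred.
    n = len(P)
    R = [sum(S[i][j] * P[j] for j in range(n)) for i in range(n)]
    ranked = sorted(range(n), key=lambda i: -R[i])
    return [(i, R[i]) for i in ranked if P[i] == 0 and R[i] > 0]

# Tiny 4-item example: the user prefers items 1 and 3.
S = [[0, 2, 1, 0],
     [2, 0, 3, 1],
     [1, 3, 0, 2],
     [0, 1, 2, 0]]
P = [0, 1, 0, 1]
print(recommend_scores(S, P))
```

Item 2 wins (score 3 + 2 = 5) because it co-occurs strongly with both preferred items; sorting that score vector is the final "Sort results by the score" step on the slide.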
  • 39. 7/16/2014 39 Similarity • Co-occurrence • Log likelihood • Location based • Gender, age • Cosine similarity • Euclidean distance What is key? • Understand business domain • Garbage in, garbage out – Filtering, cleaning • One size does not fit all • Start with simple way and improve • Create automation for experiments and tweaks