Lecture 2 – MapReduce: Theory and Implementation CSE 490h – Introduction to Distributed Computing, Spring 2007 Except as otherwise noted, the content of this presentation is licensed under the Creative Commons Attribution 2.5 License.
Outline Lisp/ML map/fold review MapReduce overview
Functional Programming Review Functional operations do not modify data structures: They always create new ones  Original data still exists in unmodified form Data flows are implicit in program design Order of operations does not matter
Functional Programming Review fun foo(l: int list) = sum(l) + mul(l) + length(l) The order in which sum(), mul(), etc. are evaluated does not matter – they do not modify l
Functional Updates Do Not Modify Structures fun append(x, lst) = let val lst' = rev lst in rev (x :: lst') end The append() function above reverses the list, conses the new element onto the front, and reverses the result again – which effectively appends the item to the end. But it never modifies lst!
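For example, a quick check (the value bindings below are our own illustration, not from the slides):

  val xs = [1, 2, 3]
  val ys = append (4, xs)    (* ys = [1, 2, 3, 4] *)
  (* xs is still [1, 2, 3] – the original list was never modified *)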
Functions Can Be Used As Arguments fun DoDouble(f, x) = f (f x) It does not matter what f does to its argument; DoDouble() will do it twice. What is the type of this function?
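As a hint toward the answer, here is how it types and runs in an SML session (the example values are our own):

  (* DoDouble : ('a -> 'a) * 'a -> 'a, as inferred by SML *)
  val seven = DoDouble (fn n => n + 1, 5)    (* = 7 *)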
Map map f lst: ('a -> 'b) -> ('a list) -> ('b list) Creates a new list by applying f to each element of the input list; returns output in order.
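For example (our own one-liner):

  map (fn x => x * 2) [1, 2, 3]    (* = [2, 4, 6] *)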
Fold fold f x0 lst: ('a * 'b -> 'b) -> 'b -> ('a list) -> 'b Moves across a list, applying f to each element plus an accumulator (x0 is the initial accumulator value). f returns the next accumulator value, which is combined with the next element of the list.
fold left vs. fold right Order of list elements can be significant Fold left moves left-to-right across the list Fold right moves from right-to-left SML Implementation:
  fun foldl f a []      = a
    | foldl f a (x::xs) = foldl f (f(x, a)) xs
  fun foldr f a []      = a
    | foldr f a (x::xs) = f(x, (foldr f a xs))
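To see the difference, fold with a non-commutative operation such as cons (our own example, using the definitions above):

  val r1 = foldl (op ::) [] [1, 2, 3]    (* = [3, 2, 1] – the list comes out reversed *)
  val r2 = foldr (op ::) [] [1, 2, 3]    (* = [1, 2, 3] – the list is rebuilt as-is *)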
Example fun foo(l: int list) = sum(l) + mul(l) + length(l) How can we implement this?
Example (Solved) fun foo(l: int list) = sum(l) + mul(l) + length(l)
  fun sum(lst)    = foldl (fn (x, a) => x + a) 0 lst
  fun mul(lst)    = foldl (fn (x, a) => x * a) 1 lst
  fun length(lst) = foldl (fn (x, a) => 1 + a) 0 lst
A More Complicated Fold Problem Given a list of numbers, how can we generate a list of partial sums? e.g.: [1, 4, 8, 3, 7, 9] -> [0, 1, 5, 13, 16, 23, 32]
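One possible solution, folding left while carrying both the running sum and the output list built so far (partialSums is our own name, not from the slides):

  fun partialSums lst =
    let
      val (_, out) =
        foldl (fn (x, (acc, out)) => (acc + x, (acc + x) :: out)) (0, [0]) lst
    in
      rev out
    end
  (* partialSums [1, 4, 8, 3, 7, 9] = [0, 1, 5, 13, 16, 23, 32] *)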
A More Complicated Map Problem Given a list of words, can we: reverse the letters in each word, and reverse the whole list, so it all comes out backwards? [“my”, “happy”, “cat”] -> [“tac”, “yppah”, “ym”]
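One way to do it, composing map with list and string reversal (backwards is our own name):

  fun backwards words =
    rev (map (String.implode o rev o String.explode) words)
  (* backwards ["my", "happy", "cat"] = ["tac", "yppah", "ym"] *)

A single foldl would also do the job, since folding left already delivers the results in reverse order.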
map Implementation This implementation moves left-to-right across the list, mapping elements one at a time … But does it need to?
  fun map f []      = []
    | map f (x::xs) = (f x) :: (map f xs)
Implicit Parallelism In map In a purely functional setting, elements of a list being computed by map cannot see the effects of the computations on other elements If order of application of  f  to elements in list is  commutative , we can reorder or parallelize execution This is the “secret” that MapReduce exploits
MapReduce
Motivation: Large Scale Data Processing Want to process lots of data ( > 1 TB) Want to parallelize across hundreds/thousands of CPUs …  Want to make this easy
MapReduce Automatic parallelization & distribution Fault-tolerant Provides status and monitoring tools Clean abstraction for programmers
Programming Model Borrows from functional programming Users implement interface of two functions: map  (in_key, in_value) ->  (out_key, intermediate_value) list reduce (out_key, intermediate_value list) -> out_value list
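Written as SML types, to connect the interface back to the first half of the lecture (this is only a rendering of the slide's signatures; the real interface is C++):

  type ('k1, 'v1, 'k2, 'v2) mapper  = 'k1 * 'v1 -> ('k2 * 'v2) list
  type ('k2, 'v2, 'v3) reducer      = 'k2 * 'v2 list -> 'v3 list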
map Records from the data source (lines out of files, rows of a database, etc) are fed into the map function as key*value pairs: e.g., (filename, line). map() produces one or more  intermediate  values along with an output key from the input.
reduce After the map phase is over, all the intermediate values for a given output key are combined together into a list reduce() combines those intermediate values into one or more  final values  for that same output key  (in practice, usually only one final value per key)
Parallelism map() functions run in parallel, creating different intermediate values from different input data sets reduce() functions also run in parallel, each working on a different output key All values are processed  independently Bottleneck: reduce phase can’t start until map phase is completely finished.
Example: Count word occurrences
  map(String input_key, String input_value):
    // input_key: document name
    // input_value: document contents
    for each word w in input_value:
      EmitIntermediate(w, "1");

  reduce(String output_key, Iterator intermediate_values):
    // output_key: a word
    // output_values: a list of counts
    int result = 0;
    for each v in intermediate_values:
      result += ParseInt(v);
    Emit(AsString(result));
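To make the flow concrete, here is a toy single-machine sketch of the same job in SML, tying it back to the map/fold half of the lecture. It counts with ints directly instead of emitting the string "1", and groupByKey stands in for the shuffle the real library performs; all of the names here are illustrative, not part of any MapReduce API:

  (* map: (docName, contents) -> (word, count) list *)
  fun mapFn (_ : string, contents : string) =
    map (fn w => (w, 1)) (String.tokens Char.isSpace contents)

  (* "shuffle": collect all values that share a key *)
  fun groupByKey pairs =
    foldl (fn ((k, v), groups) =>
             case List.partition (fn (k', _) => k' = k) groups of
               ([(_, vs)], rest) => (k, v :: vs) :: rest
             | (_, rest)         => (k, [v]) :: rest)
          [] pairs

  (* reduce: (word, counts) -> (word, total) *)
  fun reduceFn (w, counts) = (w, foldl (op +) 0 counts)

  fun wordCount docs =
    map reduceFn (groupByKey (List.concat (map mapFn docs)))

  (* wordCount [("a.txt", "the cat sat"), ("b.txt", "the cat")]
     = [("cat", 2), ("the", 2), ("sat", 1)]   (pair order may vary) *)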
Example vs. Actual Source Code Example is written in pseudo-code Actual implementation is in C++, using a MapReduce library Bindings for Python and Java exist via interfaces True code is somewhat more involved (defines how the input key/values are divided up and accessed, etc.)
Locality Master program divvies up tasks based on location of data: tries to have map() tasks on same machine as physical file data, or at least same rack map() task inputs are divided into 64 MB blocks: same size as Google File System chunks
Fault Tolerance Master detects worker failures Re-executes completed & in-progress map() tasks Re-executes in-progress reduce() tasks Master notices particular input key/values cause crashes in map(), and skips those values on re-execution. Effect: Can work around bugs in third-party libraries!
Optimizations No reduce can start until map is complete: A single slow disk controller can rate-limit the whole process Master redundantly executes “slow-moving” map tasks; uses results of first copy to finish Why is it safe to redundantly execute map tasks? Wouldn’t this mess up the total computation?
Optimizations “Combiner” functions can run on same machine as a mapper Causes a mini-reduce phase to occur before the real reduce phase, to save bandwidth Under what conditions is it sound to use a combiner?
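For instance, in the toy word-count sketch above, the reducer itself can act as the combiner, because partial sums can safely be summed again (the bindings are our own illustration):

  val onMapperA = reduceFn ("cat", [1, 1])                         (* ("cat", 2) *)
  val onMapperB = reduceFn ("cat", [1])                            (* ("cat", 1) *)
  val total     = reduceFn ("cat", [#2 onMapperA, #2 onMapperB])   (* ("cat", 3) *)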
MapReduce Conclusions MapReduce has proven to be a useful abstraction  Greatly simplifies large-scale computations at Google  Functional programming paradigm can be applied to large-scale applications Fun to use: focus on problem, let library deal w/ messy details
