INTRODUCTION
HADOOP
ADMINISTRATION
Knowledgebee Trainings
HADOOP A DISTRIBUTED
FILE SYSTEM
1. Introduction: Hadoop’s history and advantages
2. Architecture in detail
3. Hadoop in industry
Apache top level project, open-source implementation of frameworks for reliable,
scalable, distributed computing and data storage.
It is a flexible and highly-available architecture for large scale computation and data
processing on a network of commodity hardware.
BRIEF HISTORY OF HADOOP
Designed to answer the question: “How to process big data
with reasonable cost and time?”
SEARCH ENGINES IN THE 1990s
(Timeline graphic: 1996, 1997, 1998, 2013; photo of Doug Cutting)
2005: Doug Cutting and Michael J. Cafarella developed
Hadoop to support distribution for the Nutch search engine
project.
The project was funded by Yahoo.
2006: Yahoo gave the project to Apache
Software Foundation.
GOOGLE ORIGINS
(Timeline: Google's GFS (2003), MapReduce (2004) and BigTable (2006) papers)
SOME HADOOP
MILESTONES
• 2008 - Hadoop Wins Terabyte Sort Benchmark (sorted 1 terabyte of data
in 209 seconds, compared to previous record of 297 seconds)
• 2009 - Avro and Chukwa became new members of Hadoop Framework
family
• 2010 - Hadoop's HBase, Hive and Pig subprojects completed, adding more
computational power to Hadoop framework
• 2011 - ZooKeeper Completed
• 2013 - Hadoop 1.1.2 and Hadoop 2.0.3 alpha.
- Ambari, Cassandra, Mahout have been added
WHAT IS HADOOP?
• Hadoop:
• an open-source software framework that supports data-intensive
distributed applications, licensed under the Apache v2 license.
• Goals / Requirements:
• Abstract and facilitate the storage and processing of large
and/or rapidly growing data sets
• Structured and unstructured data
• Simple programming models
• High scalability and availability
• Use commodity (cheap!) hardware with little redundancy
• Fault-tolerance
• Move computation rather than data
HADOOP FRAMEWORK
TOOLS
HADOOP ARCHITECTURE
• Distributed, with some centralization
• Main nodes of cluster are where most of the computational power and
storage of the system lies
• Main nodes run TaskTracker to accept and reply to MapReduce tasks,
and also DataNode to store needed blocks as close as possible
• Central control node runs NameNode to keep track of HDFS directories
& files, and JobTracker to dispatch compute tasks to TaskTracker
• Written in Java; also supports other languages such as Python and Ruby (via Hadoop Streaming)
HADOOP ARCHITECTURE
• Hadoop Distributed Filesystem
• Tailored to needs of MapReduce
• Targeted towards many reads of filestreams
• Writes are more costly
• High degree of data replication (3x by default)
• No need for RAID on normal nodes
• Large blocksize (64MB)
• Location awareness of DataNodes in network
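The block size and replication factor above translate directly into storage math. A back-of-the-envelope sketch in plain Java (the class and method names here are invented for illustration, not Hadoop API):

```java
// Illustrates how a file maps onto fixed-size HDFS blocks (64 MB default
// in Hadoop 1.x) and how much raw storage 3x replication consumes.
public class BlockMath {
    static final long BLOCK_SIZE = 64L * 1024 * 1024; // 64 MB default

    // Number of HDFS blocks needed for a file of the given length.
    static long numBlocks(long fileLength) {
        return (fileLength + BLOCK_SIZE - 1) / BLOCK_SIZE; // ceiling division
    }

    // Raw bytes stored across the cluster with the given replication factor.
    static long rawBytes(long fileLength, int replication) {
        return fileLength * replication;
    }

    public static void main(String[] args) {
        long file = 200L * 1024 * 1024; // a 200 MB file
        System.out.println(numBlocks(file));   // 4 blocks (3 full + 1 partial)
        System.out.println(rawBytes(file, 3)); // 600 MB of raw storage
    }
}
```

The large block size keeps per-file metadata on the NameNode small, which is why HDFS favors a few large blocks over many small ones.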
HADOOP ARCHITECTURE
NameNode:
• Stores metadata for the files, like the directory structure of a
typical FS.
• The server holding the NameNode instance is quite crucial, as
there is only one.
• Transaction log for file deletes/adds, etc. Does not use
transactions for whole blocks or file-streams, only metadata.
• Handles creation of more replica blocks when necessary after a
DataNode failure
HADOOP ARCHITECTURE
DataNode:
• Stores the actual data in HDFS
• Can run on any underlying filesystem (ext3/4, NTFS, etc)
• Notifies NameNode of what blocks it has
• NameNode directs replication across racks: one replica on the writer's rack, two on a remote rack (default factor 3)
HADOOP ARCHITECTURE
MAPREDUCE ENGINE
HADOOP ARCHITECTURE
MapReduce Engine:
• JobTracker & TaskTracker
• JobTracker splits up the input data into smaller tasks ("Map") and sends
them to the TaskTracker process on each node
• TaskTracker reports back to the JobTracker node on
job progress, sends data ("Reduce"), or requests new jobs
HADOOP ARCHITECTURE
• None of these components are necessarily limited to using HDFS
• Many other distributed file-systems with quite different
architectures work
• Many other software packages besides Hadoop's MapReduce
platform make use of HDFS
HADOOP IN THE WILD
• Hadoop is in use at most organizations that handle big data:
o Yahoo!
o Facebook
o Amazon
o Netflix
o Etc…
• Some examples of scale:
o Yahoo!’s Search Webmap runs on a 10,000-core Linux cluster
and powers Yahoo! Web search
o FB’s Hadoop cluster hosts 100+ PB of data (July, 2012) &
growing at ½ PB/day (Nov, 2012)
HADOOP IN THE WILD
Three main applications of Hadoop:
• Advertisement (mining user behavior to generate
recommendations)
• Searches (grouping related documents)
• Security (searching for uncommon patterns)
HADOOP IN THE WILD:
FACEBOOK MESSAGES
• Design requirements:
o Integrate display of email, SMS and chat
messages between pairs and groups of
users
o Strong control over whom users receive
messages from
o Suited for production use by 500
million people immediately after launch
o Stringent latency & uptime
requirements
HADOOP IN THE WILD
• System requirements
o High write throughput
o Cheap, elastic storage
o Low latency
o High consistency (within a
single data center is good
enough)
o Disk-efficient sequential and
random read performance
HADOOP IN THE WILD
• Classic alternatives
o These requirements are typically met using a large MySQL cluster and
Memcached caching tiers
o Content on HDFS could be loaded into MySQL or Memcached if
needed by web tier
• Problems with previous solutions
o MySQL has low random write throughput… BIG problem for
messaging!
o Difficult to scale MySQL clusters rapidly while maintaining
performance
o MySQL clusters have high management overhead, require more
expensive hardware
HADOOP IN THE WILD
• Facebook’s solution
o Hadoop + HBase as foundations
o Improve & adapt HDFS and HBase to scale to FB’s workload and
operational considerations
 Major concern was availability: NameNode is SPOF & failover times
are at least 20 minutes
 Proprietary “AvatarNode”: eliminates SPOF, makes HDFS safe to
deploy even with 24/7 uptime requirement
 Performance improvements for the realtime workload: shorter RPC
timeouts, so clients fail fast and try a different DataNode
DATA NODE
▪ A Block Server
▪ Stores data in local file system
▪ Stores metadata of a block (checksum)
▪ Serves data and meta-data to clients
▪ Block Report
▪ Periodically sends a report of all existing blocks to
NameNode
▪ Facilitate Pipelining of Data
▪ Forwards data to other specified DataNodes
BLOCK PLACEMENT
▪ Replication Strategy
▪ One replica on local node
▪ Second replica on a remote rack
▪ Third replica on same remote rack
▪ Additional replicas are randomly placed
▪ Clients read from nearest replica
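The default placement policy above can be sketched in a few lines of plain Java. This is an illustrative toy, not the real `BlockPlacementPolicyDefault`; the node and rack names are invented:

```java
import java.util.*;

// Sketch of the default HDFS placement policy: first replica on the
// writer's node, second on a node in a different rack, third on another
// node in that same remote rack.
public class PlacementSketch {
    static List<String> placeReplicas(String localNode, String localRack,
                                      Map<String, List<String>> nodesByRack) {
        List<String> replicas = new ArrayList<>();
        replicas.add(localNode); // 1st replica: the writer's own node
        // Pick any rack other than the writer's rack.
        String remoteRack = nodesByRack.keySet().stream()
                .filter(r -> !r.equals(localRack)).findFirst().orElseThrow();
        List<String> remoteNodes = nodesByRack.get(remoteRack);
        replicas.add(remoteNodes.get(0)); // 2nd replica: remote rack
        replicas.add(remoteNodes.get(1)); // 3rd replica: same remote rack
        return replicas;
    }

    public static void main(String[] args) {
        Map<String, List<String>> racks = new LinkedHashMap<>();
        racks.put("rack1", List.of("node1", "node2"));
        racks.put("rack2", List.of("node3", "node4"));
        System.out.println(placeReplicas("node1", "rack1", racks));
        // [node1, node3, node4]
    }
}
```

Two replicas share the remote rack so that a write crosses racks only once, while a whole-rack failure still leaves a copy elsewhere.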
DATA CORRECTNESS
▪ Use Checksums to validate data – CRC32
▪ File Creation
▪ Client computes a checksum for every 512 bytes
▪ DataNode stores the checksum
▪ File Access
▪ Client retrieves the data and checksum from DataNode
▪ If validation fails, client tries other replicas
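A minimal illustration of this scheme, using the JDK's `java.util.zip.CRC32`: one checksum per 512-byte chunk, verified on read. Real HDFS stores the checksums in a side file per block; this sketch just keeps them in an array:

```java
import java.util.Arrays;
import java.util.zip.CRC32;

// One CRC32 checksum per 512-byte chunk, recomputed and compared on read.
public class ChecksumSketch {
    static final int CHUNK = 512;

    static long[] checksums(byte[] data) {
        int n = (data.length + CHUNK - 1) / CHUNK; // chunks, rounding up
        long[] sums = new long[n];
        for (int i = 0; i < n; i++) {
            CRC32 crc = new CRC32();
            int from = i * CHUNK;
            crc.update(data, from, Math.min(CHUNK, data.length - from));
            sums[i] = crc.getValue();
        }
        return sums;
    }

    // True only if every chunk still matches its stored checksum.
    static boolean verify(byte[] data, long[] stored) {
        return Arrays.equals(checksums(data), stored);
    }

    public static void main(String[] args) {
        byte[] data = new byte[1300]; // spans 3 chunks
        long[] sums = checksums(data);
        System.out.println(verify(data, sums)); // true
        data[700] ^= 1;                         // flip one bit: corruption
        System.out.println(verify(data, sums)); // false -> try another replica
    }
}
```

Per-chunk checksums let the client pinpoint the corrupt range and fall back to another replica instead of rejecting the whole block.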
INTER PROCESS COMMUNICATION
IPC/RPC (ORG.APACHE.HADOOP.IPC)
▪ Protocols
▪ JobClient <--JobSubmissionProtocol--> JobTracker
▪ TaskTracker <--InterTrackerProtocol--> JobTracker
▪ TaskTracker <--TaskUmbilicalProtocol--> Child
▪ JobTracker implements both JobSubmissionProtocol and
InterTrackerProtocol, and works as the server in both IPCs
▪ TaskTracker implements the TaskUmbilicalProtocol; Child gets
task information and reports task status through it.
JOBCLIENT.SUBMITJOB - 1
▪ Check input and output, e.g. check whether the output directory already
exists
▪ job.getInputFormat().validateInput(job);
▪ job.getOutputFormat().checkOutputSpecs(fs, job);
▪ Get InputSplits, sort, and write output to HDFS
▪ InputSplit[] splits = job.getInputFormat().
getSplits(job, job.getNumMapTasks());
▪ writeSplitsFile(splits, out); // out is $SYSTEMDIR/$JOBID/job.split
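What `getSplits` produces can be sketched as one split per fixed-size chunk of the input, each with an offset and length. This mimics the behavior of a splittable `FileInputFormat` input; the `Split` record here is invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Toy version of input splitting: cut a file of fileLength bytes into
// splits of at most splitSize bytes each.
public class SplitSketch {
    record Split(long offset, long length) {}

    static List<Split> getSplits(long fileLength, long splitSize) {
        List<Split> splits = new ArrayList<>();
        for (long off = 0; off < fileLength; off += splitSize) {
            splits.add(new Split(off, Math.min(splitSize, fileLength - off)));
        }
        return splits;
    }

    public static void main(String[] args) {
        List<Split> s = getSplits(150, 64);
        System.out.println(s.size()); // 3 splits: 64 + 64 + 22 bytes
        System.out.println(s.get(2)); // Split[offset=128, length=22]
    }
}
```

Each split later becomes one map task, which is why the split file is written to HDFS before the job is submitted.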
JOBCLIENT.SUBMITJOB - 2
▪ The jar file and configuration file will be uploaded to HDFS system
directory
▪ job.write(out); // out is $SYSTEMDIR/$JOBID/job.xml
▪ JobStatus status = jobSubmitClient.submitJob(jobId);
▪ This is an RPC invocation, jobSubmitClient is a proxy created in the initialization
DATA PIPELINING
▪ Client retrieves a list of DataNodes on which to place replicas of a block
▪ Client writes block to the first DataNode
▪ The first DataNode forwards the data to the next DataNode in the
Pipeline
▪ When all replicas are written, the client moves on to write the next
block in file
HADOOP MAPREDUCE
▪ MapReduce programming model
▪ Framework for distributed processing of large data sets
▪ Pluggable user code runs in generic framework
▪ Common design pattern in data processing
▪ cat * | grep | sort | uniq -c | cat > file
▪ input | map | shuffle | reduce | output
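The shell analogy above can be played out in plain Java: map emits (word, 1) pairs, shuffle groups them by key, reduce sums each group. This is an in-memory sketch of the model, not Hadoop's actual execution engine:

```java
import java.util.*;

// input | map | shuffle | reduce, as a single in-memory word count.
public class MiniMapReduce {
    static Map<String, Integer> wordCount(List<String> lines) {
        // map: each line -> (word, 1) pairs
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        for (String line : lines)
            for (String w : line.split("\\s+"))
                if (!w.isEmpty()) pairs.add(Map.entry(w, 1));
        // shuffle: group the values by key
        Map<String, List<Integer>> groups = new TreeMap<>();
        for (var p : pairs)
            groups.computeIfAbsent(p.getKey(), k -> new ArrayList<>())
                  .add(p.getValue());
        // reduce: sum each group
        Map<String, Integer> out = new TreeMap<>();
        groups.forEach((k, v) ->
                out.put(k, v.stream().mapToInt(Integer::intValue).sum()));
        return out;
    }

    public static void main(String[] args) {
        System.out.println(wordCount(List.of("to be or", "not to be")));
        // {be=2, not=1, or=1, to=2}
    }
}
```

In real Hadoop the map, shuffle and reduce stages run on different nodes, with the framework handling grouping and data movement between them.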
MAPREDUCE USAGE
▪ Log processing
▪ Web search indexing
▪ Ad-hoc queries
CLOSER LOOK
▪ MapReduce Component
▪ JobClient
▪ JobTracker
▪ TaskTracker
▪ Child
▪ Job Creation/Execution Process
MAPREDUCE PROCESS
(ORG.APACHE.HADOOP.MAPRED)
▪ JobClient
▪ Submit job
▪ JobTracker
▪ Manage and schedule job, split job into tasks
▪ TaskTracker
▪ Start and monitor the task execution
▪ Child
▪ The process that actually executes the task
JOB INITIALIZATION ON JOBTRACKER - 1
▪ JobTracker.submitJob(jobID) <-- receive RPC invocation request
▪ JobInProgress job = new JobInProgress(jobId, this, this.conf)
▪ Add the job into Job Queue
▪ jobs.put(job.getProfile().getJobId(), job);
▪ jobsByPriority.add(job);
▪ jobInitQueue.add(job);
JOB INITIALIZATION ON JOBTRACKER - 2
▪ Sort by priority
▪ resortPriority();
▪ compare the JobPriority first, then compare the JobSubmissionTime
▪ Wake JobInitThread
▪ jobInitQueue.notifyall();
▪ job = jobInitQueue.remove(0);
▪ job.initTasks();
JOBINPROGRESS - 1
▪ JobInProgress(String jobid, JobTracker jobtracker, JobConf
default_conf);
▪ JobInProgress.initTasks()
▪ DataInputStream splitFile = fs.open(new Path(conf.get("mapred.job.split.file")));
// mapred.job.split.file --> $SYSTEMDIR/$JOBID/job.split
JOBINPROGRESS - 2
▪ splits = JobClient.readSplitFile(splitFile);
▪ numMapTasks = splits.length;
▪ maps[i] = new TaskInProgress(jobId, jobFile, splits[i], jobtracker, conf,
this, i);
▪ reduces[i] = new TaskInProgress(jobId, jobFile, splits[i], jobtracker, conf,
this, i);
▪ JobStatus --> JobStatus.RUNNING
JOBTRACKER TASK SCHEDULING - 1
▪ Task getNewTaskForTaskTracker(String taskTracker)
▪ Compute the maximum tasks that can be running on taskTracker
▪ int maxCurrentMapTasks = tts.getMaxMapTasks();
▪ int maxMapLoad = Math.min(maxCurrentMapTasks,
(int) Math.ceil((double) remainingMapLoad / numTaskTrackers));
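That load computation caps each TaskTracker at its configured slot count, but also at its fair share of the remaining map work. A runnable sketch (the class name is invented; the formula follows the slide):

```java
// Each tracker runs at most min(its slot count,
// ceil(remaining map tasks / number of trackers)) map tasks.
public class SchedulingMath {
    static int maxMapLoad(int maxCurrentMapTasks, int remainingMapLoad,
                          int numTaskTrackers) {
        return Math.min(maxCurrentMapTasks,
                (int) Math.ceil((double) remainingMapLoad / numTaskTrackers));
    }

    public static void main(String[] args) {
        // 10 map tasks left across 4 trackers -> fair share ceil(2.5) = 3.
        System.out.println(maxMapLoad(2, 10, 4)); // 2: capped by slot count
        System.out.println(maxMapLoad(8, 10, 4)); // 3: capped by fair share
    }
}
```

The fair-share term keeps one fast tracker from grabbing all remaining tasks while others sit idle.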
JOBTRACKER TASK SCHEDULING - 2
▪ int numMaps = tts.countMapTasks(); // running tasks number
▪ If numMaps < maxMapLoad, more tasks can be allocated: based on
priority, pick the first job from the jobsByPriority queue,
create a task, and return it to the TaskTracker
▪ Task t = job.obtainNewMapTask(tts, numTaskTrackers);
START TASKTRACKER - 1
▪ initialize()
▪ Remove original local directory
▪ RPC initialization
▪ TaskReportServer = RPC.getServer(this, bindAddress, tmpPort, max, false, this, fConf);
▪ InterTrackerProtocol jobClient = (InterTrackerProtocol)
RPC.waitForProxy(InterTrackerProtocol.class, InterTrackerProtocol.versionID,
jobTrackAddr, this.fConf);
START TASKTRACKER - 2
▪ run();
▪ offerService();
▪ TaskTracker talks to JobTracker with HeartBeat message periodically
▪ HeartbeatResponse heartbeatResponse = transmitHeartBeat();
RUN TASK ON TASKTRACKER - 1
▪ TaskTracker.localizeJob(TaskInProgress tip);
▪ launchTasksForJob(tip, new JobConf(rjob.jobFile));
▪ tip.launchTask(); // TaskTracker.TaskInProgress
▪ tip.localizeTask(task); // create folder, symbolic link
▪ runner = task.createRunner(TaskTracker.this);
▪ runner.start(); // start TaskRunner thread
RUN TASK ON TASKTRACKER - 2
▪ TaskRunner.run();
▪ Configure the child process's JVM parameters, e.g. classpath, taskid, taskReportServer's
address & port
▪ Start Child Process
▪ runChild(wrappedCommand, workDir, taskid);
CHILD.MAIN()
▪ Create RPC Proxy, and execute RPC invocation
▪ TaskUmbilicalProtocol umbilical = (TaskUmbilicalProtocol)
RPC.getProxy(TaskUmbilicalProtocol.class, TaskUmbilicalProtocol.versionID,
address, defaultConf);
▪ Task task = umbilical.getTask(taskid);
▪ task.run(); // mapTask / reduceTask.run
FINISH JOB - 1
▪ Child
▪ task.done(umbilical);
▪ RPC call: umbilical.done(taskId, shouldBePromoted)
▪ TaskTracker
▪ done(taskId, shouldPromote)
▪ TaskInProgress tip = tasks.get(taskid);
▪ tip.reportDone(shouldPromote);
▪ taskStatus.setRunState(TaskStatus.State.SUCCEEDED)
FINISH JOB - 2
▪ JobTracker
▪ TaskStatus report: status.getTaskReports();
▪ TaskInProgress tip = taskidToTIPMap.get(taskId);
▪ JobInProgress update JobStatus
▪ tip.getJob().updateTaskStatus(tip, report, myMetrics);
▪ One task of current job is finished
▪ completedTask(tip, taskStatus, metrics);
▪ If (this.status.getRunState() == JobStatus.RUNNING && allDone)
{this.status.setRunState(JobStatus.SUCCEEDED)}
RESULT
▪ Word Count
▪ hadoop jar hadoop-0.20.2-examples.jar wordcount <input dir> <output dir>
▪ Hive
▪ hive -f pagerank.hive
THANK YOU
Contact : Knowledgebee@beenovo.com
Requirements in Engineering AI- Enabled Systems: Open Problems and Safe AI Sy...Requirements in Engineering AI- Enabled Systems: Open Problems and Safe AI Sy...
Requirements in Engineering AI- Enabled Systems: Open Problems and Safe AI Sy...
Lionel Briand
 
Adobe Illustrator Crack FREE Download 2025 Latest Version
Adobe Illustrator Crack FREE Download 2025 Latest VersionAdobe Illustrator Crack FREE Download 2025 Latest Version
Adobe Illustrator Crack FREE Download 2025 Latest Version
kashifyounis067
 
Explaining GitHub Actions Failures with Large Language Models Challenges, In...
Explaining GitHub Actions Failures with Large Language Models Challenges, In...Explaining GitHub Actions Failures with Large Language Models Challenges, In...
Explaining GitHub Actions Failures with Large Language Models Challenges, In...
ssuserb14185
 
Meet the Agents: How AI Is Learning to Think, Plan, and Collaborate
Meet the Agents: How AI Is Learning to Think, Plan, and CollaborateMeet the Agents: How AI Is Learning to Think, Plan, and Collaborate
Meet the Agents: How AI Is Learning to Think, Plan, and Collaborate
Maxim Salnikov
 
Landscape of Requirements Engineering for/by AI through Literature Review
Landscape of Requirements Engineering for/by AI through Literature ReviewLandscape of Requirements Engineering for/by AI through Literature Review
Landscape of Requirements Engineering for/by AI through Literature Review
Hironori Washizaki
 
How can one start with crypto wallet development.pptx
How can one start with crypto wallet development.pptxHow can one start with crypto wallet development.pptx
How can one start with crypto wallet development.pptx
laravinson24
 
Automation Techniques in RPA - UiPath Certificate
Automation Techniques in RPA - UiPath CertificateAutomation Techniques in RPA - UiPath Certificate
Automation Techniques in RPA - UiPath Certificate
VICTOR MAESTRE RAMIREZ
 
What Do Contribution Guidelines Say About Software Testing? (MSR 2025)
What Do Contribution Guidelines Say About Software Testing? (MSR 2025)What Do Contribution Guidelines Say About Software Testing? (MSR 2025)
What Do Contribution Guidelines Say About Software Testing? (MSR 2025)
Andre Hora
 
Download Wondershare Filmora Crack [2025] With Latest
Download Wondershare Filmora Crack [2025] With LatestDownload Wondershare Filmora Crack [2025] With Latest
Download Wondershare Filmora Crack [2025] With Latest
tahirabibi60507
 
Exploring Wayland: A Modern Display Server for the Future
Exploring Wayland: A Modern Display Server for the FutureExploring Wayland: A Modern Display Server for the Future
Exploring Wayland: A Modern Display Server for the Future
ICS
 
Exploring Code Comprehension in Scientific Programming: Preliminary Insight...
Exploring Code Comprehension  in Scientific Programming:  Preliminary Insight...Exploring Code Comprehension  in Scientific Programming:  Preliminary Insight...
Exploring Code Comprehension in Scientific Programming: Preliminary Insight...
University of Hawai‘i at Mānoa
 
How to Batch Export Lotus Notes NSF Emails to Outlook PST Easily?
How to Batch Export Lotus Notes NSF Emails to Outlook PST Easily?How to Batch Export Lotus Notes NSF Emails to Outlook PST Easily?
How to Batch Export Lotus Notes NSF Emails to Outlook PST Easily?
steaveroggers
 
Download YouTube By Click 2025 Free Full Activated
Download YouTube By Click 2025 Free Full ActivatedDownload YouTube By Click 2025 Free Full Activated
Download YouTube By Click 2025 Free Full Activated
saniamalik72555
 
Proactive Vulnerability Detection in Source Code Using Graph Neural Networks:...
Proactive Vulnerability Detection in Source Code Using Graph Neural Networks:...Proactive Vulnerability Detection in Source Code Using Graph Neural Networks:...
Proactive Vulnerability Detection in Source Code Using Graph Neural Networks:...
Ranjan Baisak
 
Societal challenges of AI: biases, multilinguism and sustainability
Societal challenges of AI: biases, multilinguism and sustainabilitySocietal challenges of AI: biases, multilinguism and sustainability
Societal challenges of AI: biases, multilinguism and sustainability
Jordi Cabot
 
LEARN SEO AND INCREASE YOUR KNOWLDGE IN SOFTWARE INDUSTRY
LEARN SEO AND INCREASE YOUR KNOWLDGE IN SOFTWARE INDUSTRYLEARN SEO AND INCREASE YOUR KNOWLDGE IN SOFTWARE INDUSTRY
LEARN SEO AND INCREASE YOUR KNOWLDGE IN SOFTWARE INDUSTRY
NidaFarooq10
 
Why Orangescrum Is a Game Changer for Construction Companies in 2025
Why Orangescrum Is a Game Changer for Construction Companies in 2025Why Orangescrum Is a Game Changer for Construction Companies in 2025
Why Orangescrum Is a Game Changer for Construction Companies in 2025
Orangescrum
 
PDF Reader Pro Crack Latest Version FREE Download 2025
PDF Reader Pro Crack Latest Version FREE Download 2025PDF Reader Pro Crack Latest Version FREE Download 2025
PDF Reader Pro Crack Latest Version FREE Download 2025
mu394968
 
Not So Common Memory Leaks in Java Webinar
Not So Common Memory Leaks in Java WebinarNot So Common Memory Leaks in Java Webinar
Not So Common Memory Leaks in Java Webinar
Tier1 app
 

Introduction to Hadoop Administration

  • 2. HADOOP A DISTRIBUTED FILE SYSTEM 1. Introduction: Hadoop’s history and advantages 2. Architecture in detail 3. Hadoop in industry
  • 3. Apache top level project, open-source implementation of frameworks for reliable, scalable, distributed computing and data storage. It is a flexible and highly-available architecture for large scale computation and data processing on a network of commodity hardware.
  • 4. BRIEF HISTORY OF HADOOP Designed to answer the question: “How to process big data with reasonable cost and time?”
  • 5. SEARCH ENGINES IN THE 1990s
  • 7. Doug Cutting 2005: Doug Cutting and Michael J. Cafarella developed Hadoop to support distribution for the Nutch search engine project. The project was funded by Yahoo. 2006: Yahoo gave the project to Apache Software Foundation.
  • 9. SOME HADOOP MILESTONES • 2008 - Hadoop Wins Terabyte Sort Benchmark (sorted 1 terabyte of data in 209 seconds, compared to previous record of 297 seconds) • 2009 - Avro and Chukwa became new members of Hadoop Framework family • 2010 - Hadoop's Hbase, Hive and Pig subprojects completed, adding more computational power to Hadoop framework • 2011 - ZooKeeper Completed • 2013 - Hadoop 1.1.2 and Hadoop 2.0.3 alpha. - Ambari, Cassandra, Mahout have been added
  • 10. WHAT IS HADOOP? • Hadoop: • an open-source software framework that supports data- intensive distributed applications, licensed under the Apache v2 license. • Goals / Requirements: • Abstract and facilitate the storage and processing of large and/or rapidly growing data sets • Structured and non-structured data • Simple programming models • High scalability and availability • Use commodity (cheap!) hardware with little redundancy • Fault-tolerance • Move computation rather than data
  • 12. HADOOP ARCHITECTURE • Distributed, with some centralization • Main nodes of the cluster are where most of the computational power and storage of the system lies • Main nodes run TaskTracker to accept and reply to MapReduce tasks, and also DataNode to store needed blocks as closely as possible • Central control node runs NameNode to keep track of HDFS directories & files, and JobTracker to dispatch compute tasks to TaskTrackers • Written in Java, also supports Python and Ruby
  • 14. HADOOP ARCHITECTURE • Hadoop Distributed Filesystem • Tailored to needs of MapReduce • Targeted towards many reads of filestreams • Writes are more costly • High degree of data replication (3x by default) • No need for RAID on normal nodes • Large blocksize (64MB) • Location awareness of DataNodes in network
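To make the two defaults above concrete (64 MB blocks, 3x replication), here is a minimal standalone sketch — not Hadoop's actual API, just the arithmetic — of how a file's size translates into block count and raw cluster storage:

```java
// Sketch only (not Hadoop code): block count and raw storage implied by
// HDFS's default 64 MB block size and 3x replication.
public class HdfsBlockMath {
    static final long BLOCK_SIZE = 64L * 1024 * 1024; // 64 MB default block size
    static final int REPLICATION = 3;                 // default replication factor

    // Number of HDFS blocks needed for fileBytes (last block may be partial).
    static long numBlocks(long fileBytes) {
        return (fileBytes + BLOCK_SIZE - 1) / BLOCK_SIZE;
    }

    // Raw bytes consumed across the cluster once every block is replicated.
    static long rawBytes(long fileBytes) {
        return fileBytes * REPLICATION;
    }

    public static void main(String[] args) {
        long oneGb = 1024L * 1024 * 1024;
        System.out.println(numBlocks(oneGb)); // 16 blocks
        System.out.println(rawBytes(oneGb));  // 3 GB of raw storage
    }
}
```

The large block size keeps per-block metadata on the NameNode manageable and favors the long sequential reads MapReduce performs.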
  • 15. HADOOP ARCHITECTURE NameNode: • Stores metadata for the files, like the directory structure of a typical FS. • The server holding the NameNode instance is quite crucial, as there is only one. • Transaction log for file deletes/adds, etc. Does not use transactions for whole blocks or file-streams, only metadata. • Handles creation of more replica blocks when necessary after a DataNode failure
  • 16. HADOOP ARCHITECTURE DataNode: • Stores the actual data in HDFS • Can run on any underlying filesystem (ext3/4, NTFS, etc) • Notifies NameNode of what blocks it has • NameNode replicates blocks 2x in local rack, 1x elsewhere
  • 19. HADOOP ARCHITECTURE MapReduce Engine: • JobTracker & TaskTracker • JobTracker splits up data into smaller tasks (“Map”) and sends it to the TaskTracker process in each node • TaskTracker reports back to the JobTracker node and reports on job progress, sends data (“Reduce”) or requests new jobs
  • 20. HADOOP ARCHITECTURE • None of these components are necessarily limited to using HDFS • Many other distributed file-systems with quite different architectures work • Many other software packages besides Hadoop's MapReduce platform make use of HDFS
  • 21. HADOOP IN THE WILD • Hadoop is in use at most organizations that handle big data: o Yahoo! o Facebook o Amazon o Netflix o Etc… • Some examples of scale: o Yahoo!’s Search Webmap runs on 10,000 core Linux cluster and powers Yahoo! Web search o FB’s Hadoop cluster hosts 100+ PB of data (July, 2012) & growing at ½ PB/day (Nov, 2012)
  • 22. HADOOP IN THE WILD • Advertisement (Mining user behavior to generate recommendations) • Searches (group related documents) • Security (search for uncommon patterns) Three main applications of Hadoop:
  • 23. HADOOP IN THE WILD: FACEBOOK MESSAGES • Design requirements: o Integrate display of email, SMS and chat messages between pairs and groups of users o Strong control over who users receive messages from o Suited for production use by 500 million people immediately after launch o Stringent latency & uptime requirements
  • 24. HADOOP IN THE WILD • System requirements o High write throughput o Cheap, elastic storage o Low latency o High consistency (within a single data center good enough) o Disk-efficient sequential and random read performance
  • 25. HADOOP IN THE WILD • Classic alternatives o These requirements were typically met using a large MySQL cluster & caching tiers using Memcached o Content on HDFS could be loaded into MySQL or Memcached if needed by the web tier • Problems with previous solutions o MySQL has low random write throughput… BIG problem for messaging! o Difficult to scale MySQL clusters rapidly while maintaining performance o MySQL clusters have high management overhead, require more expensive hardware
  • 26. HADOOP IN THE WILD • Facebook’s solution o Hadoop + HBase as foundations o Improve & adapt HDFS and HBase to scale to FB’s workload and operational considerations  Major concern was availability: NameNode is SPOF & failover times are at least 20 minutes  Proprietary “AvatarNode”: eliminates SPOF, makes HDFS safe to deploy even with 24/7 uptime requirement  Performance improvements for the realtime workload: shorter RPC timeouts, so clients fail fast and try a different DataNode
  • 27. DATA NODE ▪ A Block Server ▪ Stores data in the local file system ▪ Stores meta-data of a block - checksum ▪ Serves data and meta-data to clients ▪ Block Report ▪ Periodically sends a report of all existing blocks to the NameNode ▪ Facilitates Pipelining of Data ▪ Forwards data to other specified DataNodes
  • 28. BLOCK PLACEMENT ▪ Replication Strategy ▪ One replica on local node ▪ Second replica on a remote rack ▪ Third replica on same remote rack ▪ Additional replicas are randomly placed ▪ Clients read from nearest replica
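The placement order above can be sketched as plain Java — this is an illustration, not Hadoop's BlockPlacementPolicy API, and the node names are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch only: the default replica placement order described above —
// writer's local node, a node on a remote rack, a second node on that
// same remote rack, then randomly chosen nodes for any extra replicas.
public class ReplicaPlacement {
    static List<String> place(String writerNode, List<String> remoteRackNodes,
                              List<String> otherNodes, int replication) {
        List<String> targets = new ArrayList<>();
        targets.add(writerNode);                          // 1st replica: local node
        if (replication > 1)
            targets.add(remoteRackNodes.get(0));          // 2nd: a remote rack
        if (replication > 2)
            targets.add(remoteRackNodes.get(1));          // 3rd: same remote rack
        for (int i = 3; i < replication; i++)             // extras: spread elsewhere
            targets.add(otherNodes.get((i - 3) % otherNodes.size()));
        return targets;
    }

    public static void main(String[] args) {
        // A 3x-replicated block written from dn1: dn1, then two nodes on rack B.
        System.out.println(place("dn1", List.of("dn5", "dn6"), List.of("dn9"), 3));
    }
}
```

Writing the second and third replicas to the same remote rack keeps the write pipeline on one inter-rack hop while still surviving the loss of an entire rack.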
  • 29. DATA CORRECTNESS ▪ Use checksums to validate data – CRC32 ▪ File Creation ▪ Client computes a checksum per 512 bytes ▪ DataNode stores the checksum ▪ File Access ▪ Client retrieves the data and checksum from the DataNode ▪ If validation fails, the client tries other replicas
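The per-chunk CRC32 scheme above can be demonstrated with the JDK's own `java.util.zip.CRC32` — a minimal sketch, not HDFS's actual checksum code:

```java
import java.util.zip.CRC32;

// Sketch of the checksum scheme above: compute a CRC32 per 512-byte chunk
// on write; on read, recompute and compare, falling back to another replica
// on mismatch (here we only report the failure).
public class ChunkChecksums {
    static final int CHUNK = 512; // checksum granularity, as on the slide

    static long[] checksums(byte[] data) {
        int n = (data.length + CHUNK - 1) / CHUNK;
        long[] sums = new long[n];
        CRC32 crc = new CRC32();
        for (int i = 0; i < n; i++) {
            crc.reset();
            crc.update(data, i * CHUNK, Math.min(CHUNK, data.length - i * CHUNK));
            sums[i] = crc.getValue();
        }
        return sums;
    }

    // True if every chunk of data still matches its stored checksum.
    static boolean verify(byte[] data, long[] stored) {
        long[] now = checksums(data);
        if (now.length != stored.length) return false;
        for (int i = 0; i < now.length; i++)
            if (now[i] != stored[i]) return false;
        return true;
    }

    public static void main(String[] args) {
        byte[] block = new byte[1300];           // spans 3 chunks
        long[] sums = checksums(block);
        System.out.println(verify(block, sums)); // true
        block[700] ^= 1;                         // simulate bit rot in chunk 1
        System.out.println(verify(block, sums)); // false — client would try another replica
    }
}
```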
  • 30. INTER PROCESS COMMUNICATION IPC/RPC (ORG.APACHE.HADOOP.IPC) ▪ Protocols ▪ JobClient <-------------> JobTracker ▪ TaskTracker <------------> JobTracker ▪ TaskTracker <-------------> Child ▪ JobTracker implements both protocols and works as the server in both IPCs ▪ TaskTracker implements the TaskUmbilicalProtocol; the Child gets task information and reports task status through it. JobSubmissionProtocol InterTrackerProtocol TaskUmbilicalProtocol
  • 31. JOBCLIENT.SUBMITJOB - 1 ▪ Check input and output, e.g. check if the output directory already exists ▪ job.getInputFormat().validateInput(job); ▪ job.getOutputFormat().checkOutputSpecs(fs, job); ▪ Get InputSplits, sort, and write output to HDFS ▪ InputSplit[] splits = job.getInputFormat(). getSplits(job, job.getNumMapTasks()); ▪ writeSplitsFile(splits, out); // out is $SYSTEMDIR/$JOBID/job.split
  • 32. JOBCLIENT.SUBMITJOB - 2 ▪ The jar file and configuration file will be uploaded to HDFS system directory ▪ job.write(out); // out is $SYSTEMDIR/$JOBID/job.xml ▪ JobStatus status = jobSubmitClient.submitJob(jobId); ▪ This is an RPC invocation, jobSubmitClient is a proxy created in the initialization
  • 33. DATA PIPELINING ▪ Client retrieves a list of DataNodes on which to place replicas of a block ▪ Client writes block to the first DataNode ▪ The first DataNode forwards the data to the next DataNode in the Pipeline ▪ When all replicas are written, the client moves on to write the next block in file
  • 34. HADOOP MAPREDUCE ▪ MapReduce programming model ▪ Framework for distributed processing of large data sets ▪ Pluggable user code runs in generic framework ▪ Common design pattern in data processing ▪ cat * | grep | sort | uniq -c | cat > file ▪ input | map | shuffle | reduce | output
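The `input | map | shuffle | reduce | output` pattern above, collapsed into a single JVM, looks like this — a sketch of the design pattern only, not the distributed Hadoop API:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch of the input | map | shuffle | reduce pattern from the slide,
// applied to word count in one process: map emits (word, 1), the grouping
// step plays the role of the shuffle, and counting is the reduce.
public class WordCountSketch {
    static Map<String, Long> wordCount(List<String> lines) {
        return lines.stream()
                .flatMap(line -> Arrays.stream(line.split("\\s+")))   // map
                .filter(w -> !w.isEmpty())
                .collect(Collectors.groupingBy(                        // shuffle
                        w -> w,
                        Collectors.counting()));                       // reduce
    }

    public static void main(String[] args) {
        System.out.println(wordCount(List.of("to be or", "not to be")));
    }
}
```

In real Hadoop the same three phases run across many machines, with the shuffle moving map output over the network to the reducers.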
  • 35. MAPREDUCE USAGE ▪ Log processing ▪ Web search indexing ▪ Ad-hoc queries
  • 36. CLOSER LOOK ▪ MapReduce Component ▪ JobClient ▪ JobTracker ▪ TaskTracker ▪ Child ▪ Job Creation/Execution Process
  • 37. MAPREDUCE PROCESS (ORG.APACHE.HADOOP.MAPRED) ▪ JobClient ▪ Submits the job ▪ JobTracker ▪ Manages and schedules the job, splits the job into tasks ▪ TaskTracker ▪ Starts and monitors the task execution ▪ Child ▪ The process that actually executes the task
  • 38. JOB INITIALIZATION ON JOBTRACKER - 1 ▪ JobTracker.submitJob(jobID) <-- receive RPC invocation request ▪ JobInProgress job = new JobInProgress(jobId, this, this.conf) ▪ Add the job into Job Queue ▪ jobs.put(job.getProfile().getJobId(), job); ▪ jobsByPriority.add(job); ▪ jobInitQueue.add(job);
  • 39. JOB INITIALIZATION ON JOBTRACKER - 2 ▪ Sort by priority ▪ resortPriority(); ▪ compare JobPriority first, then compare JobSubmissionTime ▪ Wake JobInitThread ▪ jobInitQueue.notifyAll(); ▪ job = jobInitQueue.remove(0); ▪ job.initTasks();
  • 40. JOBINPROGRESS - 1 ▪ JobInProgress(String jobid, JobTracker jobtracker, JobConf default_conf); ▪ JobInProgress.initTasks() ▪ DataInputStream splitFile = fs.open(new Path(conf.get("mapred.job.split.file"))); // mapred.job.split.file --> $SYSTEMDIR/$JOBID/job.split
  • 41. JOBINPROGRESS - 2 ▪ splits = JobClient.readSplitFile(splitFile); ▪ numMapTasks = splits.length; ▪ maps[i] = new TaskInProgress(jobId, jobFile, splits[i], jobtracker, conf, this, i); ▪ reduces[i] = new TaskInProgress(jobId, jobFile, splits[i], jobtracker, conf, this, i); ▪ JobStatus --> JobStatus.RUNNING
  • 42. JOBTRACKER TASK SCHEDULING - 1 ▪ Task getNewTaskForTaskTracker(String taskTracker) ▪ Compute the maximum tasks that can be running on taskTracker ▪ int maxCurrentMapTasks = tts.getMaxMapTasks(); ▪ int maxMapLoad = Math.min(maxCurrentMapTasks, (int) Math.ceil((double) remainingMapLoad / numTaskTrackers));
  • 48. JOBTRACKER TASK SCHEDULING - 2 ▪ int numMaps = tts.countMapTasks(); // running tasks number ▪ If numMaps < maxMapLoad, then more tasks can be allocated, then based on priority, pick the first job from the jobsByPriority Queue, create a task, and return to TaskTracker ▪ Task t = job.obtainNewMapTask(tts, numTaskTrackers);
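The load computation above can be isolated into a runnable sketch — the method and its arguments are illustrative, not the JobTracker's real signature:

```java
// Sketch of the scheduling math above: the number of map tasks handed to a
// TaskTracker is capped by both its slot count and its fair share of the
// remaining map load across all trackers.
public class MapLoadSketch {
    static int maxMapLoad(int maxCurrentMapTasks, int remainingMapLoad,
                          int numTaskTrackers) {
        return Math.min(maxCurrentMapTasks,
                (int) Math.ceil((double) remainingMapLoad / numTaskTrackers));
    }

    public static void main(String[] args) {
        // 4 slots, 10 maps left, 3 trackers: ceil(10/3) = 4, min(4, 4) = 4
        System.out.println(maxMapLoad(4, 10, 3));
    }
}
```

The ceiling term spreads the remaining maps evenly over the trackers, so no single TaskTracker is assigned more than its share even when it has idle slots.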
  • 49. START TASKTRACKER - 1 ▪ initialize() ▪ Remove original local directory ▪ RPC initialization ▪ TaskReportServer = RPC.getServer(this, bindAddress, tmpPort, max, false, this, fConf); ▪ InterTrackerProtocol jobClient = (InterTrackerProtocol) RPC.waitForProxy(InterTrackerProtocol.class, InterTrackerProtocol.versionID, jobTrackAddr, this.fConf);
  • 50. START TASKTRACKER - 2 ▪ run(); ▪ offerService(); ▪ TaskTracker talks to the JobTracker with a HeartBeat message periodically ▪ HeartbeatResponse heartbeatResponse = transmitHeartBeat();
  • 51. RUN TASK ON TASKTRACKER - 1 ▪ TaskTracker.localizeJob(TaskInProgress tip); ▪ launchTasksForJob(tip, new JobConf(rjob.jobFile)); ▪ tip.launchTask(); // TaskTracker.TaskInProgress ▪ tip.localizeTask(task); // create folder, symbol link ▪ runner = task.createRunner(TaskTracker.this); ▪ runner.start(); // start TaskRunner thread
  • 52. RUN TASK ON TASKTRACKER - 2 ▪ TaskRunner.run(); ▪ Configure child process’ jvm parameters, i.e. classpath, taskid, taskReportServer’s address & port ▪ Start Child Process ▪ runChild(wrappedCommand, workDir, taskid);
  • 53. CHILD.MAIN() ▪ Create RPC Proxy, and execute RPC invocation ▪ TaskUmbilicalProtocol umbilical = (TaskUmbilicalProtocol) RPC.getProxy(TaskUmbilicalProtocol.class, TaskUmbilicalProtocol.versionID, address, defaultConf); ▪ Task task = umbilical.getTask(taskid); ▪ task.run(); // mapTask / reduceTask.run
  • 54. FINISH JOB - 1 ▪ Child ▪ task.done(umbilical); ▪ RPC call: umbilical.done(taskId, shouldBePromoted) ▪ TaskTracker ▪ done(taskId, shouldPromote) ▪ TaskInProgress tip = tasks.get(taskid); ▪ tip.reportDone(shouldPromote); ▪ taskStatus.setRunState(TaskStatus.State.SUCCEEDED)
  • 55. FINISH JOB - 2 ▪ JobTracker ▪ TaskStatus report: status.getTaskReports(); ▪ TaskInProgress tip = taskidToTIPMap.get(taskId); ▪ JobInProgress update JobStatus ▪ tip.getJob().updateTaskStatus(tip, report, myMetrics); ▪ One task of current job is finished ▪ completedTask(tip, taskStatus, metrics); ▪ If (this.status.getRunState() == JobStatus.RUNNING && allDone) {this.status.setRunState(JobStatus.SUCCEEDED)}
  • 56. RESULT ▪ Word Count ▪ hadoop jar hadoop-0.20.2-examples.jar wordcount <input dir> <output dir> ▪ Hive ▪ hive -f pagerank.hive