24 Hadoop Interview Questions & Answers for MapReduce Developers
FROMDEV
http://www.fromdev.com/2010/12/interview-questions-hadoop-mapreduce.html
A good understanding of Hadoop architecture is required to leverage the power of Hadoop. Below are a few important practical questions that can be asked of a senior, experienced Hadoop developer in an interview. I learned the answers to them during my CCHD (Cloudera Certified Hadoop Developer) certification. I hope you will find them useful. This list primarily includes questions related to Hadoop architecture, MapReduce, the Hadoop API, and the Hadoop Distributed File System (HDFS).
Hadoop is the most popular platform for big data analysis. The Hadoop ecosystem is huge and involves many supporting frameworks and tools to run and manage it effectively. This article focuses on core Hadoop concepts and its techniques for handling enormous data.
Below is a list of Hadoop interview questions and answers that may prove useful for beginners and experts alike. These are common questions that you may face at a big data job interview or on a Hadoop certification exam (like CCHD).
What is a JobTracker in Hadoop? How many instances of JobTracker run on a Hadoop cluster?
The JobTracker is the daemon service for submitting and tracking MapReduce jobs in Hadoop. Only one JobTracker process runs on any Hadoop cluster, in its own JVM process. In a typical production cluster it runs on a separate machine. Each slave node is configured with the JobTracker node's location. The JobTracker is a single point of failure for the Hadoop MapReduce service: if it goes down, all running jobs are halted. The JobTracker performs the following actions (from the Hadoop Wiki):
Client applications submit jobs to the JobTracker (a minimal submission sketch follows this list).
The JobTracker talks to the NameNode to determine the location of the data.
The JobTracker locates TaskTracker nodes with available slots at or near the data.
The JobTracker submits the work to the chosen TaskTracker nodes.
The TaskTracker nodes are monitored. If they do not submit heartbeat signals often enough, they are deemed to have failed and the work is scheduled on a different TaskTracker.
A TaskTracker will notify the JobTracker when a task fails. The JobTracker decides what to do then: it may resubmit the job elsewhere, it may mark that specific record as something to avoid, and it may even blacklist the TaskTracker as unreliable.
When the work is completed, the JobTracker updates its status.
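As a quick illustration of the first step, here is a minimal sketch of how a client hands a job to the JobTracker using the classic org.apache.hadoop.mapred API. The class name, job name, and the commented-out mapper/reducer placeholders are illustrative, not from the original post:

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class SubmitExample {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(SubmitExample.class);
            conf.setJobName("word-count");                  // illustrative job name

            conf.setOutputKeyClass(Text.class);             // key type emitted by the job
            conf.setOutputValueClass(IntWritable.class);    // value type emitted by the job

            // Your Mapper and Reducer classes would be set here, e.g.:
            // conf.setMapperClass(MyMapper.class);
            // conf.setReducerClass(MyReducer.class);

            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));

            // Hands the job to the JobTracker and blocks until it completes,
            // printing progress (map %, reduce %) as it runs.
            JobClient.runJob(conf);
        }
    }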
What is a TaskTracker in Hadoop? How many instances of TaskTracker run on a Hadoop cluster?
A TaskTracker is a slave node daemon in the cluster that accepts tasks (Map, Reduce, and Shuffle operations) from a JobTracker. Only one TaskTracker process runs on any Hadoop slave node, in its own JVM process. Every TaskTracker is configured with a set of slots; these indicate the number of tasks it can accept. The TaskTracker starts a separate JVM process (called a Task Instance) to do the actual work; this is to ensure that a process failure does not take down the TaskTracker itself. The TaskTracker monitors these task instances, capturing the output and exit codes. When the task instances finish, successfully or not, the TaskTracker notifies the JobTracker. The TaskTrackers also send out heartbeat messages to the JobTracker, usually every few minutes, to reassure the JobTracker that they are still alive. These messages also inform the JobTracker of the number of available slots, so the JobTracker can stay up to date with where in the cluster work can be delegated.
What is the configuration of a typical slave node on a Hadoop cluster? How many JVMs run on a slave node?
A single instance of the TaskTracker daemon runs on each slave node, as a separate JVM process.
A single instance of the DataNode daemon runs on each slave node, as a separate JVM process.
One or multiple Task Instances run on each slave node, each as a separate JVM process. The number of task instances can be controlled by configuration, as shown in the fragment below. Typically a high-end machine is configured to run more task instances.
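For example, on an MRv1 cluster the slot counts are set per TaskTracker in mapred-site.xml. A representative fragment might look like this; the values shown are illustrative, not recommendations:

    <!-- mapred-site.xml on each slave node; values are illustrative -->
    <configuration>
      <!-- Map slots: how many map task instances this TaskTracker may run at once -->
      <property>
        <name>mapred.tasktracker.map.tasks.maximum</name>
        <value>4</value>
      </property>
      <!-- Reduce slots: how many reduce task instances may run at once -->
      <property>
        <name>mapred.tasktracker.reduce.tasks.maximum</name>
        <value>2</value>
      </property>
      <!-- Heap size for each child JVM launched per task instance -->
      <property>
        <name>mapred.child.java.opts</name>
        <value>-Xmx512m</value>
      </property>
    </configuration>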
Does the MapReduce programming model provide a way for reducers to communicate with each other? In a MapReduce job, can a reducer communicate with another reducer?
No, the MapReduce programming model does not allow reducers to communicate with each other. Reducers run in isolation.
What is the Hadoop MapReduce API contract for a key and value class?
The key must implement the org.apache.hadoop.io.WritableComparable interface.
The value must implement the org.apache.hadoop.io.Writable interface.
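A minimal sketch of a custom key class satisfying this contract; the class name and fields are invented for illustration:

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import org.apache.hadoop.io.WritableComparable;

    // A custom key must serialize itself (Writable) and define a sort
    // order (Comparable), since keys are sorted during the shuffle.
    public class YearTemperaturePair implements WritableComparable<YearTemperaturePair> {
        private int year;
        private int temperature;

        public void write(DataOutput out) throws IOException {
            out.writeInt(year);
            out.writeInt(temperature);
        }

        public void readFields(DataInput in) throws IOException {
            year = in.readInt();
            temperature = in.readInt();
        }

        public int compareTo(YearTemperaturePair other) {
            int cmp = Integer.compare(year, other.year);
            return cmp != 0 ? cmp : Integer.compare(temperature, other.temperature);
        }

        @Override
        public int hashCode() {
            // The default HashPartitioner uses hashCode() to route keys to reducers
            return year * 163 + temperature;
        }
    }

A value class only needs write() and readFields(), since values are never sorted by the framework.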
If reducers do not start before all mappers finish, then why does the progress on a MapReduce job show something like Map (50%) Reduce (10%)? Why is the reducers' progress percentage displayed when the mappers are not finished yet?
Reducers start copying intermediate key-value pairs from the mappers as soon as they are available. The progress calculation also takes into account the data transfer done by the reduce process, so reduce progress starts showing up as soon as any intermediate key-value pair from a mapper is available to be transferred to a reducer. Though the reducer progress is updated, the programmer-defined reduce method is called only after all the mappers have finished.
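To make the distinction concrete, here is a minimal reducer sketch using the org.apache.hadoop.mapreduce API (SumReducer is an illustrative name): the framework calls reduce() once per key only after all of that key's values have been copied and merged, which requires every mapper to have finished.

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    // reduce() sees ALL values for a key at once, which is only possible
    // after every mapper has finished; the earlier copy phase is what the
    // pre-100%-map "reduce progress" actually measures.
    public class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }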
What kind of applications is HDFS designed for?
HDFS is designed to support very large files. Applications that are compatible with HDFS are those that deal with large data sets. These applications write their data only once but read it one or more times, and require these reads to be satisfied at streaming speeds. HDFS supports write-once-read-many semantics on files.
What is the HDFS block size? How is it different from the traditional file system block size?
In HDFS, data is split into blocks and distributed across multiple nodes in the cluster. Each block is typically 64 MB or 128 MB in size. Each block is replicated multiple times; the default is to replicate each block three times, with replicas stored on different nodes. HDFS utilizes the local file system to store each HDFS block as a separate file. The HDFS block size cannot be directly compared with the traditional file system block size: a file smaller than an HDFS block does not occupy a full block's worth of underlying storage.
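As an illustration, the block size and replication factor of a file can be inspected through the HDFS Java API; the file path below is hypothetical:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockInfo {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();   // reads core-site.xml / hdfs-site.xml
            FileSystem fs = FileSystem.get(conf);

            // Hypothetical file path, for illustration only
            FileStatus status = fs.getFileStatus(new Path("/user/hadoop/input/data.txt"));

            // Block size and replication are per-file attributes in HDFS
            System.out.println("Block size:  " + status.getBlockSize() + " bytes");
            System.out.println("Replication: " + status.getReplication());
        }
    }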
Can you think of a question that is not part of this post? Please don't forget to share it with me in the comments section and I will try to include it in the list.
Posted by Sachin FromDev