Hadoop Deployment and Configuration - Single machine and a cluster
Typical Hardware 
•HP Compaq 8100 Elite CMT PC. Specifications: Processor: Intel Core i7-860 · RAM: 8GB PC3-10600 memory (2x4GB) · HDD: 1TB SATA 3.5" · Network: Intel 82578 GbE (integrated) 
•Network switch 
–Netgear GS2608. Specifications: N-port 10/100/1000 Mbps Gigabit switch 
•Gateway node 
–Dell Optiplex GX280. Specifications: Processor: Intel Pentium 4 2.80 GHz · RAM: 1GB
OS 
•Install the Ubuntu Server (Maverick Meerkat) operating system that is available for download from the Ubuntu releases site. 
•Some important points to remember while installing the OS 
–Ensure that the SSH server is selected to be installed 
–Enter the proxy details needed for systems to connect to the internet from within your network 
–Create a user on each installation 
•Preferably with the same password on each node
Prerequisites 
•Supported Platforms 
–GNU/Linux is supported as a development and production platform. Hadoop has been demonstrated on GNU/Linux clusters with 2000 nodes. 
–Win32 is supported as a development platform. Distributed operation has not been well tested on Win32, so it is not supported as a production platform. 
•Required Software 
–Required software for Linux and Windows includes: 
•Java 1.6.x, preferably from Sun, must be installed. 
•ssh must be installed and sshd must be running in order to use the Hadoop scripts that manage remote Hadoop daemons. 
•Additional requirements for Windows include: 
–Cygwin - required for shell support in addition to the required software above. 
•Installing Software 
–If your cluster doesn't have the requisite software, you will need to install it. 
–For example, on Ubuntu Linux: 
•$ sudo apt-get install ssh 
$ sudo apt-get install rsync 
•On Windows, if you did not install the required software when you installed Cygwin, start the Cygwin installer and select the packages: 
–openssh (in the Net category)
Install Sun’s java JDK 
•Install Sun’s java JDK on each node in the cluster 
•Add the Canonical partner repository to your list of apt repositories. 
•You can do this by adding the line below to your /etc/apt/sources.list file: deb http://archive.canonical.com/ maverick partner 
•Update the source list 
–sudo apt-get update 
•Install sun-java6-jdk 
–sudo apt-get install sun-java6-jdk 
•Select Sun's java as the default on the machine 
–sudo update-java-alternatives -s java-6-sun 
•Verify the installation by running the command 
–java -version
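A consolidated sketch of this slide's commands, assuming Ubuntu Maverick and the sun-java6-jdk package shown above (appending to sources.list via tee instead of an editor): 
# append the Canonical partner repository line to /etc/apt/sources.list 
$ echo "deb http://archive.canonical.com/ maverick partner" | sudo tee -a /etc/apt/sources.list 
$ sudo apt-get update 
$ sudo apt-get install sun-java6-jdk 
$ sudo update-java-alternatives -s java-6-sun 
$ java -version   # should report a 1.6.x Sun JVM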
Adding a dedicated Hadoop system user 
•Use a dedicated Hadoop user account for running Hadoop. 
•While that’s not required it is recommended because it helps to separate the Hadoop installation from other software applications and user accounts running on the same machine (think: security, permissions, backups, etc). 
•This will add the user hduser and the group hadoop to your local machine: 
–$ sudo addgroup hadoop 
–$ sudo adduser --ingroup hadoop hduser
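A quick check that the account and group were created as expected (the numeric ids below are only illustrative): 
$ id hduser 
# expect something like: uid=1001(hduser) gid=1001(hadoop) groups=1001(hadoop)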
Configuring SSH 
•Hadoop requires SSH access to manage its nodes, i.e. remote machines plus your local machine if you want to use Hadoop on it. 
•For a single-node setup of Hadoop, we therefore need to configure SSH access to localhost for the hduser user we created in the previous slide. 
•Have SSH up and running on your machine and configured to allow SSH public-key authentication. http://ubuntuguide.org/ 
•Generate an SSH key for the hduser user. 
user@ubuntu:~$ su - hduser 
hduser@ubuntu:~$ ssh-keygen -t rsa -P "" 
Generating public/private rsa key pair. 
Enter file in which to save the key (/home/hduser/.ssh/id_rsa): 
Created directory '/home/hduser/.ssh'. 
Your identification has been saved in /home/hduser/.ssh/id_rsa. 
Your public key has been saved in /home/hduser/.ssh/id_rsa.pub. 
The key fingerprint is: 
9b:82:ea:58:b4:e0:35:d7:ff:19:66:a6:ef:ae:0e:d2 hduser@ubuntu 
The key's randomart image is: 
[...snipp...] 
hduser@ubuntu:~$
Configuring SSH 
•Second, you have to enable SSH access to your local machine with this newly created key. 
–hduser@ubuntu:~$ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys 
•The final step is to test the SSH setup by connecting to your local machine with the hduser user. 
•This step is also needed to save your local machine's host key fingerprint to the hduser user's known_hosts file. 
•If you have any special SSH configuration for your local machine, such as a non-standard SSH port, you can define host-specific SSH options in $HOME/.ssh/config (see man ssh_config for more information; a short example appears at the end of this slide). 
hduser@ubuntu:~$ ssh localhost 
The authenticity of host 'localhost (::1)' can't be established. 
RSA key fingerprint is d7:87:25:47:ae:02:00:eb:1d:75:4f:bb:44:f9:36:26. 
Are you sure you want to continue connecting (yes/no)? yes 
Warning: Permanently added 'localhost' (RSA) to the list of known hosts. 
Linux ubuntu 2.6.32-22-generic #33-Ubuntu SMP Wed Apr 28 13:27:30 UTC 2010 i686 GNU/Linux 
Ubuntu 10.04 LTS 
[...snipp...] 
hduser@ubuntu:~$
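For the non-standard-port case mentioned above, a minimal $HOME/.ssh/config entry could look like this (the port number 2222 is only an illustrative assumption): 
Host localhost 
    Port 2222 
    User hduser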
Disabling IPv6 
•One problem with IPv6 on Ubuntu is that using 0.0.0.0 for the various networking-related Hadoop configuration options will result in Hadoop binding to the IPv6 addresses. 
•To disable IPv6 on Ubuntu 10.04 LTS, open /etc/sysctl.conf in the editor of your choice and add the following lines to the end of the file: 
•You have to reboot your machine in order to make the changes take effect. 
•You can check whether IPv6 is enabled on your machine with the following command (after the reboot it should print 1, meaning disabled; 0 means IPv6 is still enabled): 
•You can also disable IPv6 only for Hadoop, as documented in HADOOP-3437. You can do so by adding the following line to conf/hadoop-env.sh: 
# disable ipv6 
net.ipv6.conf.all.disable_ipv6 = 1 
net.ipv6.conf.default.disable_ipv6 = 1 
net.ipv6.conf.lo.disable_ipv6 = 1 
$ cat /proc/sys/net/ipv6/conf/all/disable_ipv6 
export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
Hadoop Installation 
•You have to download Hadoop from the Apache Download Mirrors and extract the contents of the Hadoop package to a location of your choice. 
•Say /usr/local/hadoop. 
•Make sure to change the owner of all the files to the hduser user and hadoop group, for example: 
•Create a symlink from hadoop-xxxxx to hadoop 
$ cd /usr/local 
$ sudo tar xzf hadoop-xxxx.tar.gz 
$ sudo mv hadoop-xxxxx hadoop 
$ sudo chown -R hduser:hadoop hadoop
Update $HOME/.bashrc 
•Add the following lines to the end of the $HOME/.bashrc file of user hduser. 
•If you use a shell other than bash, you should of course update its appropriate configuration files instead of .bashrc. 
# Set Hadoop-related environment variables 
export HADOOP_HOME=/usr/local/hadoop 
# Set JAVA_HOME (we will also configure JAVA_HOME directly for Hadoop later on) 
export JAVA_HOME=/usr/lib/jvm/java-6-sun 
# Some convenient aliases and functions for running Hadoop-related commands 
unalias fs &> /dev/null 
alias fs="hadoop fs" 
unalias hls &> /dev/null 
alias hls="fs -ls"
Update $HOME/.bashrc 
# If you have LZO compression enabled in your Hadoop cluster and 
# compress job outputs with LZOP (not covered in this tutorial): 
# Conveniently inspect an LZOP compressed file from the command 
# line; run via: 
# 
# $ lzohead /hdfs/path/to/lzop/compressed/file.lzo 
# 
# Requires installed 'lzop' command. 
# 
lzohead() { 
hadoop fs -cat $1 | lzop -dc | head -1000 | less 
} 
# Add Hadoop bin/ directory to PATH 
export PATH=$PATH:$HADOOP_HOME/bin
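For the new variables and aliases to take effect in the current session, re-read the file (or simply open a new shell): 
$ source $HOME/.bashrc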
Configuration files 
•The $HADOOP_INSTALL/hadoop/conf directory contains the configuration files for Hadoop. These are: 
•hadoop-env.sh - This file contains some environment variable settings used by Hadoop. You can use these to affect some aspects of Hadoop daemon behavior, such as where log files are stored, the maximum amount of heap used, etc. The only variable you should need to change in this file is JAVA_HOME, which specifies the path to the Java installation used by Hadoop (Java 6 in our setup). 
•slaves - This file lists the hosts, one per line, where the Hadoop slave daemons (DataNodes and TaskTrackers) will run. By default this contains the single entry localhost. 
•hdfs-site.xml - This file contains site-specific settings for the HDFS daemons (NameNode, Secondary NameNode and DataNodes). The file is empty by default; properties placed here override the defaults in hdfs-default.xml, for example the block replication factor. 
•mapred-site.xml - This file contains site-specific settings for the Hadoop Map/Reduce daemons and jobs. The file is empty by default. Putting configuration properties in this file will override the default Map/Reduce settings (mapred-default.xml). Use this file to tailor the behavior of Map/Reduce on your site. 
•core-site.xml - This file contains site-specific settings for all Hadoop daemons and Map/Reduce jobs. This file is empty by default. Settings in this file override those in core-default.xml. This file should contain settings that must be respected by all servers and clients in a Hadoop installation, for instance, the location of the namenode (fs.default.name).
Configuration : Single node 
•hadoop-env.sh : 
–The only required environment variable we have to configure for Hadoop in this case is JAVA_HOME. 
–Open etc/hadoop/conf/hadoop-env.sh in the editor of your choice 
–set the JAVA_HOME environment variable to the Sun JDK/JRE 6 directory 
–export JAVA_HOME=/usr/lib/jvm/java-6-sun 
•conf/*-site.xml 
–We configure following: 
–core-site.xml 
•hadoop.tmp.dir 
•fs.default.name 
–mapred-site.xml 
•mapred.job.tracker 
–hdfs-site.xml 
•dfs.replication
Configure HDFS 
•We will configure the directory where Hadoop will store its data files, the network ports it listens to, etc. 
•Our setup will use Hadoop's Distributed File System, HDFS, even though our little "cluster" only contains our single local machine. 
•You can leave the settings below "as is" with the exception of the hadoop.tmp.dir variable, which you have to change to the directory of your choice. 
•We will use the directory /app/hadoop/tmp 
•Hadoop's default configurations use hadoop.tmp.dir as the base temporary directory both for the local file system and HDFS, so don't be surprised if you see Hadoop creating the specified directory automatically on HDFS at some later point. 
$ sudo mkdir -p /app/hadoop/tmp 
$ sudo chown hduser:hadoop /app/hadoop/tmp 
# ...and if you want to tighten up security, chmod from 755 to 750... 
$ sudo chmod 750 /app/hadoop/tmp
conf/core-site.xml 
<!--In: conf/core-site.xml --> 
<property> 
<name>hadoop.tmp.dir</name> 
<value>/app/hadoop/tmp</value> 
<description>A base for other temporary directories.</description> 
</property> 
<property> 
<name>fs.default.name</name> 
<value>hdfs://localhost:54310</value> 
<description>The name of the default file system. A URI whose 
scheme and authority determine the FileSystem implementation. The 
uri's scheme determines the config property (fs.SCHEME.impl) naming 
the FileSystem implementation class. The uri's authority is used to 
determine the host, port, etc. for a filesystem.</description> 
</property>
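Note that these <property> blocks must sit inside the file's top-level <configuration> element; a minimal complete conf/core-site.xml for this setup would therefore look roughly like: 
<?xml version="1.0"?> 
<configuration> 
  <property> 
    <name>hadoop.tmp.dir</name> 
    <value>/app/hadoop/tmp</value> 
  </property> 
  <property> 
    <name>fs.default.name</name> 
    <value>hdfs://localhost:54310</value> 
  </property> 
</configuration>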
conf/mapred-site.xml 
<!--In: conf/mapred-site.xml --> 
<property> 
<name>mapred.job.tracker</name> 
<value>localhost:54311</value> 
<description>The host and port that the MapReduce job tracker runs 
at. If "local", then jobs are run in-process as a single map 
and reduce task. 
</description> 
</property>
conf/hdfs-site.xml 
<!--In: conf/hdfs-site.xml --> 
<property> 
<name>dfs.replication</name> 
<value>1</value> 
<description>Default block replication. 
The actual number of replications can be specified when the file is created. 
The default is used if replication is not specified at create time. 
</description> 
</property>
Formatting the HDFS and Starting 
•To format the filesystem (which simply initializes the directory specified by the dfs.name.dir variable), run the command 
•hadoop namenode -format 
•Run start-all.sh: this will start up a NameNode, DataNode, JobTracker and a TaskTracker on your machine 
•Run stop-all.sh to stop all processes
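To verify that the daemons actually came up, jps (part of the Sun JDK) is a convenient check; on a healthy single-node setup you would expect output roughly like this (the process IDs are only illustrative): 
hduser@ubuntu:~$ jps 
1788 NameNode 
1938 DataNode 
2085 SecondaryNameNode 
2149 JobTracker 
2287 TaskTracker 
2349 Jps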
Download example input data 
•Create a directory inside /home/…/gutenberg 
•Download: 
–The Outline of Science, Vol. 1 (of 4) by J. Arthur Thomson http://www.gutenberg.org/ebooks/20417.txt.utf-8 
–The Notebooks of Leonardo Da Vinci http://www.gutenberg.org/cache/epub/5000/pg5000.txt 
–Ulysses by James Joyce http://www.gutenberg.org/cache/epub/4300/pg4300.txt 
•Copy local example data to HDFS 
–hdfs dfs -copyFromLocal gutenberg gutenberg 
•Check 
–hadoop dfs -ls gutenberg
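A sketch of the whole step, assuming wget is available and that ~/gutenberg is used as the local staging directory: 
$ mkdir ~/gutenberg && cd ~/gutenberg 
$ wget http://www.gutenberg.org/ebooks/20417.txt.utf-8 
$ wget http://www.gutenberg.org/cache/epub/5000/pg5000.txt 
$ wget http://www.gutenberg.org/cache/epub/4300/pg4300.txt 
$ cd ~ && hdfs dfs -copyFromLocal gutenberg gutenberg 
$ hadoop dfs -ls gutenberg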
Run the MapReduce job 
•Now, we run the WordCount example job 
•hadoop jar /usr/lib/hadoop/hadoop-xxxx-example.jar wordcount gutenberg gutenberg-out 
•This command will 
–read all the files in the HDFS directory gutenberg (i.e. /user/hduser/gutenberg in our setup), 
–process them, and 
–store the result in the HDFS directory gutenberg-out (/user/hduser/gutenberg-out) 
•Check if the result is successfully stored in the HDFS directory gutenberg-out 
–hdfs dfs -ls gutenberg-out 
•Retrieve the job result from HDFS 
–hdfs dfs -cat gutenberg-out/part-r-00000 
•Better: 
–hdfs dfs -cat gutenberg-out/part-r-00000 | sort -nk2,2 -r | less
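If you prefer to inspect the full result with local tools, you can also merge it out of HDFS first; a small sketch (the local path ~/gutenberg-out is just an assumption): 
$ mkdir ~/gutenberg-out 
$ hdfs dfs -getmerge gutenberg-out ~/gutenberg-out/wordcounts.txt 
$ head ~/gutenberg-out/wordcounts.txt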
Hadoop Web Interfaces 
•Hadoop comes with several web interfaces which are by default (see conf/hadoop-default.xml) available at these locations: 
•http://localhost:50030/ - web UI for MapReduce job tracker(s) 
•http://localhost:50060/ - web UI for task tracker(s) 
•http://localhost:50070/ - web UI for HDFS name node(s)
Cluster setup 
•Basic idea 
[Diagram: what we have done so far is two boxes (Box 1 and Box 2), each running as a single-node cluster with its own master; the target is one cluster with a master and a slave, connected through a switch on the LAN and reached via the gateway node using Bitvise Tunnelier SSH port forwarding.]
Calling by name 
•Now that you have two single-node clusters up and running, we will modify the Hadoop configuration to make 
•one Ubuntu box the ”master” (which will also act as a slave) and 
•the other Ubuntu box a ”slave”. 
•We will call the designated master machine just the master from now on and the slave-only machine the slave. 
•We will also give the two machines these respective hostnames in their networking setup, most notably in /etc/hosts. 
•If the hostnames of your machines are different (e.g. node01) then you must adapt the settings as appropriate.
Networking 
•Connect both machines via a single hub or switch and configure the network interfaces to use a common network such as 192.168.0.x/24. 
•To make it simple, 
•we will assign the IP address 192.168.0.1 to the master machine and 
•192.168.0.2 to the slave machine. 
•Update /etc/hosts on both machines with the following lines: 
# /etc/hosts (for master AND slave) 
192.168.0.1 master 
192.168.0.2 slave
SSH access 
•The hduser user on the master (aka hduser@master) must be able to connect a) to its own user account on the master (i.e. ssh master in this context, and not necessarily ssh localhost) and b) to the hduser user account on the slave (aka hduser@slave) via a password-less SSH login. 
•You just have to add hduser@master's public SSH key (which should be in $HOME/.ssh/id_rsa.pub) to the authorized_keys file of hduser@slave (in this user's $HOME/.ssh/authorized_keys). 
•ssh-copy-id -i $HOME/.ssh/id_rsa.pub hduser@slave 
•Verify that password-less access to all slaves from the master works: ssh hduser@slave and ssh hduser@master
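If ssh-copy-id is not available, the same result can be achieved manually; a sketch (it will prompt once for hduser@slave's password): 
hduser@master:~$ cat $HOME/.ssh/id_rsa.pub | ssh hduser@slave "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"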
How the final multi-node cluster will look
Naming again 
•The master node will run the “master” daemons for each layer: 
–NameNode for the HDFS storage layer, and 
–JobTracker for the MapReduce processing layer 
•Both machines will run the "slave" daemons: 
–DataNode for the HDFS layer, and 
–TaskTracker for the MapReduce processing layer 
•The "master" daemons are responsible for coordination and management of the "slave" daemons, while the latter will do the actual data storage and data processing work. 
•Typically one machine in the cluster is designated as the NameNode and another machine as the JobTracker, exclusively. 
•These are the actual "master nodes". 
•The rest of the machines in the cluster act as both DataNode and TaskTracker. 
•These are the slaves or “worker nodes”.
conf/masters (master only) 
•The conf/masters file defines on which machines Hadoop will start secondary NameNodes in our multi-node cluster. 
•In our case, this is just the master machine. 
•The primary NameNode and the JobTracker will always run on the machines on which you run the bin/start-dfs.sh and bin/start-mapred.sh scripts, respectively. 
•The primary NameNode and the JobTracker will be started on the same machine if you run bin/start-all.sh 
•On master, update conf/masters so that it looks like this: master
conf/slaves (master only) 
•This conf/slaves file lists the hosts, one per line, where the Hadoop slave daemons (DataNodes and TaskTrackers) will be run. 
•We want both the master box and the slave box to act as Hadoop slaves because we want both of them to store and process data. 
•On master, update conf/slaves so that it looks like this: 
•If you have additional slave nodes, just add them to the conf/slaves file, one per line (do this on all machines in the cluster). 
master 
slave
conf/*-site.xml (all machines) 
•You have to change the configuration files 
•conf/core-site.xml, 
•conf/mapred-site.xml and 
•conf/hdfs-site.xml 
•on ALL machines: 
–fs.default.name: The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. Set as hdfs://master:54310 
–mapred.job.tracker: The host and port that the MapReduce job tracker runs at. Set as master:54311 
–dfs.replication: Default block replication. Set as 2 
–mapred.local.dir: Determines where temporary MapReduce data is written. It also may be a list of directories. 
–mapred.map.tasks: As a rule of thumb, use 10x the number of slaves (i.e., number of TaskTrackers). 
–mapred.reduce.tasks: As a rule of thumb, use 2x the number of slave processors (i.e., number of TaskTrackers).
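Putting the first three settings together, the per-machine snippets would look roughly like this (inside each file's <configuration> element, as on the single-node slides): 
<!-- conf/core-site.xml --> 
<property> 
  <name>fs.default.name</name> 
  <value>hdfs://master:54310</value> 
</property> 
<!-- conf/mapred-site.xml --> 
<property> 
  <name>mapred.job.tracker</name> 
  <value>master:54311</value> 
</property> 
<!-- conf/hdfs-site.xml --> 
<property> 
  <name>dfs.replication</name> 
  <value>2</value> 
</property>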
Formatting the HDFS and Starting 
•To format the filesystem (which simply initializes the directory specified by the dfs.name.dir variable on the NameNode), run the command 
•hdfs namenode -format 
•Starting the multi-node cluster 
–Starting the cluster is done in two steps. 
–First, the HDFS daemons are started: start-dfs.sh 
•NameNode daemon is started on master, and 
•DataNode daemons are started on all slaves (here: master and slave) 
–Second, the MapReduce daemons are started: start-mapred.sh 
•JobTracker is started on master, and 
•TaskTracker daemons are started on all slaves (here: master and slave) 
•stop-mapred.sh followed by stop-dfs.sh to stop
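As on the single-node setup, jps gives a quick per-machine health check; with the daemon placement described on this slide you would expect roughly: 
hduser@master:~$ jps   # NameNode, SecondaryNameNode, DataNode, JobTracker, TaskTracker, Jps 
hduser@slave:~$ jps    # DataNode, TaskTracker, Jps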
End of session 
Day 1: Hadoop Deployment and Configuration - Single machine and a cluster 
Run the PiEstimator example: hadoop jar /usr/lib/hadoop/hadoop-xxxxx-example.jar pi 2 100000