How to Install a Single-Node Hadoop Cluster
By Kopaleishvili Valeri
Updated 04/20/2015
Assumptions
1. You’re running 64-bit Windows
2. Your laptop has more than 4 GB of RAM
Download List (No specific order)
 VMWare Player – allows you to run virtual machines with different operating systems
(www.dropbox.com/s/o4773s7mg8l2nox/VMWare-player-5.0.2-1031769.exe)
 Ubuntu 12.04 LTS – Linux operating system with a nice user interface
(www.dropbox.com/s/taeb6jault5siwi/ubuntu-12.04.2-desktop-amd64.iso)
Instructions to Install Hadoop
The next few steps cover the prerequisites for setting up the Hadoop environment.
1. Install VMWare Player
2. Create a new virtual machine
3. Point the installer disc image to the ISO file (Ubuntu) that you just downloaded
4. User name should be hduser
5. Hard disk: 40 GB (more is better, but you want to leave some space for your Windows machine)
6. Customize hardware
a. Memory: 2 GB RAM (more is better, but you want to leave some for your Windows machine)
b. Processors: 2 (more is better, but you want to leave some for your Windows machine)
7. Launch your virtual machine (all the instructions after this step will be performed in Ubuntu)
8. Login to hduser
9. Open a terminal window with Ctrl + Alt + T (you will use this keyboard shortcut a lot)
10. Install Java JDK 7
a. Download the Java JDK (https://www.dropbox.com/s/h6bw3tibft3gs17/jdk-7u21-linux-x64.tar.gz)
b. Unzip the file
tar -xvf jdk-7u21-linux-x64.tar.gz
c. Now move the JDK 7 directory to /usr/lib
sudo mkdir -p /usr/lib/jvm
sudo mv ./jdk1.7.0 /usr/lib/jvm/jdk1.7.0
(the directory extracted from jdk-7u21 may be named jdk1.7.0_21; if so, adjust the paths here and in the following steps accordingly)
d. Now run
sudo update-alternatives --install "/usr/bin/java" "java" "/usr/lib/jvm/jdk1.7.0/bin/java" 1
sudo update-alternatives --install "/usr/bin/javac" "javac" "/usr/lib/jvm/jdk1.7.0/bin/javac" 1
sudo update-alternatives --install "/usr/bin/javaws" "javaws" "/usr/lib/jvm/jdk1.7.0/bin/javaws" 1
e. Correct the file ownership and the permissions of the executables:
sudo chmod a+x /usr/bin/java
sudo chmod a+x /usr/bin/javac
sudo chmod a+x /usr/bin/javaws
sudo chown -R root:root /usr/lib/jvm/jdk1.7.0
f. Check the version of your new JDK 7 installation:
java -version
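If java -version does not report the JDK 7 you just installed, another Java installation is probably still selected. The alternatives system can show and, if necessary, switch the active Java; for example:
update-alternatives --display java
sudo update-alternatives --config java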
11. Install SSH Server
sudo apt-get install openssh-client
sudo apt-get install openssh-server
12. Configure SSH
su - hduser
ssh-keygen -t rsa -P ""
cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
ssh localhost
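The first ssh localhost will ask you to accept the host key; answer yes. To confirm that key-based (passwordless) login works, a check like the following should print ok without asking for a password:
ssh -o BatchMode=yes localhost echo ok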
13. Disable IPv6 – open /etc/sysctl.conf for editing, either with gedit (launch it from the Alt + F2 run dialog) or with vi in a terminal
sudo gedit /etc/sysctl.conf
OR
cd /etc/
vi sysctl.conf
14. Add the following lines to the bottom of the file
# disable ipv6
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
15. Save the file and close it
16. Restart your Ubuntu (using command : sudo reboot)
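After the reboot you can verify that IPv6 is really off; the check below should print 1 (alternatively, sudo sysctl -p applies the new settings without a reboot):
cat /proc/sys/net/ipv6/conf/all/disable_ipv6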
The next steps explain the installation of Hadoop itself.
17. Download Apache Hadoop 1.2.1 (http://fossies.org/linux/misc/hadoop-1.2.1.tar.gz/) and store it in the
Downloads folder
18. Unzip the file (open up a terminal window), create the hadoop user group, and move the extracted directory to /usr/local.
cd Downloads
sudo tar xzf hadoop-1.2.1.tar.gz
cd /usr/local/
sudo mkdir -p hadoop
sudo mv /home/hduser/Downloads/hadoop-1.2.1 hadoop/
sudo addgroup hadoop
sudo chown -R hduser:hadoop hadoop
19. Open your .bashrc file for editing (from the Alt + F2 run dialog or in a terminal)
gedit ~/.bashrc
OR
vi ~/.bashrc
20. Add the following lines to the bottom of the file as shown below:
# Set Hadoop-related environment variables
export HADOOP_HOME=/usr/local/hadoop/hadoop-1.2.1
export PIG_HOME=/usr/local/pig
export PIG_CLASSPATH=/usr/local/hadoop/hadoop-1.2.1/conf
# Set JAVA_HOME (we will also configure JAVA_HOME directly for Hadoop later on)
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0
# Some convenient aliases and functions for running Hadoop-related commands
unalias fs &> /dev/null
alias fs="hadoop fs"
unalias hls &> /dev/null
alias hls="fs -ls"
# If you have LZO compression enabled in your Hadoop cluster and
# compress job outputs with LZOP (not covered in this tutorial):
# Conveniently inspect an LZOP compressed file from the command
# line; run via:
#
# $ lzohead /hdfs/path/to/lzop/compressed/file.lzo
#
# Requires installed 'lzop' command.
#
lzohead () {
hadoop fs -cat $1 | lzop -dc | head -1000 | less
}
# Add Hadoop bin/ directory to PATH
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$PIG_HOME/bin
21. Save the .bashrc file and close it
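The new variables only take effect in new shells. To pick them up in the current terminal and confirm they point where you expect, something like:
source ~/.bashrc
echo $HADOOP_HOME
echo $JAVA_HOME
hadoop version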
22. Run
sudo gedit /usr/local/hadoop/hadoop-1.2.1/conf/hadoop-env.sh
OR
vi /usr/local/hadoop/hadoop-1.2.1/conf/hadoop-env.sh
23. Add the following lines
# The java implementation to use. Required.
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0
24. Save and close file
25. In the terminal window, create a directory and set the required ownerships and permissions
sudo mkdir -p /app/hadoop/tmp
sudo chown hduser:hadoop /app/hadoop/tmp
sudo chmod 750 /app/hadoop/tmp
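A quick sanity check that the ownership and mode are what Hadoop expects:
ls -ld /app/hadoop/tmp
# should show something like: drwxr-x--- 2 hduser hadoop ... /app/hadoop/tmp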
26. Run
sudo gedit /usr/local/hadoop/hadoop-1.2.1/conf/core-site.xml
OR
vi /usr/local/hadoop/hadoop-1.2.1/conf/core-site.xml
27. Add the following between the <configuration> … </configuration> tags
<property>
<name>hadoop.tmp.dir</name>
<value>/app/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
<description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>
28. Save and close file
29. Run
sudo gedit /usr/local/hadoop/hadoop-1.2.1/conf/mapred-site.xml
OR
vi /usr/local/hadoop/hadoop-1.2.1/conf/mapred-site.xml
30. Add the following between the <configuration> … </configuration> tags
<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
<description>The host and port that the MapReduce job tracker runs
at. If "local", then jobs are run in-process as a single map
and reduce task.
</description>
</property>
31. Save and close file
32. Run
sudo gedit /usr/local/hadoop/hadoop-1.2.1/conf/hdfs-site.xml
OR
vi /usr/local/hadoop/hadoop-1.2.1/conf/hdfs-site.xml
33. Add the following between the <configuration> … </configuration> tags
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified in create time.
</description>
</property>
The next steps prepare HDFS after installation and run a word-count job.
34. Format the HDFS
/usr/local/hadoop/hadoop-1.2.1/bin/hadoop namenode -format
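Formatting creates (and wipes) the NameNode metadata under hadoop.tmp.dir, so only run it on a fresh install. A successful format ends with a "has been successfully formatted" message, and the metadata directory should now exist, e.g.:
ls /app/hadoop/tmp/dfs/name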
35. Start Hadoop services with the following command:
/usr/local/hadoop/hadoop-1.2.1/bin/start-all.sh
36. A nifty tool for checking whether the expected Hadoop processes are running is jps
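Run jps in a terminal; with all services up on a single node you should see roughly the following (the process IDs shown here are only illustrative):
jps
# 2287 NameNode
# 2349 DataNode
# 2421 SecondaryNameNode
# 2628 JobTracker
# 2700 TaskTracker
# 2750 Jps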
37. Restart Ubuntu and log in again (sudo reboot)
Run a Simple MapReduce Program
1. Download Datasets :
www.gutenberg.org/ebooks/20417
www.gutenberg.org/ebooks/5000
www.gutenberg.org/ebooks/4300
2. Download each ebook as a plain-text file (Plain Text UTF-8 encoding) and store the files in a local temporary
directory of your choice, for example /tmp/gutenberg
ls -l /tmp/gutenberg/
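If you prefer to fetch the books from the command line, a sketch follows; the exact file URLs are assumptions based on Project Gutenberg's usual layout, so fall back to the browser links above if they do not resolve:
mkdir -p /tmp/gutenberg
cd /tmp/gutenberg
wget https://www.gutenberg.org/cache/epub/20417/pg20417.txt
wget https://www.gutenberg.org/cache/epub/5000/pg5000.txt
wget https://www.gutenberg.org/cache/epub/4300/pg4300.txt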
3. Copy the files from the local file system to Hadoop's HDFS.
hduser@ubuntu:/usr/local/hadoop/hadoop-1.2.1$ bin/hadoop dfs -copyFromLocal /tmp/gutenberg
/user/hduser/gutenberg
hduser@ubuntu:/usr/local/hadoop/hadoop-1.2.1$ bin/hadoop dfs -ls /user/hduser
hduser@ubuntu:/usr/local/hadoop/hadoop-1.2.1$ bin/hadoop dfs -ls /user/hduser/gutenberg
4. Run the WordCount example job
hduser@ubuntu:/usr/local/hadoop/hadoop-1.2.1$ bin/hadoop jar hadoop*examples*.jar wordcount
/user/hduser/gutenberg /user/hduser/gutenberg-output
Below is the output of the running job. This information is useful because it shows the status of the MapReduce job as it
runs.
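When the job finishes, the results can be inspected in HDFS or copied back to the local file system; a sketch (the output file is usually named part-r-00000 for this example, but check with -ls if it differs):
hduser@ubuntu:/usr/local/hadoop/hadoop-1.2.1$ bin/hadoop dfs -ls /user/hduser/gutenberg-output
hduser@ubuntu:/usr/local/hadoop/hadoop-1.2.1$ bin/hadoop dfs -cat /user/hduser/gutenberg-output/part-r-00000 | head -20
hduser@ubuntu:/usr/local/hadoop/hadoop-1.2.1$ bin/hadoop dfs -getmerge /user/hduser/gutenberg-output /tmp/gutenberg-output.txt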
The figures below show the admin web interfaces for checking job status and browsing HDFS files.
The JobTracker web UI is available by default at http://localhost:50030/ (the NameNode web UI for browsing HDFS is at http://localhost:50070/).