INTERNSHIP REPORT
UNIQ TECHNOLOGIES
Submitted By: B. PAVITHRA
CANDIDATE NAME: B. PAVITHRA
Use of Eclipse:
Developed using Java, the Eclipse platform can be used to build rich client applications, integrated development environments, and other tools. Eclipse can be used as an IDE for any programming language for which a plugin is available.
String[] split(String regex): Returns an array of strings produced by splitting the input string around matches of the delimiting regular expression.
String[] split(String regex, int limit): The only difference from the variant above is that it limits the number of strings returned after the split.
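A minimal sketch of the two split variants described above (standard java.lang.String API; the sample string is illustrative):

```java
// Demonstrates String.split with and without a limit argument.
public class SplitDemo {
    public static void main(String[] args) {
        String csv = "a,b,c,d";

        // split(String regex): splits on every match of the delimiter
        String[] all = csv.split(",");
        System.out.println(all.length);   // prints 4 (parts: a, b, c, d)

        // split(String regex, int limit): returns at most `limit` parts;
        // the final part keeps the rest of the string un-split
        String[] two = csv.split(",", 2);
        System.out.println(two[1]);       // prints "b,c,d"
    }
}
```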
boolean hasNext(): Returns true if the iteration has more elements.
Object next(): Returns the next element in the iteration.
void remove(): Removes the current element. Throws IllegalStateException if remove() is called without a preceding call to next().
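The three Iterator methods above can be sketched together in one short loop (standard java.util API; the list contents are illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Demonstrates hasNext(), next(), and remove() on a java.util.Iterator.
public class IteratorDemo {
    public static void main(String[] args) {
        List<String> fruits = new ArrayList<>(Arrays.asList("apple", "banana", "cherry"));

        Iterator<String> it = fruits.iterator();
        while (it.hasNext()) {        // true while elements remain
            String f = it.next();     // returns the next element
            if (f.equals("banana")) {
                // removes the element returned by the last next();
                // calling remove() before any next() would throw IllegalStateException
                it.remove();
            }
        }
        System.out.println(fruits);   // prints [apple, cherry]
    }
}
```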
Apache Hadoop is a Java framework, so Java must be installed on the machine before Hadoop can run on the operating system. Hadoop supports every Java version greater than 5 (i.e. Java 1.5), so you can also try Java 6 or 7 instead of Java 8.
To verify your Java installation, run a command such as javac -version and check the reported version and installation paths:
javac 1.8.0_66
/usr/bin/javac
/usr/lib/jvm/java-8-oracle/jre/bin/java
Step 2:
Installing SSH
SSH ("Secure Shell") is a protocol for securely accessing one machine from another. Hadoop uses SSH to reach its slave nodes and to start and manage all HDFS and MapReduce daemons.
Now that we have installed SSH on the Ubuntu machine, we can connect to it remotely as well as connect from it to other machines.
Configuring SSH
Once SSH is installed on your machine, you can connect to other machines or allow other machines to connect to this one. Since we have only this single machine, we can try connecting to the same machine over SSH. To do this, we need to append the generated RSA public key (i.e. id_rsa.pub) to the authorized_keys file of the SSH installation on this machine with the following command,
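The key-generation and copy steps referred to above typically look like the following sketch, assuming a default OpenSSH setup with keys under ~/.ssh:

```shell
# Generate an RSA key pair with an empty passphrase (default location ~/.ssh/id_rsa)
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
# Append the public key to authorized_keys so this machine accepts logins from itself
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# Verify that passwordless login to localhost now works
ssh -o StrictHostKeyChecking=no localhost exit
```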
Step 3:
Installation Steps
Step 4:
Step 5:
Format Namenode
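The format step this heading refers to is usually a single command in Hadoop 2.x, run once before the daemons are first started (note that it erases any existing HDFS metadata):

```shell
# Format the HDFS NameNode; only run on a fresh installation
hdfs namenode -format
```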
hadoop@hadoop:~$ start-dfs.sh
hadoop@hadoop:~$ start-yarn.sh
Instead of both of the above commands you can also use start-all.sh, but it is now deprecated, so it is not recommended for Hadoop operations.
Track/Monitor/Verify
hadoop@hadoop:~$ jps
9026 NodeManager
7348 NameNode
9766 Jps
8887 ResourceManager
7507 DataNode
5647 SecondaryNameNode
Step 6:
If you wish to track Hadoop MapReduce as well as HDFS, you can explore the Hadoop web views of the ResourceManager and the NameNode, which are commonly used by Hadoop administrators. Open your default browser and visit the following links.
ResourceManager (by default at http://localhost:8088)
NameNode (by default at http://localhost:50070 in Hadoop 2.x)
Step 7:
hadoop@hadoop:~$ cd /usr/local/hadoop
TRAINER NAME: VIGNESH
CODE:
OUTPUT:
Project Coordinator
INTERNSHIP REPORT
UNIQ TECHNOLOGIES
CANDIDATE NAME: S. SURABI SRI DHANYA
TRAINER NAME: VIGNESH
(Report content identical to the first report above.)
INTERNSHIP REPORT
UNIQ TECHNOLOGIES
CANDIDATE NAME: S. NIVETHA
TRAINER NAME: VIGNESH
(Report content identical to the first report above.)
INTERNSHIP REPORT
UNIQ TECHNOLOGIES
CANDIDATE NAME: S. REKHA SRI
TRAINER NAME: VIGNESH
(Report content identical to the first report above.)
INTERNSHIP REPORT
UNIQ TECHNOLOGIES
Submitted By: R. SAHITHI
CANDIDATE NAME: R. SAHITHI
TRAINER NAME: VIGNESH
(Report content identical to the first report above.)
Project Coordinator