HDFS NameNode Format and Basic HDFS Commands

This document outlines the steps for starting a Hadoop installation on Windows and using basic HDFS commands: format the NameNode, start the HDFS and YARN daemons, and use commands such as hdfs dfs -put and -copyFromLocal to copy files from the local file system into HDFS, along with -ls, -cat, -get, -cp, -rm, and -tail to work with files in HDFS.

Steps:

1. Format the NameNode

Formatting the NameNode is done once, when Hadoop is first installed, and not each time the Hadoop file system is started; reformatting an existing installation will delete all the data inside HDFS. Run this command:

C:\Users\HP>hdfs namenode -format

2. Change the directory in cmd to the sbin folder of the Hadoop directory with this command:

C:\Users\HP> cd C:\hadoop-3.1.3\hadoop-3.1.3\sbin

3. Start the NameNode and DataNode with this command:

C:\hadoop-3.1.3\hadoop-3.1.3\sbin>start-dfs.cmd

Two more cmd windows will open, one for the NameNode and one for the DataNode (do not close these windows).

4. Now start YARN with this command:

C:\hadoop-3.1.3\hadoop-3.1.3\sbin>start-yarn.cmd

Two more windows will open, one for the YARN ResourceManager and one for the YARN NodeManager (do not close these windows).

Note: Make sure all four Apache Hadoop daemon windows are up and running. You can check with the jps command, which lists the Java processes currently running:

C:\hadoop-3.1.3\hadoop-3.1.3\sbin>jps
5. Working with HDFS

Create a small text file named new2.txt in the local file system and type some content into it. We will put this file into HDFS using the hdfs command-line tool.
6. To create a directory named 'sample' in HDFS, use the following command:

C:\hadoop-3.1.3\hadoop-3.1.3\sbin>hdfs dfs -mkdir /sample

7. To verify that the directory was created, use the ls command, which lists the files present in HDFS:

C:\hadoop-3.1.3\hadoop-3.1.3\sbin>hdfs dfs -ls /
8. To copy the text file new2.txt from the local file system to the sample folder just created in HDFS, use the copyFromLocal command:

C:\hadoop-3.1.3\hadoop-3.1.3\sbin>hdfs dfs -copyFromLocal C:\new2.txt /sample

Alternatively, use put, which also copies a file from the local file system to the Hadoop file system:

C:\hadoop-3.1.3\hadoop-3.1.3\sbin>hdfs dfs -put C:\new2.txt /sam3
9. To verify that the file was copied to the folder, use the ls command with the folder name, which lists the files in that folder:

C:\hadoop-3.1.3\hadoop-3.1.3\sbin>hdfs dfs -ls /sample

10. To view the contents of the copied file, use the cat command:

C:\hadoop-3.1.3\hadoop-3.1.3\sbin>hdfs dfs -cat /sample/new2.txt

11. To copy a file from HDFS to a local directory, use the get command:

C:\hadoop-3.1.3\hadoop-3.1.3\sbin>hdfs dfs -get /sample/new2.txt C:\testhadoop
12. Create one more directory named sam.

To copy the file new2.txt from the sample1 directory to the sam directory, use the cp command:

C:\hadoop-3.1.3\hadoop-3.1.3\sbin>hdfs dfs -cp /sample1/new2.txt /sam

Then, to test whether it has been copied, use the ls command as given below:

C:\hadoop-3.1.3\hadoop-3.1.3\sbin>hdfs dfs -ls /sam

13. The mv command moves files or directories from a source to a destination within HDFS.

Create another directory called sam2 using the mkdir command, then move the directory sample1 into sam2:

C:\hadoop-3.1.3\hadoop-3.1.3\sbin>hdfs dfs -mv /sample1 /sam2

14. The get command copies a file or directory from the Hadoop file system to the local file system:

C:\hadoop-3.1.3\hadoop-3.1.3\sbin>hdfs dfs -get /sam/new2.txt c:\copyhadoop

The copyToLocal command copies a file from HDFS to the local file system (same as the get command):

C:\hadoop-3.1.3\hadoop-3.1.3\sbin>hdfs dfs -copyToLocal /sam/new2.txt c:\copyhadoop1

15. rm: deletes a file from HDFS.

C:\hadoop-3.1.3\hadoop-3.1.3\sbin>hdfs dfs -rm /sample/cholan.txt

Deleted /sample/cholan.txt

16. expunge: empties the trash.

C:\hadoop-3.1.3\hadoop-3.1.3\sbin>hdfs dfs -expunge


17. setrep: changes the replication factor of the file specified in the path to a specific count instead of the default replication factor.

C:\hadoop-3.1.3\hadoop-3.1.3\sbin>hdfs dfs -setrep 2 /sample/cholan.txt

Replication 2 set: /sample/cholan.txt

18. du prints a summary of the disk usage of all files/directories in the path.

C:\hadoop-3.1.3\hadoop-3.1.3\sbin>hdfs dfs -du -s /sample/cholan.txt

20963 41926 /sample/cholan.txt
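The two columns in the du output can be read as the file's logical size and the total disk space consumed across replicas (size × replication factor, here the 2 set with setrep above). A minimal sketch of that arithmetic:

```python
# Sketch: how the two 'du' columns relate (values from the output above).
# First column = logical file size; second = bytes consumed across replicas.
def du_columns(file_size_bytes, replication):
    """Return (logical size, total bytes consumed across all replicas)."""
    return file_size_bytes, file_size_bytes * replication

print(du_columns(20963, 2))  # (20963, 41926) -- matches the 20963 41926 above
```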

19. df shows the capacity, size, and free space available on the HDFS file system.

C:\hadoop-3.1.3\hadoop-3.1.3\sbin>hdfs dfs -df

Filesystem Size Used Available Use%

hdfs://localhost:9000 104273539072 21454 46568419328 0%

The -h option formats the file size in the human-readable format.

C:\hadoop-3.1.3\hadoop-3.1.3\sbin>hdfs dfs -df -h

Filesystem Size Used Available Use%

hdfs://localhost:9000 97.1 G 21.0 K 43.4 G 0%
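The -h figures follow base-1024 rounding. A rough sketch that reproduces them from the raw byte counts shown above (an illustrative helper, not part of Hadoop):

```python
# Illustrative base-1024 rounding matching the 'df -h' column above.
def human_readable(n_bytes):
    for unit in ("B", "K", "M", "G", "T", "P"):
        if n_bytes < 1024:
            return f"{n_bytes:.1f} {unit}"
        n_bytes /= 1024
    return f"{n_bytes:.1f} E"

print(human_readable(104273539072))  # 97.1 G
print(human_readable(21454))         # 21.0 K
print(human_readable(46568419328))   # 43.4 G
```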

20. The fsck command is used to check the health of HDFS:

C:\hadoop-3.1.3\hadoop-3.1.3\sbin>hdfs fsck /sample -files

Output:
Connecting to namenode via http://localhost:9870/fsck?ugi=HP&files=1&path=%2Fsample
FSCK started by HP (auth:SIMPLE) from /127.0.0.1 for path /sample at Mon Jul 26 11:09:40 IST 2021
/sample <dir>
/sample/cholan.txt 20963 bytes, replicated: replication=2, 1 block(s): Under replicated BP-1297826349-10.3.6.109-1627276078364:blk_1073741826_1002. Target Replicas is 2 but found 1 live replica(s), 0 decommissioned replica(s), 0 decommissioning replica(s).

Status: HEALTHY
Number of data-nodes: 1
Number of racks: 1
Total dirs: 1
Total symlinks: 0

Replicated Blocks:
Total size: 20963 B
Total files: 1
Total blocks (validated): 1 (avg. block size 20963 B)
Minimally replicated blocks: 1 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 1 (100.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 1
Average block replication: 1.0
Missing blocks: 0
Corrupt blocks: 0
Missing replicas: 1 (50.0 %)

Erasure Coded Block Groups:


Total size: 0 B
Total files: 0
Total block groups (validated): 0
Minimally erasure-coded block groups: 0
Over-erasure-coded block groups: 0
Under-erasure-coded block groups: 0
Unsatisfactory placement block groups: 0
Average block group size: 0.0
Missing block groups: 0
Corrupt block groups: 0
Missing internal blocks: 0
FSCK ended at Mon Jul 26 11:09:40 IST 2021 in 74 milliseconds

The filesystem under path '/sample' is HEALTHY
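The "Under-replicated blocks" and "Missing replicas" figures follow from running setrep 2 on a single-DataNode cluster: the target is 2 replicas but only 1 live replica can exist. A sketch of that bookkeeping (for illustration only):

```python
# Sketch of the fsck under-replication arithmetic seen above: setrep 2
# asked for 2 replicas, but this single-DataNode cluster holds only 1.
def missing_replicas(target, live):
    missing = target - live
    return missing, 100.0 * missing / target

print(missing_replicas(2, 1))  # (1, 50.0) -> "Missing replicas: 1 (50.0 %)"
```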

21. The tail command shows the last 1 KB of a file on the console (stdout):


C:\hadoop-3.1.3\hadoop-3.1.3\sbin>hdfs dfs -tail /sample/cholan.txt

2021-07-26 11:16:09,197 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
்தளூர்ச் சாலைக்
களமறூத்தருளி வேங்கை

உடையார் ஸ்ரீராஜராஜ
Excerpts of Rajaraja's inscription from Brihadisvara Temple in Thanjavur (first line in every
image)
Rajaraja recorded all the grants made to the Thanjavur temple and his achievements. He also
preserved the records of his predecessors. An inscription of his reign found at Tirumalavadi
records an order of the king to the effect that the central shrine of the Vaidyanatha temple at the
place should be rebuilt and that, before pulling down the walls, the inscriptions engraved on
them should be copied in a book. The records were subsequently re-engraved on the walls from
the book after the rebuilding was finished.[92]
Another inscription from Gramardhanathesvara temple in South Arcot district dated in the
seventh year of the king refers to the fifteenth year of his predecessor that is Uttama Choladeva
described therein as the son of Sembiyan-Madeviyar.[93]
The -f option shows appended data as the file grows:
C:\hadoop-3.1.3\hadoop-3.1.3\sbin>hdfs dfs -tail -f /sample/cholan.txt
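What -tail does can be sketched locally: seek to 1 KB before the end of the file and read from there (a local file stands in for the HDFS path here):

```python
# A minimal local sketch of what 'hdfs dfs -tail' does: read only the
# last 1 KB of a file rather than the whole thing.
import os

def tail_1kb(path):
    with open(path, "rb") as f:
        f.seek(0, os.SEEK_END)
        size = f.tell()
        f.seek(max(0, size - 1024))
        return f.read()
```

For example, tail_1kb("new2.txt") returns at most 1024 bytes from the end of the file, which is essentially what the command above prints.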

Shutdown of the YARN and HDFS daemons

To stop the services, run the following commands one by one:

C:\hadoop-3.1.3\hadoop-3.1.3\sbin>stop-yarn.cmd
C:\hadoop-3.1.3\hadoop-3.1.3\sbin>stop-dfs.cmd

To resolve the ResourceManager shutdown problem:

1. Use Java version 1.8.x_xxx. Verify the Java version in a command prompt: >java -version
2. Copy "hadoop-yarn-server-timelineservice-3.x.x" from ~\hadoop-3.x.x\share\hadoop\yarn\timelineservice to the ~\hadoop-3.x.x\share\hadoop\yarn folder.

You also need to add external JARs for the packages that are imported. Download the JAR packages according to your Hadoop version:

https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-common

https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-mapreduce-client-core
