Hadoop-HDFS Summary (Part 2)

HDFS Shell Operations

  1. Basic syntax

bin/hadoop fs <specific command>   OR   bin/hdfs dfs <specific command>

dfs is an implementation class of fs.
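
For example, assuming the cluster used throughout the examples below, the following two invocations are equivalent:

[root@bigdata111 hadoop-2.8.4]# bin/hadoop fs -ls /
[root@bigdata111 hadoop-2.8.4]# bin/hdfs dfs -ls /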
2. Summary of commands

Usage: hadoop fs [generic options]
 [-appendToFile <localsrc> ... <dst>]
 [-cat [-ignoreCrc] <src> ...]
 [-checksum <src> ...]
 [-chgrp [-R] GROUP PATH...]
 [-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
 [-chown [-R] [OWNER][:[GROUP]] PATH...]
 [-copyFromLocal [-f] [-p] [-l] [-d] <localsrc> ... <dst>]
 [-copyToLocal [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
 [-count [-q] [-h] [-v] [-t [<storage type>]] [-u] [-x] <path> ...]
 [-cp [-f] [-p | -p[topax]] [-d] <src> ... <dst>]
 [-createSnapshot <snapshotDir> [<snapshotName>]]
 [-deleteSnapshot <snapshotDir> <snapshotName>]
 [-df [-h] [<path> ...]]
 [-du [-s] [-h] [-x] <path> ...]
 [-expunge]
 [-find <path> ... <expression> ...]
 [-get [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
 [-getfacl [-R] <path>]
 [-getfattr [-R] {-n name | -d} [-e en] <path>]
 [-getmerge [-nl] [-skip-empty-file] <src> <localdst>]
 [-help [cmd ...]]
 [-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [<path> ...]]
 [-mkdir [-p] <path> ...]
 [-moveFromLocal <localsrc> ... <dst>]
 [-moveToLocal <src> <localdst>]
 [-mv <src> ... <dst>]
 [-put [-f] [-p] [-l] [-d] <localsrc> ... <dst>]
 [-renameSnapshot <snapshotDir> <oldName> <newName>]
 [-rm [-f] [-r|-R] [-skipTrash] [-safely] <src> ...]
 [-rmdir [--ignore-fail-on-non-empty] <dir> ...]
 [-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
 [-setfattr {-n name [-v value] | -x name} <path>]
 [-setrep [-R] [-w] <rep> <path> ...]
 [-stat [format] <path> ...]
 [-tail [-f] <file>]
 [-test -[defsz] <path>]
 [-text [-ignoreCrc] <src> ...]
 [-touchz <path> ...]
 [-truncate [-w] <length> <path> ...]
 [-usage [cmd ...]]
  3. Specific command practice

(1) Start the Hadoop cluster

[root@bigdata111 hadoop-2.8.4]# sbin/start-dfs.sh
[root@bigdata111 hadoop-2.8.4]# sbin/start-yarn.sh
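
To verify that the daemons came up, jps can be run on each node; depending on the cluster layout, it should list processes such as NameNode, DataNode, ResourceManager and NodeManager:

[root@bigdata111 hadoop-2.8.4]# jps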

(2) -help: print help for a command

[root@bigdata111 hadoop-2.8.4]# hadoop fs -help rm
-rm [-f] [-r|-R] [-skipTrash] [-safely] <src> ... :
  Delete all files that match the specified file pattern. Equivalent to the Unix
  command "rm <src>"
                                                                                 
  -f          If the file does not exist, do not display a diagnostic message or 
              modify the exit status to reflect an error.                        
  -[rR]       Recursively deletes directories.                                   
  -skipTrash  option bypasses trash, if enabled, and immediately deletes <src>.  
  -safely     option requires safety confirmation, if enabled, requires          
              confirmation before deleting large directory with more than        
              <hadoop.shell.delete.limit.num.files> files. Delay is expected when
              walking over large directory recursively to count the number of    
              files to be deleted before the confirmation.  

(3) -ls: display directory information

[root@bigdata111 hadoop-2.8.4]# hadoop fs -ls /
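
Per the command summary above, -R lists directories recursively and -h prints human-readable sizes; for example:

[root@bigdata111 hadoop-2.8.4]# hadoop fs -ls -R -h /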

(4) -moveFromLocal: cut and paste a file from the local file system to HDFS

[root@bigdata111 hadoop-2.8.4]# touch test.txt
[root@bigdata111 hadoop-2.8.4]# hadoop fs -moveFromLocal ./test.txt /opt/software
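
To confirm the move, list the target directory in HDFS (this assumes /opt/software already exists in HDFS); test.txt should show up there and be gone from the local directory:

[root@bigdata111 hadoop-2.8.4]# hadoop fs -ls /opt/software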

(5) -appendToFile: append a local file to the end of a file that already exists in HDFS

[root@bigdata111 hadoop-2.8.4]# touch test1.txt
[root@bigdata111 hadoop-2.8.4]# hadoop fs -appendToFile test1.txt /opt/software/test.txt
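
If test1.txt is empty the append adds nothing visible, so for the demo it helps to put a line into the local file first (the content here is just an example) and append again:

[root@bigdata111 hadoop-2.8.4]# echo "hello hdfs" > test1.txt
[root@bigdata111 hadoop-2.8.4]# hadoop fs -appendToFile test1.txt /opt/software/test.txt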

(6) -cat: display file content

[root@bigdata111 hadoop-2.8.4]# hadoop fs -cat /opt/software/test.txt

(7) -copyFromLocal (-put): copy a file from the local file system to HDFS

[root@bigdata111 hadoop-2.8.4]# hadoop fs -copyFromLocal test.txt /
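
-put behaves identically and is the shorter, more common form:

[root@bigdata111 hadoop-2.8.4]# hadoop fs -put test.txt /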

(8) -copyToLocal (-get): copy a file from HDFS to the local file system

[root@bigdata111 hadoop-2.8.4]# hadoop fs -copyToLocal /opt/software/test.txt ./
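
Likewise, -get is the shorter alias; here the file lands in the current local directory:

[root@bigdata111 hadoop-2.8.4]# hadoop fs -get /opt/software/test.txt ./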

(9) -setrep: set the replication factor of a file in HDFS

[root@bigdata111 hadoop-2.8.4]# hadoop fs -setrep 10 /opt/software/test.txt

The replication factor set here is only recorded in the NameNode's metadata; whether that many replicas actually exist depends on the number of DataNodes. With only 3 DataNodes, the maximum is 3 replicas; the replication factor can only reach 10 once the number of nodes grows to 10.
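
The replication factor recorded for a file can be checked with -stat and its %r format specifier (both appear in the command summary above); it prints the value stored by the NameNode, whether or not that many replicas physically exist yet:

[root@bigdata111 hadoop-2.8.4]# hadoop fs -stat %r /opt/software/test.txt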
