Hadoop HDFS Summary (Part 2)

Shell Operations on HDFS

  1. Basic syntax
bin/hadoop fs <command>   OR   bin/hdfs dfs <command>

dfs is the implementation class of fs: hadoop fs is the generic filesystem shell (it works with any Hadoop-supported filesystem), while hdfs dfs operates on HDFS specifically.
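
As a quick equivalence check (assuming the cluster used throughout this series is running), either entry point can list the HDFS root:

[root@bigdata111 hadoop-2.8.4]# bin/hadoop fs -ls /
[root@bigdata111 hadoop-2.8.4]# bin/hdfs dfs -ls /
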
2. Command summary

Usage: hadoop fs [generic options]
 [-appendToFile <localsrc> ... <dst>]
 [-cat [-ignoreCrc] <src> ...]
 [-checksum <src> ...]
 [-chgrp [-R] GROUP PATH...]
 [-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
 [-chown [-R] [OWNER][:[GROUP]] PATH...]
 [-copyFromLocal [-f] [-p] [-l] [-d] <localsrc> ... <dst>]
 [-copyToLocal [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
 [-count [-q] [-h] [-v] [-t [<storage type>]] [-u] [-x] <path> ...]
 [-cp [-f] [-p | -p[topax]] [-d] <src> ... <dst>]
 [-createSnapshot <snapshotDir> [<snapshotName>]]
 [-deleteSnapshot <snapshotDir> <snapshotName>]
 [-df [-h] [<path> ...]]
 [-du [-s] [-h] [-x] <path> ...]
 [-expunge]
 [-find <path> ... <expression> ...]
 [-get [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
 [-getfacl [-R] <path>]
 [-getfattr [-R] {-n name | -d} [-e en] <path>]
 [-getmerge [-nl] [-skip-empty-file] <src> <localdst>]
 [-help [cmd ...]]
 [-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [<path> ...]]
 [-mkdir [-p] <path> ...]
 [-moveFromLocal <localsrc> ... <dst>]
 [-moveToLocal <src> <localdst>]
 [-mv <src> ... <dst>]
 [-put [-f] [-p] [-l] [-d] <localsrc> ... <dst>]
 [-renameSnapshot <snapshotDir> <oldName> <newName>]
 [-rm [-f] [-r|-R] [-skipTrash] [-safely] <src> ...]
 [-rmdir [--ignore-fail-on-non-empty] <dir> ...]
 [-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
 [-setfattr {-n name [-v value] | -x name} <path>]
 [-setrep [-R] [-w] <rep> <path> ...]
 [-stat [format] <path> ...]
 [-tail [-f] <file>]
 [-test -[defsz] <path>]
 [-text [-ignoreCrc] <src> ...]
 [-touchz <path> ...]
 [-truncate [-w] <length> <path> ...]
 [-usage [cmd ...]]
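
For a one-line reminder of a single command's flags, -usage is a lighter-weight alternative to -help (demonstrated in the next section); for example:

[root@bigdata111 hadoop-2.8.4]# hadoop fs -usage rm
Usage: hadoop fs [generic options] -rm [-f] [-r|-R] [-skipTrash] [-safely] <src> ...
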
  3. Hands-on command practice

(1) Start the Hadoop cluster

[root@bigdata111 hadoop-2.8.4]# sbin/start-dfs.sh
[root@bigdata111 hadoop-2.8.4]# sbin/start-yarn.sh
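
To confirm the daemons came up (assuming the JDK's jps is on the PATH), list the running Java processes; NameNode, DataNode, and SecondaryNameNode should appear on the HDFS nodes, ResourceManager and NodeManager on the YARN nodes:

[root@bigdata111 hadoop-2.8.4]# jps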

(2) -help: print detailed help for a command

[root@bigdata111 hadoop-2.8.4]# hadoop fs -help rm
-rm [-f] [-r|-R] [-skipTrash] [-safely] <src> ... :
  Delete all files that match the specified file pattern. Equivalent to the Unix
  command "rm <src>"
                                                                                 
  -f          If the file does not exist, do not display a diagnostic message or 
              modify the exit status to reflect an error.                        
  -[rR]       Recursively deletes directories.                                   
  -skipTrash  option bypasses trash, if enabled, and immediately deletes <src>.  
  -safely     option requires safety confirmation, if enabled, requires          
              confirmation before deleting large directory with more than        
              <hadoop.shell.delete.limit.num.files> files. Delay is expected when
              walking over large directory recursively to count the number of    
              files to be deleted before the confirmation.  

(3) -ls: list directory contents

[root@bigdata111 hadoop-2.8.4]# hadoop fs -ls /
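
The -R and -h flags from the usage summary above can be combined to list recursively with human-readable sizes:

[root@bigdata111 hadoop-2.8.4]# hadoop fs -ls -R -h /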

(4) -moveFromLocal: cut and paste a file from the local filesystem to HDFS

[root@bigdata111 hadoop-2.8.4]# touch test.txt
[root@bigdata111 hadoop-2.8.4]# hadoop fs -moveFromLocal ./test.txt /opt/software
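
Because -moveFromLocal cuts rather than copies, the local test.txt is gone once the upload succeeds:

[root@bigdata111 hadoop-2.8.4]# ls test.txt                    # should now fail locally
[root@bigdata111 hadoop-2.8.4]# hadoop fs -ls /opt/software    # test.txt is listed here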

(5) -appendToFile: append a file to the end of an existing file

[root@bigdata111 hadoop-2.8.4]# touch test1.txt
[root@bigdata111 hadoop-2.8.4]# hadoop fs -appendToFile test1.txt /opt/software/test.txt

(6) -cat: display file contents

[root@bigdata111 hadoop-2.8.4]# hadoop fs -cat /opt/software/test.txt

(7) -copyFromLocal (put): copy a file from the local filesystem to HDFS

[root@bigdata111 hadoop-2.8.4]# hadoop fs -copyFromLocal test.txt /
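
-put is the equivalent (and shorter) form of -copyFromLocal, so the same upload can be written as:

[root@bigdata111 hadoop-2.8.4]# hadoop fs -put test.txt /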

(8) -copyToLocal (get): copy a file from HDFS to the local filesystem

[root@bigdata111 hadoop-2.8.4]# hadoop fs -copyToLocal /opt/software/test.txt
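
Likewise -get is the shorthand for -copyToLocal; an explicit local destination (an arbitrary name here, for illustration) can be passed as the last argument:

[root@bigdata111 hadoop-2.8.4]# hadoop fs -get /opt/software/test.txt ./test.txt.bak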

(9) -setrep: set the replication factor of a file in HDFS

[root@bigdata111 hadoop-2.8.4]# hadoop fs -setrep <rep> /opt/software/test.txt

The replication factor set here is only recorded in the NameNode's metadata; whether that many physical replicas actually exist depends on the number of DataNodes. With 3 DataNodes there can be at most 3 replicas; only after the cluster grows to 10 nodes can the replica count actually reach 10.
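
For example, requesting 10 replicas and reading back what the NameNode recorded (-stat's %r format prints the replication factor; avoid -setrep's -w flag here, since it would block waiting for a replication target the DataNodes cannot satisfy):

[root@bigdata111 hadoop-2.8.4]# hadoop fs -setrep 10 /opt/software/test.txt
[root@bigdata111 hadoop-2.8.4]# hadoop fs -stat %r /opt/software/test.txt
10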

Reposted from blog.csdn.net/qq_45092505/article/details/105270038