Chapter 3 HDFS: Distributed File System

3.5 Basic HDFS Commands

Official HDFS command documentation:
http://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html

3.5.1 Usage


      
      
  [root@node1 ~]# hdfs dfs
  Usage: hadoop fs [generic options]
      [-appendToFile <localsrc> ... <dst>]
      [-cat [-ignoreCrc] <src> ...]
      [-checksum <src> ...]
      [-chgrp [-R] GROUP PATH...]
      [-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
      [-chown [-R] [OWNER][:[GROUP]] PATH...]
      [-copyFromLocal [-f] [-p] [-l] <localsrc> ... <dst>]
      [-copyToLocal [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
      [-count [-q] [-h] [-v] [-x] <path> ...]
      [-cp [-f] [-p | -p[topax]] <src> ... <dst>]
      [-createSnapshot <snapshotDir> [<snapshotName>]]
      [-deleteSnapshot <snapshotDir> <snapshotName>]
      [-df [-h] [<path> ...]]
      [-du [-s] [-h] [-x] <path> ...]
      [-expunge]
      [-find <path> ... <expression> ...]
      [-get [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
      [-getfacl [-R] <path>]
      [-getfattr [-R] {-n name | -d} [-e en] <path>]
      [-getmerge [-nl] <src> <localdst>]
      [-help [cmd ...]]
      [-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [<path> ...]]
      [-mkdir [-p] <path> ...]
      [-moveFromLocal <localsrc> ... <dst>]
      [-moveToLocal <src> <localdst>]
      [-mv <src> ... <dst>]
      [-put [-f] [-p] [-l] <localsrc> ... <dst>]
      [-renameSnapshot <snapshotDir> <oldName> <newName>]
      [-rm [-f] [-r|-R] [-skipTrash] <src> ...]
      [-rmdir [--ignore-fail-on-non-empty] <dir> ...]
      [-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
      [-setfattr {-n name [-v value] | -x name} <path>]
      [-setrep [-R] [-w] <rep> <path> ...]
      [-stat [format] <path> ...]
      [-tail [-f] <file>]
      [-test -[defsz] <path>]
      [-text [-ignoreCrc] <src> ...]
      [-touchz <path> ...]
      [-usage [cmd ...]]

  Generic options supported are
  -conf <configuration file>                      specify an application configuration file
  -D <property=value>                             use value for given property
  -fs <local|namenode:port>                       specify a namenode
  -jt <local|resourcemanager:port>                specify a ResourceManager
  -files <comma separated list of files>          specify comma separated files to be copied to the map reduce cluster
  -libjars <comma separated list of jars>         specify comma separated jar files to include in the classpath.
  -archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines.

  The general command line syntax is
  bin/hadoop command [genericOptions] [commandOptions]
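To connect the listing above with the general syntax bin/hadoop command [genericOptions] [commandOptions], here is a minimal sketch of a few typical invocations. The file name, the property value, and the NameNode address node1:8020 are placeholder assumptions for illustration, not taken from the original example; adjust them to your cluster.

  # Print the short usage or the detailed help for a single subcommand
  hdfs dfs -usage ls
  hdfs dfs -help ls

  # Generic options come before the subcommand's own options, e.g. override
  # a configuration property for a single upload (localfile.txt is a placeholder)
  hdfs dfs -D dfs.replication=2 -put localfile.txt /abc

  # Point one command at an explicit NameNode (host/port are assumptions)
  hdfs dfs -fs hdfs://node1:8020 -ls /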

3.5.2 hdfs dfs -mkdir

The -p option behavior is much like Unix mkdir -p, creating parent directories along the path.


      
      
  [root@node1 ~]# hdfs dfs -mkdir -p input
  [root@node1 ~]# hdfs dfs -mkdir -p /abc

Directories created with a relative path in HDFS are placed under /user/{username}/ by default, where {username} is the current user name, so the input directory ends up under /user/root/.

The second command creates the abc directory directly under the HDFS root directory.
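As a small sketch of the two points above (the directory names below are made up for illustration): -p creates any missing parent directories in one call, and a relative path is resolved under the current user's home directory in HDFS.

  # Nested absolute path: all missing parents are created at once
  hdfs dfs -mkdir -p /abc/sub1/sub2

  # Relative path: resolved under /user/<current user>, e.g. /user/root/data/raw
  hdfs dfs -mkdir -p data/raw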

3.5.3 hdfs dfs -ls


      
      
  [root@node1 ~]# hdfs dfs -ls /
  Found 2 items
  drwxr-xr-x   - root supergroup          0 2017-05-14 09:40 /abc
  drwxr-xr-x   - root supergroup          0 2017-05-14 09:37 /user
  [root@node1 ~]# hdfs dfs -ls /user
  Found 1 items
  drwxr-xr-x   - root supergroup          0 2017-05-14 09:37 /user/root
  [root@node1 ~]# hdfs dfs -ls /user/root
  Found 1 items
  drwxr-xr-x   - root supergroup          0 2017-05-14 09:37 /user/root/input
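For reference, two commonly used -ls switches from the usage listing in 3.5.1; the paths below are only examples.

  # Recursive listing of everything under /user
  hdfs dfs -ls -R /user

  # Human-readable file sizes instead of raw byte counts
  hdfs dfs -ls -h /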

3.5.4 hdfs dfs -put

Usage: hdfs dfs -put <localsrc> ... <dst>

Copy single src, or multiple srcs, from the local file system to the destination file system. Also reads input from stdin and writes to the destination file system.

  hdfs dfs -put localfile /user/hadoop/hadoopfile
  hdfs dfs -put localfile1 localfile2 /user/hadoop/hadoopdir
  hdfs dfs -put localfile hdfs://nn.example.com/hadoop/hadoopfile
  hdfs dfs -put - hdfs://nn.example.com/hadoop/hadoopfile    (reads the input from stdin)

Exit Code:
Returns 0 on success and -1 on error.
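A short end-to-end sketch based on the usage above. The local file /tmp/words.txt and the destination names are hypothetical; they reuse the input directory created in 3.5.2.

  # Create a small local file and upload it into the input directory
  echo "hello hdfs" > /tmp/words.txt
  hdfs dfs -put /tmp/words.txt input/
  hdfs dfs -cat input/words.txt

  # -f overwrites an existing destination file
  hdfs dfs -put -f /tmp/words.txt input/

  # "-" as the source reads from stdin, as noted in the usage text
  echo "streamed line" | hdfs dfs -put - input/stdin.txt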

Many thanks to the original blogger for the hard work. This post is reproduced here in case the original blog is deleted; it is for learning purposes only and not for any commercial use. Original: https://blog.csdn.net/chengyuqiang/article/details/72082070
