Hadoop Basics Tutorial - Chapter 3 HDFS: Distributed File System (3.5 Basic HDFS Commands) (Draft)

Chapter 3 HDFS: Distributed File System

3.5 Basic HDFS Commands

Official HDFS commands documentation:
http://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html

3.5.1 Usage

[root@node1 ~]# hdfs dfs
Usage: hadoop fs [generic options]
    [-appendToFile <localsrc> ... <dst>]
    [-cat [-ignoreCrc] <src> ...]
    [-checksum <src> ...]
    [-chgrp [-R] GROUP PATH...]
    [-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
    [-chown [-R] [OWNER][:[GROUP]] PATH...]
    [-copyFromLocal [-f] [-p] [-l] <localsrc> ... <dst>]
    [-copyToLocal [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
    [-count [-q] [-h] [-v] [-x] <path> ...]
    [-cp [-f] [-p | -p[topax]] <src> ... <dst>]
    [-createSnapshot <snapshotDir> [<snapshotName>]]
    [-deleteSnapshot <snapshotDir> <snapshotName>]
    [-df [-h] [<path> ...]]
    [-du [-s] [-h] [-x] <path> ...]
    [-expunge]
    [-find <path> ... <expression> ...]
    [-get [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
    [-getfacl [-R] <path>]
    [-getfattr [-R] {-n name | -d} [-e en] <path>]
    [-getmerge [-nl] <src> <localdst>]
    [-help [cmd ...]]
    [-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [<path> ...]]
    [-mkdir [-p] <path> ...]
    [-moveFromLocal <localsrc> ... <dst>]
    [-moveToLocal <src> <localdst>]
    [-mv <src> ... <dst>]
    [-put [-f] [-p] [-l] <localsrc> ... <dst>]
    [-renameSnapshot <snapshotDir> <oldName> <newName>]
    [-rm [-f] [-r|-R] [-skipTrash] <src> ...]
    [-rmdir [--ignore-fail-on-non-empty] <dir> ...]
    [-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
    [-setfattr {-n name [-v value] | -x name} <path>]
    [-setrep [-R] [-w] <rep> <path> ...]
    [-stat [format] <path> ...]
    [-tail [-f] <file>]
    [-test -[defsz] <path>]
    [-text [-ignoreCrc] <src> ...]
    [-touchz <path> ...]
    [-usage [cmd ...]]
Generic options supported are
-conf <configuration file>     specify an application configuration file
-D <property=value>            use value for given property
-fs <local|namenode:port>      specify a namenode
-jt <local|resourcemanager:port>    specify a ResourceManager
-files <comma separated list of files>    specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>    specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines.
The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]
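
These generic options are parsed before the dfs subcommand, following the syntax on the last line above. As a minimal sketch (localfile.txt and /tmp are placeholder paths, and hdfs://node1:8020 assumes the default NameNode RPC address of this cluster):

hdfs dfs -D dfs.replication=2 -put localfile.txt /tmp    # override a configuration property for this command only
hdfs dfs -fs hdfs://node1:8020 -ls /                     # point explicitly at a NameNode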

3.5.2 hdfs dfs -mkdir

The -p option behavior is much like Unix mkdir -p, creating parent directories along the path.

[root@node1 ~]# hdfs dfs -mkdir -p input
[root@node1 ~]# hdfs dfs -mkdir -p /abc

By default, a directory created with a relative path is placed under /user/{username}/, where {username} is the current user name, so the input directory ends up under /user/root/.

The second command creates the abc directory directly under the HDFS root directory.
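
In other words, when the current user is root, the two commands below are equivalent (a minimal sketch; neither prints any output on success):

hdfs dfs -mkdir -p input                 # relative path, resolved against /user/root
hdfs dfs -mkdir -p /user/root/input      # absolute path to the same directory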

3.5.3 hdfs dfs -ls

[root@node1 ~]# hdfs dfs -ls /
Found 2 items
drwxr-xr-x   - root supergroup          0 2017-05-14 09:40 /abc
drwxr-xr-x   - root supergroup          0 2017-05-14 09:37 /user
[root@node1 ~]# hdfs dfs -ls /user
Found 1 items
drwxr-xr-x   - root supergroup          0 2017-05-14 09:37 /user/root
[root@node1 ~]# hdfs dfs -ls /user/root
Found 1 items
drwxr-xr-x   - root supergroup          0 2017-05-14 09:37 /user/root/input
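
The usage listing in 3.5.1 shows that -ls also accepts flags such as -R (recurse into subdirectories) and -h (human-readable file sizes). A minimal sketch, output omitted:

hdfs dfs -ls -R /user     # list /user and everything beneath it
hdfs dfs -ls -h /user     # print sizes as KB/MB/GB instead of raw bytes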

3.5.4 hdfs dfs -put

Usage: hdfs dfs -put [-f] [-p] [-l] <localsrc> ... <dst>

Copy single src, or multiple srcs, from the local file system to the destination file system. Also reads input from stdin and writes to the destination file system.

hdfs dfs -put localfile /user/hadoop/hadoopfile
hdfs dfs -put localfile1 localfile2 /user/hadoop/hadoopdir
hdfs dfs -put localfile hdfs://nn.example.com/hadoop/hadoopfile
hdfs dfs -put - hdfs://nn.example.com/hadoop/hadoopfile (reads the input from stdin)
Exit Code: 
Returns 0 on success and -1 on error.
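
Building on the quoted examples, the special source "-" reads from standard input, and the -f flag overwrites an existing destination file. A minimal sketch against the /abc directory created earlier (file names are placeholders):

echo "hello hdfs" | hdfs dfs -put - /abc/from_stdin.txt     # write stdin to an HDFS file
hdfs dfs -put -f localfile.txt /abc/localfile.txt           # -f replaces the file if it already exists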
