Hadoop Shell command dictionary

Reprinted from: https://www.aboutyun.com//forum.php/?mod=viewthread&tid=6983&extra=page%3D1&page=1&

Read with the following questions in mind:

1. What is the difference between chmod and chown?
2. Where does cat send the contents of the specified files?
3. Can cp copy between different file systems?
4. How do you view the size of a file in HDFS?
5. How do you merge files in HDFS?
6. How do you list all the folders and files under the current directory?
7. Why might rm fail to delete a file?
8. How do you view a file's creation time?
9. Which commands display the contents of a file? Can you name three?
10. How do you determine whether a file exists?
11. How do you create a 0-byte file?

Commands like these are read once and soon forgotten; when you need them later, you can come back and look them up here.


The File System (FS) shell is invoked as bin/hadoop fs <args>. All FS shell commands take URI paths as arguments. The URI format is scheme://authority/path. For HDFS the scheme is hdfs, and for the local file system the scheme is file. The scheme and authority are optional; if not specified, the default scheme from the configuration is used. An HDFS file or directory such as /parent/child can be written as hdfs://namenode:namenodeport/parent/child, or more simply as /parent/child (assuming your configuration's default is namenode:namenodeport). Most FS shell commands behave like the corresponding Unix shell commands; any differences are noted in the per-command details below. Error information is written to stderr, and other output is written to stdout.
(Here stderr and stdout can be thought of as files.)
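
For instance, assuming the configuration's default file system is namenode:namenodeport, the following two commands list the same directory (the host, port, and path are illustrative):
  • hadoop fs -ls hdfs://namenode:namenodeport/parent/child
  • hadoop fs -ls /parent/child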


cat

Usage: hadoop fs -cat URI [URI ...]
Outputs the contents of the specified files to stdout.
Example:
  • hadoop fs -cat hdfs://host1:port1/file1 hdfs://host2:port2/file2
  • hadoop fs -cat file:///file3 /user/hadoop/file4
Return value:
Returns 0 on success and -1 on failure.


chgrp

Usage: hadoop fs -chgrp [-R] GROUP URI [URI ...]
Changes the group association of files. With -R, the change is made recursively through the directory structure. The user must be the owner of the files or a superuser. For more information, see the HDFS Permissions User Guide.
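For example, to recursively assign the group hadoop to a directory tree (the group name and path are illustrative):
  • hadoop fs -chgrp -R hadoop /user/hadoop/dir1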


chmod

Usage: hadoop fs -chmod [-R] <MODE[,MODE]... | OCTALMODE> URI [URI ...]
Changes the permissions of files. With -R, the change is made recursively through the directory structure. The user must be the owner of the files or a superuser. For more information, see the HDFS Permissions User Guide.
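For example, both commands below grant the owner full access and everyone else read-only access, first with an octal mode and then with a symbolic mode (the path is illustrative):
  • hadoop fs -chmod 744 /user/hadoop/file1
  • hadoop fs -chmod u=rwx,go=r /user/hadoop/file1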


chown

Usage: hadoop fs -chown [-R] [OWNER][:[GROUP]] URI [URI ...]
Changes the owner of files. With -R, the change is made recursively through the directory structure. The user must be a superuser. For more information, see the HDFS Permissions User Guide.
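For example, to recursively give a directory tree to user hadoop and group hadoop (the names and path are illustrative):
  • hadoop fs -chown -R hadoop:hadoop /user/hadoop/dir1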


copyFromLocal

Usage: hadoop fs -copyFromLocal <localsrc> URI
Similar to the put command, except that the source must be a local file path.
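For example (the local and HDFS paths are illustrative):
  • hadoop fs -copyFromLocal localfile /user/hadoop/hadoopfile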


copyToLocal

Usage: hadoop fs -copyToLocal [-ignorecrc] [-crc] URI <localdst>
Similar to the get command, except that the destination must be a local file path.
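For example (the paths are illustrative):
  • hadoop fs -copyToLocal /user/hadoop/hadoopfile localfile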


cp

Usage: hadoop fs -cp URI [URI ...] <dest>
Copies files from the source path to the destination path. Multiple source paths are allowed, in which case the destination must be a directory. Unlike mv, cp may copy between different file systems.
Example:
  • hadoop fs -cp /user/hadoop/file1 /user/hadoop/file2
  • hadoop fs -cp /user/hadoop/file1 /user/hadoop/file2 /user/hadoop/dir
Return value:
Returns 0 on success and -1 on failure.


du

Usage: hadoop fs -du URI [URI ...]
Displays the sizes of all files in a directory, or the size of a single file when only a file is specified.
Example:
  • hadoop fs -du /user/hadoop/dir1 /user/hadoop/file1 hdfs://host:port/user/hadoop/dir1
Return value:
Returns 0 on success and -1 on failure.


dus

Usage: hadoop fs -dus <args>
Displays the aggregate size of the files.
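For example, to show the total size under a directory (the path is illustrative):
  • hadoop fs -dus /user/hadoop/dir1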


expunge

Usage: hadoop fs -expunge
Empties the trash. Refer to the HDFS design document for more information on the trash feature.
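For example, when trash is enabled (fs.trash.interval set greater than 0 in the configuration), a removed file is moved to the trash rather than deleted outright, and expunge then empties old trash checkpoints (the path is illustrative):
  • hadoop fs -rm /user/hadoop/file1
  • hadoop fs -expunge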


get

Usage: hadoop fs -get [-ignorecrc] [-crc] <src> <localdst> 
Copies files to the local file system. Files that fail the CRC check may be copied with the -ignorecrc option. Use the -crc option to copy the file along with its CRC information.
Example:
  • hadoop fs -get /user/hadoop/file localfile
  • hadoop fs -get hdfs://host:port/user/hadoop/file localfile
Return value:
Returns 0 on success and -1 on failure.


getmerge

Usage: hadoop fs -getmerge <src> <localdst> [addnl]
Takes a source directory and a destination file as input, and concatenates all files in the source directory into the destination local file. addnl is optional and specifies that a newline be added at the end of each file.
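For example, to concatenate the part files of a job output directory into a single local file, adding a newline after each part (the paths are illustrative):
  • hadoop fs -getmerge /user/hadoop/output merged.txt addnl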


ls

Usage: hadoop fs -ls <args>
If the argument is a file, information about the file is returned in the following format:
filename <number of replicas> filesize modification_date modification_time permissions userid groupid
If it is a directory, a list of its direct children is returned, as in Unix. A directory is listed as:
dirname <dir> modification_date modification_time permissions userid groupid
Example:
  • hadoop fs -ls /user/hadoop/file1 /user/hadoop/file2 hdfs://host:port/user/hadoop/dir1 /nonexistentfile
Return value:
Returns 0 on success and -1 on failure.


lsr

Usage: hadoop fs -lsr <args> 
Recursive version of the ls command. Similar to Unix ls -R.
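For example, to list every file and subdirectory under a directory (the path is illustrative):
  • hadoop fs -lsr /user/hadoop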


mkdir

Usage: hadoop fs -mkdir <paths> 
Takes path URIs as arguments and creates the directories. The behavior is similar to Unix mkdir -p: parent directories along the path are created as needed.
Example:
  • hadoop fs -mkdir /user/hadoop/dir1 /user/hadoop/dir2
  • hadoop fs -mkdir hdfs://host1:port1/user/hadoop/dir hdfs://host2:port2/user/hadoop/dir
return value:
Returns 0 on success, failure to return -1.


moveFromLocal

Usage: dfs -moveFromLocal <src> <dst>
Outputs a "not implemented" message.


mv

Usage: hadoop fs -mv URI [URI ...] <dest>
Moves files from the source path to the destination path. Multiple source paths are allowed, in which case the destination must be a directory. Moving files across file systems is not permitted.
Example:
  • hadoop fs -mv /user/hadoop/file1 /user/hadoop/file2
  • hadoop fs -mv hdfs://host:port/file1 hdfs://host:port/file2 hdfs://host:port/file3 hdfs://host:port/dir1
return value:
Returns 0 on success, failure to return -1.


put

Usage: hadoop fs -put <localsrc> ... <dst>
Copies one or more source paths from the local file system to the destination file system. Also reads input from stdin and writes it to the destination file system.
  • hadoop fs -put localfile /user/hadoop/hadoopfile
  • hadoop fs -put localfile1 localfile2 /user/hadoop/hadoopdir
  • hadoop fs -put localfile hdfs://host:port/hadoop/hadoopfile
  • hadoop fs -put - hdfs://host:port/hadoop/hadoopfile
    Reads input from stdin.
Return value:
Returns 0 on success and -1 on failure.


rm

Usage: hadoop fs -rm URI [URI ...]
Deletes the specified files. Only files are deleted, not directories (a common reason rm fails); refer to rmr for recursive deletes.
Example:
  • hadoop fs -rm hdfs://host:port/file /user/hadoop/emptydir
Return value:
Returns 0 on success and -1 on failure.


rmr

Usage: hadoop fs -rmr URI [URI ...]
Recursive version of delete: removes a directory and all of its contents.
Example:
  • hadoop fs -rmr /user/hadoop/dir
  • hadoop fs -rmr hdfs://host:port/user/hadoop/dir
Return value:
Returns 0 on success and -1 on failure.


setrep

Usage: hadoop fs -setrep [-R] [-w] <rep> <path>
Changes the replication factor of a file. With -R, the replication factor of all files in the directory tree is changed recursively. The -w flag waits for the replication to complete.
Example:
  • hadoop fs -setrep -w 3 -R /user/hadoop/dir1
Return value:
Returns 0 on success and -1 on failure.


stat

Usage: hadoop fs -stat URI [URI ...]
Returns stat information on the specified path.
Example:
  • hadoop fs -stat path
Return value:
Returns 0 on success and -1 on failure.


tail

Usage: hadoop fs -tail [-f] URI
Outputs the last kilobyte of the file to stdout. The -f option behaves as in Unix tail.
Example:
  • hadoop fs -tail pathname
Return value:
Returns 0 on success and -1 on failure.


test

Usage: hadoop fs -test -[ezd] URI
Options:
-e checks whether the file exists; returns 0 if it does.
-z checks whether the file is zero bytes long; returns 0 if it is.
-d returns 1 if the path is a directory, otherwise returns 0.
Example:
  • hadoop fs -test -e filename
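The other flags are used the same way (the names are illustrative):
  • hadoop fs -test -z filename
  • hadoop fs -test -d dirname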


text

Usage: hadoop fs -text <src> 
Outputs a source file in text format. The allowed formats are zip and TextRecordInputStream.
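For example, to display such a file as text rather than raw bytes (the path and format are illustrative):
  • hadoop fs -text /user/hadoop/data.seq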


touchz

Usage: hadoop fs -touchz URI [URI ...] 
Creates a file of zero length.
Example:
  • hadoop fs -touchz pathname
Return value:
Returns 0 on success and -1 on failure.

 
