[Big Data] Common commands for Hadoop file operations

Hadoop provides a set of frequently used shell commands that make it easy to work with files stored on HDFS. The most common ones are listed below.

0. View help
hadoop fs -help
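For example, to show the usage of a single file system command (the command name here is just an illustration):
hadoop fs -help ls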
1. View the contents of the specified folder
hadoop fs -ls <HDFS directory>
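For example, listing a hypothetical HDFS directory:
hadoop fs -ls /user/hadoop/input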
2. View the contents of an existing file
hadoop fs -cat <HDFS file path> [ | more]
# [] marks an optional part of the command
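For example, paging through a hypothetical file:
hadoop fs -cat /user/hadoop/input/data.txt | more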
3. Upload a local file to Hadoop
hadoop fs -put <local file path> <HDFS directory>
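For example, uploading a hypothetical local file data.txt into an HDFS directory:
hadoop fs -put ./data.txt /user/hadoop/input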
4. Download a file from Hadoop to a local folder
hadoop fs -get <HDFS path> <local directory>
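For example, downloading a hypothetical HDFS directory into the current local directory:
hadoop fs -get /user/hadoop/output ./output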
5. Delete the specified file on Hadoop
hadoop fs -rm <HDFS file path>
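For example, deleting a hypothetical file:
hadoop fs -rm /user/hadoop/input/data.txt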
6. Delete the specified folder on Hadoop
hadoop fs -rm -r <HDFS directory>
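For example, deleting a hypothetical directory together with everything inside it:
hadoop fs -rm -r /user/hadoop/old_data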
7. Create a new empty folder under the specified Hadoop folder
hadoop fs -mkdir <HDFS directory>
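For example, creating a hypothetical directory; adding -p also creates any missing parent directories:
hadoop fs -mkdir -p /user/hadoop/new_dir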
8. Create a new empty file in the specified Hadoop folder
hadoop fs -touchz <HDFS file path>
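For example, creating a hypothetical zero-length file:
hadoop fs -touchz /user/hadoop/input/empty.txt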
9. Rename a file on Hadoop
hadoop fs -mv <HDFS source path> <HDFS destination path>
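For example, renaming a hypothetical file in place (the same command also moves files between directories):
hadoop fs -mv /user/hadoop/input/data.txt /user/hadoop/input/data_old.txt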
10. Kill a running Hadoop job
hadoop job -kill <job-id>
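For example, with a hypothetical job ID in the usual job_<timestamp>_<sequence> form:
hadoop job -kill job_1650000000000_0001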
