Basic operations of the hdfs command
In HDFS, paths must be absolute paths.
1. List the root directory
hadoop fs -ls /
2. Recursively list all files and folders; -lsr is a deprecated shorthand for -ls -R
hadoop fs -lsr /
3. Create a folder
hadoop fs -mkdir /hello
4. Create a multi-level folder (-p creates missing parent directories)
hadoop fs -mkdir -p /good/good
5. Create an empty (zero-length) file
hadoop fs -touchz /hello/test.txt
6. Move or rename a file or folder: if /hello1 does not exist, /hello is renamed to /hello1; if /hello1 already exists, /hello is moved into it
hadoop fs -mv /hello /hello1
7. Copy the folder /hello1 to /wo
hadoop fs -cp /hello1 /wo
8. Show the size of each file and folder under /wo
hadoop fs -du /wo
9. Count the directories, files, and bytes under /wo (output columns: DIR_COUNT, FILE_COUNT, CONTENT_SIZE, PATHNAME)
hadoop fs -count /wo
10. View the contents of a file (-cat prints the raw bytes; -text also decodes compressed and SequenceFile data)
hadoop fs -cat /hello/test.txt
hadoop fs -text /hello/test.txt
11. Upload a file: copy a local file to the HDFS root directory; -copyFromLocal is equivalent to -put
hadoop fs -put hadoop-2.7.7.tar.gz /
12. Download a file from HDFS to a local directory; -copyToLocal is equivalent to -get
hadoop fs -get /hadoop-2.7.7.tar.gz /opt/test
13. Delete; -rmr is a deprecated shorthand for -rm -r
hadoop fs -rm /hadoop-2.7.7.tar.gz    # delete a file
hadoop fs -rmr /wo/hello2             # delete a file or folder
hadoop fs -rm -r -f /other            # force delete (-f suppresses the error if the path does not exist)
14. View filesystem capacity and free space
hadoop fs -df /
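The steps above can be chained into one session. A minimal sketch, assuming a hadoop client on PATH whose default filesystem points at your cluster; the hypothetical `run` helper and the /demo paths are illustrative only. With DRY_RUN=1 (the default here) it just prints each command instead of executing it, so the sequence can be reviewed without touching the cluster:

```shell
# Tiny helper: echo the command in dry-run mode, execute it otherwise.
run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "+ $*"
  else
    "$@"
  fi
}

run hadoop fs -mkdir -p /demo/in        # step 4: create nested folders
run hadoop fs -touchz /demo/in/a.txt    # step 5: create an empty file
run hadoop fs -mv /demo/in /demo/out    # step 6: rename (/demo/out does not exist yet)
run hadoop fs -du /demo/out             # step 8: per-file sizes
run hadoop fs -count /demo              # step 9: dir/file/byte counts
run hadoop fs -rm -r -f /demo           # step 13: clean up
```

Set DRY_RUN=0 to actually execute the commands against the cluster.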