Hadoop ships with a command-line client interface (the client can run anywhere):
hadoop-2.7.3/bin/hadoop (or bin/hdfs)
View the HDFS file system:
hadoop fs -ls /   (equivalent to: hadoop fs -ls hdfs://<namenode hostname>:9000/)
(You can also open a browser at http://<namenode>:50070, go to Utilities -> Browse the file system, and view and download files through the web UI)
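The web port above also serves Hadoop's WebHDFS REST API, so the same browsing can be scripted over HTTP. A minimal sketch of how the request URLs are formed, assuming the default Hadoop 2.x HTTP port 50070 and a hypothetical host called "namenode":

```python
# Sketch: build WebHDFS REST URLs for the namenode web port (50070).
# The host name "namenode" is an assumption; adjust for your cluster.

def webhdfs_url(host, path, op, port=50070):
    """Return the WebHDFS URL for an HDFS path and operation."""
    return "http://%s:%d/webhdfs/v1%s?op=%s" % (host, port, path, op)

# HTTP equivalent of `hadoop fs -ls /`:
print(webhdfs_url("namenode", "/", "LISTSTATUS"))
# HTTP equivalent of downloading a file (`hadoop fs -get`):
print(webhdfs_url("namenode", "/wordcount/input/a.txt", "OPEN"))
```

Fetching these URLs of course still requires a live cluster; the sketch only shows the URL shape.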
Upload a file to the HDFS file system:
hadoop fs -put <local file name> /
Operate on files in HDFS:
hadoop fs -[command name] <file name (with directory)>
e.g. read an HDFS file: hadoop fs -cat <file name (with directory)>
download from HDFS: hadoop fs -get <file name (with directory)>
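These client invocations are easy to drive from a script. A minimal sketch (the helper names are mine, and actually running a command assumes the hadoop binary is on PATH and can reach the cluster) that builds the same command lines shown above:

```python
# Sketch: build `hadoop fs` command lines programmatically.
# Running them requires the hadoop client on PATH and a reachable cluster.
import subprocess

def hdfs_cmd(action, *paths):
    """Return the argv list for a `hadoop fs` subcommand such as put/cat/get."""
    return ["hadoop", "fs", "-" + action] + list(paths)

def run(action, *paths):
    # Only works against a live cluster; shown for the command shape.
    return subprocess.run(hdfs_cmd(action, *paths), check=True)

print(hdfs_cmd("put", "a.txt", "/"))   # upload a local file
print(hdfs_cmd("cat", "/a.txt"))       # read an HDFS file
print(hdfs_cmd("get", "/a.txt"))       # download it
```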
Run a MapReduce job:
(1) Create a directory in HDFS: hadoop fs -mkdir -p /wordcount/input
(2) Upload files into that directory: hadoop fs -put a.txt b.txt /wordcount/input
(3) Change into the directory holding the example jar: cd /home/hadoop/apps/hadoop-2.6.4/share/hadoop/mapreduce
(4) Run it: hadoop jar <jar name (.jar)> <main class to run> <input data directory> <output directory (must not already exist)>
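The example jar implements wordcount in Java; the core map/reduce logic it runs in step (4) can be sketched in plain Python (runnable locally without a cluster, and the same split-and-sum shape is what a Hadoop Streaming mapper/reducer pair would do):

```python
# Sketch of the wordcount logic: map each input line to (word, 1)
# pairs, then reduce by summing the counts per word.
from collections import Counter

def word_count(lines):
    counts = Counter()
    for line in lines:
        for word in line.split():   # map phase: emit one count per word
            counts[word] += 1       # reduce phase: sum counts per word
    return dict(counts)

print(word_count(["hello world", "hello hadoop"]))
# -> {'hello': 2, 'world': 1, 'hadoop': 1}
```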
Common HDFS client commands in detail:
-help  Function: print the manual for a command  Example: hadoop fs -help ls
-ls  Function: display directory information  Example: hadoop fs -ls hdfs://hadoop-server01:9000/  Note: in all of these commands the full hdfs:// path can be abbreviated -> hadoop fs -ls /  (equivalent to the command above)
-mkdir  Function: create a directory on HDFS  Example: hadoop fs -mkdir -p /aaa/bbb/cc/dd
-moveFromLocal  Function: cut and paste from local to HDFS  Example: hadoop fs -moveFromLocal /home/hadoop/a.txt /aaa/bbb/cc/dd
-moveToLocal  Function: cut and paste from HDFS to local  Example: hadoop fs -moveToLocal /aaa/bbb/cc/dd /home/hadoop/a.txt
-appendToFile  Function: append a local file to the end of a file that already exists on HDFS  Example: hadoop fs -appendToFile ./hello.txt hdfs://hadoop-server01:9000/hello.txt  (the target file must exist)  Can be abbreviated as: hadoop fs -appendToFile ./hello.txt /hello.txt
-cat  Function: display file contents  Example: hadoop fs -cat /hello.txt
-tail  Function: display the end of a file  Example: hadoop fs -tail /weblog/access_log.1
-text  Function: print the contents of a file as text  Example: hadoop fs -text /weblog/access_log.1
-chgrp / -chmod / -chown  Function: same usage as in a Linux file system; change a file's permissions and ownership  Example: hadoop fs -chmod 666 /hello.txt   hadoop fs -chown someuser:somegrp /hello.txt
-copyFromLocal  Function: copy a file from the local file system to an HDFS path  Example: hadoop fs -copyFromLocal ./jdk.tar.gz /aaa/
-copyToLocal  Function: copy from HDFS to local  Example: hadoop fs -copyToLocal /aaa/jdk.tar.gz
-cp  Function: copy from one HDFS path to another HDFS path  Example: hadoop fs -cp /aaa/jdk.tar.gz /bbb/jdk.tar.gz.2
-mv  Function: move a file within HDFS  Example: hadoop fs -mv /aaa/jdk.tar.gz /
-get  Function: same as copyToLocal, i.e. download a file from HDFS to local  Example: hadoop fs -get /aaa/jdk.tar.gz
-getmerge  Function: download and merge multiple files  Example: given multiple files under the HDFS directory /aaa/, such as log.1, log.2, log.3, ...   hadoop fs -getmerge /aaa/log.* ./log.sum
-put  Function: same as copyFromLocal  Example: hadoop fs -put /aaa/jdk.tar.gz /bbb/jdk.tar.gz.2
-rm  Function: delete a file or folder  Example: hadoop fs -rm -r /aaa/bbb/
-rmdir  Function: delete an empty directory  Example: hadoop fs -rmdir /aaa/bbb/ccc
-df  Function: report the file system's free-space information  Example: hadoop fs -df -h /
-du  Function: report the size of directories  Example: hadoop fs -du -s -h /aaa/*
-count  Function: count the file nodes under a given directory  Example: hadoop fs -count /aaa/
-setrep  Function: set the replication count of a file in HDFS  Example: hadoop fs -setrep 3 /aaa/jdk.tar.gz  <the replication count set here is only recorded in the namenode's metadata; whether that many replicas actually exist depends on the number of datanodes>