Hadoop script commands: hadoop vs. hadoop dfs vs. hdfs dfs

1. The hadoop command:

[root@chinadaas01 ~]# hadoop
Usage: hadoop [--config confdir] COMMAND
       where COMMAND is one of:
  fs                   run a generic filesystem user client
  version              print the version
  jar <jar>            run a jar file
  checknative [-a|-h]  check native hadoop and compression libraries availability
  distcp <srcurl> <desturl> copy file or directories recursively
  archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
  classpath            prints the class path needed to get the
                       Hadoop jar and the required libraries
  daemonlog            get/set the log level for each daemon
 or
  CLASSNAME            run the class named CLASSNAME
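
From the listing above, distcp and archive are invoked roughly as follows (the cluster addresses, archive name and paths are placeholders; adjust them to your environment):

[root@chinadaas01 ~]# hadoop distcp hdfs://nn1:8020/user/src hdfs://nn2:8020/user/dest
[root@chinadaas01 ~]# hadoop archive -archiveName data.har -p /user/root dir1 dir2 /user/har-out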

Common subcommands include:

hadoop job: view Hadoop jobs, e.g. hadoop job -list all (a short usage sketch follows the listing below)

Usage: JobClient <command> <args>
        [-submit <job-file>]
        [-status <job-id>]
        [-counter <job-id> <group-name> <counter-name>]
        [-kill <job-id>]
        [-set-priority <job-id> <priority>]. Valid values for priorities are: VERY_HIGH HIGH NORMAL LOW VERY_LOW
        [-events <job-id> <from-event-#> <#-of-events>]
        [-history <jobOutputDir>]
        [-list [all]]
        [-list-active-trackers]
        [-list-blacklisted-trackers]
        [-list-attempt-ids <job-id> <task-type> <task-state>]

        [-kill-task <task-id>]
        [-fail-task <task-id>]
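
A minimal usage sketch for hadoop job (the job ID below is a made-up placeholder):

[root@chinadaas01 ~]# hadoop job -list all                        # list every job, including finished ones
[root@chinadaas01 ~]# hadoop job -status job_201310251438_0001    # print the status and counters of one job
[root@chinadaas01 ~]# hadoop job -kill job_201310251438_0001      # kill a running job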

hadoop version: show the installed version

[root@chinadaas01 ~]# hadoop version
Hadoop 2.0.0-transwarp
Subversion file:///root/wangb/hadoop/build/hadoop/rpm/BUILD/hadoop-2.0.0-transwarp/src/hadoop-common-project/hadoop-common -r Unknown
Compiled by root on Fri Oct 25 14:38:23 CST 2013
From source with checksum ec693572f265ae4d8b8c1f52a22e37f5
This command was run using /usr/lib/hadoop/hadoop-common-2.0.0-transwarp.jar

hadoop jar: run a jar program on Hadoop:

[root@chinadaas01 ~]# hadoop jar
RunJar jarFile [mainClass] args...
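
For example, running the WordCount example bundled with the distribution (the jar path and the input/output directories are assumptions; adjust them to your installation):

[root@chinadaas01 ~]# hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar wordcount /user/root/input /user/root/output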

hadoop dfsadmin -report / -safemode (while the NameNode restarts and waits for the DataNodes to report their block information, it does not accept client requests; it only starts serving once everything is ready. This period of refusing clients is called safe mode.)
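
A sketch of these two dfsadmin subcommands (they must be run as an HDFS superuser):

[root@chinadaas01 ~]# hadoop dfsadmin -report           # capacity and status of every DataNode
[root@chinadaas01 ~]# hadoop dfsadmin -safemode get     # ask whether the NameNode is in safe mode
[root@chinadaas01 ~]# hadoop dfsadmin -safemode leave   # force the NameNode out of safe mode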

hadoop fsck / -openforwrite -files (note: the larger the block size, the fewer blocks there are, so the block map held in NameNode memory is smaller and lookups are faster)
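
For example (the path / is just an illustration):

[root@chinadaas01 ~]# hadoop fsck / -files -blocks -locations   # list files, their blocks and the DataNodes holding them
[root@chinadaas01 ~]# hadoop fsck / -openforwrite               # list files currently opened for write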

hadoop fs: the command for operating on the file system from Hadoop; equivalent to the hdfs dfs command
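
The pairs below do exactly the same thing (paths and file names are placeholders):

[root@chinadaas01 ~]# hadoop fs -ls /user/root
[root@chinadaas01 ~]# hdfs dfs -ls /user/root
[root@chinadaas01 ~]# hadoop fs -put local.txt /user/root/
[root@chinadaas01 ~]# hdfs dfs -cat /user/root/local.txt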

2. The hdfs command:

[root@chinadaas01 ~]# hdfs
Usage: hdfs [--config confdir] COMMAND
       where COMMAND is one of:
  dfs                  run a filesystem command on the file systems supported in Hadoop.
  namenode -format     format the DFS filesystem
  secondarynamenode    run the DFS secondary namenode
  namenode             run the DFS namenode
  journalnode          run the DFS journalnode
  zkfc                 run the ZK Failover Controller daemon
  datanode             run a DFS datanode
  dfsadmin             run a DFS admin client
  haadmin              run a DFS HA admin client
  fsck                 run a DFS filesystem checking utility
  balancer             run a cluster balancing utility
  jmxget               get JMX exported values from NameNode or DataNode.
  oiv                  apply the offline fsimage viewer to an fsimage
  oev                  apply the offline edits viewer to an edits file
  fetchdt              fetch a delegation token from the NameNode
  getconf              get config values from configuration
  groups               get the groups which users belong to
                                                Use -help to see options
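
A few typical invocations from the list above (namenode -format wipes the filesystem metadata and should only ever be run on a brand-new cluster; the other three commands are read-only):

[root@chinadaas01 ~]# hdfs namenode -format        # format a new DFS filesystem (destructive)
[root@chinadaas01 ~]# hdfs getconf -namenodes      # print the NameNode hosts from the configuration
[root@chinadaas01 ~]# hdfs dfsadmin -report        # same report as hadoop dfsadmin -report
[root@chinadaas01 ~]# hdfs fsck /                  # check the health of the whole filesystem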

You can see that the hadoop command and the hdfs command are two independent command sets:

one is at the overall Hadoop level, e.g. printing the version, listing MapReduce jobs, and running file commands against the HDFS cluster;

the other is at the HDFS cluster level, e.g. formatting the NameNode, running the DataNode daemon, and so on.

The point where the two overlap is that hadoop dfs = hdfs dfs.
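
You can see this in practice: in Hadoop 2.x, hadoop dfs still works, but it prints a deprecation notice (roughly as below) and then runs the same filesystem client as hdfs dfs:

[root@chinadaas01 ~]# hadoop dfs -ls /
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
...
[root@chinadaas01 ~]# hdfs dfs -ls /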


Reposted from chengjianxiaoxue.iteye.com/blog/2257547