HDFS upload command:
bin/hadoop fs -put /usr/local/product/hbase-0.94.6/hbase-0.94.6.jar libs/hbase/hbase-0.94.6.jar
HDFS delete command:
hadoop fs -rmr libs/hbase
(-rm only removes files; since libs/hbase is a directory, it has to be removed recursively with -rmr.)
User logs live under logs/userlogs/job_<name>; each map task gets its own syslog file, and the user's log4j output is written to that syslog.
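For reference, a minimal sketch of a mapper whose log4j output ends up in that syslog file (the class name and message are illustrative; assumes the Hadoop 1.x mapreduce API):

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.log4j.Logger;

public class LoggingMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
    // log4j output from a task is routed to the per-attempt syslog file,
    // not to the stdout/stderr files next to it
    private static final Logger LOG = Logger.getLogger(LoggingMapper.class);

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        LOG.info("processing record at offset " + key.get());
        context.write(value, new LongWritable(1));
    }
}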
On the classpath in Hadoop:
Edit the export HADOOP_CLASSPATH line in hadoop-env.sh; the change takes effect for newly launched commands without restarting Hadoop.
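One way to check that the new entry is actually picked up (a sketch; PrintClasspath is a made-up helper class) is to run a class through the hadoop script, which launches it with Hadoop's full classpath:

public class PrintClasspath {
    public static void main(String[] args) {
        // running this via `bin/hadoop PrintClasspath` prints the effective
        // classpath, including whatever HADOOP_CLASSPATH added
        System.out.println(System.getProperty("java.class.path"));
    }
}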
Map task timeout:
mapred.task.timeout (in milliseconds; the default is 600000, i.e. 10 minutes, and 0 disables the timeout)
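Besides setting it cluster-wide in mapred-site.xml, the timeout can be overridden per job in the driver; a sketch assuming the Hadoop 1.x API (the job name is illustrative):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class TimeoutExample {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        // milliseconds; 30 minutes instead of the 10-minute default
        conf.setLong("mapred.task.timeout", 30 * 60 * 1000L);
        Job job = new Job(conf, "long-running-job");
        // ... set mapper/reducer and input/output paths, then job.waitForCompletion(true)
    }
}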
Setting the per-node map and reduce slot counts in Hadoop; these values cap the number of concurrent map and reduce tasks, and here they are set to match the node's total number of CPU cores:
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>4</value>
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>4</value>
</property>
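Note that these are TaskTracker-side settings read once at daemon startup, so unlike the HADOOP_CLASSPATH change above they only take effect after the TaskTracker is restarted.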
Hadoop error:
WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Cannot roll edit log, edits.new files already exists
Fix:
Modify the following in the namenode's hdfs-site.xml:
<property>
  <name>dfs.secondary.http.address</name>
  <value>0.0.0.0:50090</value>
  <description>
    The secondary namenode http server address and port.
    If the port is 0 then the server will start on a free port.
  </description>
</property>
Change the 0.0.0.0 above to the hostname of the machine where the secondarynamenode is deployed and the problem goes away.
Fix for Hadoop "Java heap space" errors: edit mapred-site.xml and add:
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1024m</value>
</property>
Then start a job and, on a task node, run ps -ef | grep jobCache to check whether the child JVM's -Xmx argument has changed.
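Besides ps, the effective limit can also be checked from inside a task; a sketch (HeapCheckMapper is a made-up class) that logs the JVM's max heap to the task's syslog:

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.log4j.Logger;

public class HeapCheckMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
    private static final Logger LOG = Logger.getLogger(HeapCheckMapper.class);

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        // with -Xmx1024m this reports a value close to (usually a bit below) 1024 MB
        LOG.info("max heap = " + Runtime.getRuntime().maxMemory() / (1024 * 1024) + " MB");
    }
}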
mapred-site.xml contents:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>v005:9001</value>
  </property>
  <property>
    <name>mapred.task.timeout</name>
    <value>8640000000</value>
  </property>
</configuration>
WARNING : There are about 1 missing blocks. Please check the log or run fsck.
Solution:
$ bin/hadoop fsck /
/home/zhaozheng/hdfs/README.txt: CORRUPT block blk_4085337189286784361
/home/zhaozheng/hdfs/README.txt: MISSING 1 blocks of total size 1366 B.
Status: CORRUPT
Total size: 1366 B
Total dirs: 0
Total files: 1
Total blocks (validated): 1 (avg. block size 1366 B)
********************************
CORRUPT FILES: 1
MISSING BLOCKS: 1
MISSING SIZE: 1366 B
CORRUPT BLOCKS: 1
********************************
Minimally replicated blocks: 0 (0.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 0 (0.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 2
Average block replication: 0.0
Corrupt blocks: 1
Missing replicas: 0
Number of data-nodes: 2
Number of racks: 1
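Since the only replica of the block is gone, the file cannot be recovered and has to be deleted. (Instead of the manual rm below, bin/hadoop fsck / -delete removes all corrupted files in one step; fsck's -move option moves them to /lost+found instead.)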
$ bin/hadoop dfs -rm /home/zhaozheng/hdfs/README.txt
$ bin/hadoop fsck /
Status: HEALTHY
Total size: 4 B
Total dirs: 12
Total files: 1
Total blocks (validated): 1 (avg. block size 4 B)
Minimally replicated blocks: 1 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 0 (0.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 2
Average block replication: 2.0
Corrupt blocks: 0
Missing replicas: 0 (0.0 %)
Number of data-nodes: 2
Number of racks: 1