Some notes on using Hadoop

List the IDs of currently running jobs from the command line (the awk filter keeps jobs whose state column is 1, i.e. RUNNING):
[hadoop@compute-63-9 ~]$ /hadoop/hadoop_home/bin/hadoop job -jt compute-63-0:9001 -list all |awk '{ if($2==1) print $1 }'
job_201203311041_0041
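The same filter can be written against the old mapred JobClient API. A minimal sketch for the Hadoop 1.x era: the JobTracker address is taken from the -jt option above, and the exact getter names may vary a little between Hadoop versions.

import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobStatus;

public class ListRunningJobs {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf();
    // Same effect as the -jt option above: point at the JobTracker explicitly.
    conf.set("mapred.job.tracker", "compute-63-0:9001");
    JobClient client = new JobClient(conf);
    for (JobStatus status : client.getAllJobs()) {
      // State 1 (RUNNING) is what the awk filter above selects.
      if (status.getRunState() == JobStatus.RUNNING) {
        System.out.println(status.getJobID());
      }
    }
    client.close();
  }
}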



Set the replication factor (-R recurses into directories, -w waits until replication has completed):
hadoop fs -setrep [-R] [-w] <replication factor> <HDFS path>
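The same change can be made from Java through the FileSystem API. A minimal sketch, assuming the default configuration is on the classpath; the path and replication factor are made up:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SetReplication {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // setReplication() changes the target replication of a single file and
    // returns true on success; directories have to be walked manually for -R.
    boolean ok = fs.setReplication(new Path("/user/hadoop/somefile"), (short) 2);
    System.out.println("replication changed: " + ok);
    fs.close();
  }
}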





Enable compression of map output:

  conf.set("mapred.compress.map.output", "true");
  conf.set("mapred.output.compression.type", "BLOCK");
  conf.set("mapred.map.output.compression.codec", "org.apache.hadoop.io.compress.GzipCodec");


Local storage is full. Errors like the following mean the TaskTracker's local disks (mapred.local.dir) have run out of space:
org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for taskTracker/jobcache/job_201108311619_0703/attempt_201108311619_0703_m_000076_0/output/spill0.out

Error: java.io.IOException: No space left on device

java.io.IOException: Task: attempt_201108311619_0703_r_000002_0 - The reduce copier failed


Run distcp on the destination cluster to start the data transfer:
hadoop distcp hdfs://172.30.4.50:9000/user/hadoop/lisk/mouse/str/ hdfs://hs14:9000/user/hadoop/gusc/new_contig
or
hadoop distcp hdfs://172.30.4.50:9000/user/hadoop/lisk/mouse/id /user/hadoop/gusc


Pay attention to the data formats between Map and Reduce: if the map output formats are not set explicitly, they default to the ones set for the job (reduce) output. If the map output and reduce output formats differ, declare them explicitly with these setters (see the driver sketch after them):
job.setMapOutputKeyClass(Class<?> theClass)
job.setMapOutputValueClass(Class<?> theClass)
job.setOutputKeyClass(Class<?> theClass)
job.setOutputValueClass(Class<?> theClass)
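For example, a self-contained driver sketch (class names and path handling are made up for illustration) where the map output types <Text, IntWritable> differ from the final output types <Text, Text>, so both pairs of setters are needed:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class FormatExample {
  public static class MyMap extends Mapper<LongWritable, Text, Text, IntWritable> {
    @Override
    public void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      context.write(value, new IntWritable(1));  // emit <Text, IntWritable>
    }
  }

  public static class MyReduce extends Reducer<Text, IntWritable, Text, Text> {
    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) {
        sum += v.get();
      }
      context.write(key, new Text(Integer.toString(sum)));  // emit <Text, Text>
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = new Job(new Configuration(), "format-example");
    job.setJarByClass(FormatExample.class);
    job.setMapperClass(MyMap.class);
    job.setReducerClass(MyReduce.class);
    // Map output types differ from the job output types, so set both pairs.
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(IntWritable.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Text.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}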


A mismatch between the Reducer's type parameters and its reduce() signature sometimes produces no error at all, so check carefully. In the example below the class declares its input key as LongWritable while reduce() declares Text, so reduce() never overrides Reducer.reduce() and the framework silently falls back to the default (pass-through) reducer. Annotating reduce() with @Override would turn this into a compile-time error.
  public static class Reduce extends Reducer<LongWritable, Text, Text, Text> {
    // Broken: the key type here (Text) does not match the class declaration
    // (LongWritable), so this is an overload, not an override.
    public void reduce(Text key, Iterable<Text> values, Context context)
        throws IOException, InterruptedException {
        }
  }
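For contrast, a corrected sketch of the same class (the pass-through body is only a placeholder):
  public static class Reduce extends Reducer<Text, Text, Text, Text> {
    @Override
    public void reduce(Text key, Iterable<Text> values, Context context)
        throws IOException, InterruptedException {
      for (Text value : values) {
        context.write(key, value);   // placeholder: emit each value unchanged
      }
    }
  }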


Reposted from gushengchang.iteye.com/blog/1172744