Passing Parameters in Hadoop

When writing a MapReduce program, you often need to pass parameters into the Map. For example, filtering data in the Map usually requires a filter list, so the parameter to pass is the collection of values to filter on.
    Hadoop offers a fairly simple way to pass parameters: the set() and get() methods of Configuration:
void Configuration.set(String key, String value)
String Configuration.get(String key)
    The drawback of this simple approach is that the value being passed must be a String, which limits what it can carry.
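    As a rough illustration, a minimal sketch of this approach is shown below; the key name "filter.keywords" and the comma-separated encoding are assumptions made for the example, not part of the original code.
// Driver side: store a comma-separated list under a (hypothetical) key name
JobConf conf = new JobConf(Hadoop.class);
conf.set("filter.keywords", "apple,bag,cat");

// Mapper side, typically inside configure(JobConf conf): read the value back and split it
String[] keywords = conf.get("filter.keywords").split(",");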
    There is also a way to pass an object as a parameter. The object must implement the Writable interface so that it can be serialized; the static methods store(conf, obj, keyname) and load(conf, keyname, itemclass) of org.apache.hadoop.io.DefaultStringifier are then used to set and retrieve it. The main idea is to serialize the object into a byte array, Base64-encode it into a string, and pass that string through conf; reading it back works the same way in reverse. The Writable interface declares two methods, write and readFields, used for serialization and deserialization respectively. This approach is relatively complex to implement, and for objects with complicated structures (for example, fields that contain Set collections) it is hard to write correct write and readFields methods.
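    As a rough sketch of this approach, consider a hypothetical SimpleAd class with a single long field; the class name and the key "simple.ad" are assumptions made purely for illustration.
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.DefaultStringifier;
import org.apache.hadoop.io.Writable;

// Hypothetical Writable class, used only to illustrate DefaultStringifier
public class SimpleAd implements Writable {
    private long adId;

    public SimpleAd() {}                        // Writable requires a public no-arg constructor

    public void write(DataOutput out) throws IOException {
        out.writeLong(adId);                    // serialize the fields
    }

    public void readFields(DataInput in) throws IOException {
        adId = in.readLong();                   // read them back in the same order
    }
}

Storing into and loading from the Configuration then looks like this (both calls throw IOException):
Configuration conf = new Configuration();
DefaultStringifier.store(conf, new SimpleAd(), "simple.ad");    // Base64-encodes the object into the conf
SimpleAd restored = DefaultStringifier.load(conf, "simple.ad", SimpleAd.class);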
    For this reason, the author passes parameters as follows: first serialize the object (using Java's native serialization), then convert the serialized bytes into a String and set it on the Configuration. To recover it, simply deserialize the String again (also with Java's native deserialization). An example follows:
The Ad class:
public class Ad implements Serializable {
         private long adId;
         private Set<Long> productKeywordsSet;
         private Set<Long> otherKeywordsSet;
         …
}
The AdList class:
public class AdList implements Serializable {
         private List<Ad> adList;
         …
}
To pass an AdList object, configure the JobConf as follows (ISO-8859-1 maps every byte to exactly one character, so the serialized bytes survive the round trip through a String):
JobConf conf = new JobConf(Hadoop.class);
AdList adList = new AdList();
// Serialize the object with Java's native serialization
ByteArrayOutputStream bout = new ByteArrayOutputStream();
ObjectOutputStream out = new ObjectOutputStream(bout);
out.writeObject(adList);
out.flush();
out.close();
// Convert the raw bytes to a String and URL-encode it before storing it in the conf
String s = bout.toString("ISO-8859-1");
s = URLEncoder.encode(s, "UTF-8");
conf.set("conf", s);
The AdList object is recovered in the Mapper as follows:
public static class HadoopMap extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, LongWritable>, JobConfigurable {

    private static AdList adList = null;

    public void configure(JobConf conf) {
        String s = conf.get("conf");
        try {
            // Reverse the encoding: URL-decode, recover the bytes, then deserialize
            s = URLDecoder.decode(s, "UTF-8");
            ObjectInputStream in =
                    new ObjectInputStream(new ByteArrayInputStream(s.getBytes("ISO-8859-1")));
            adList = (AdList) in.readObject();
            in.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public void map(LongWritable key, Text value, OutputCollector<Text, LongWritable> output, Reporter reporter) throws IOException {
        …
    }
}



    The values passed through Configuration's get and set travel with the job configuration, so their size is limited; if the value is too large, the job will fail. To pass a very large piece of data (such as a dictionary file), use the DistributedCache instead. First upload the file to HDFS (for example with hadoop fs -put), then read it in configure(), as in the following example:
    The dictionary file dictory.list has the following format:
apple
bag
cat

    Configure the JobConf as follows:
JobConf conf = new JobConf(Hadoop.class);
DistributedCache.addCacheFile(new Path("dictory.list").toUri(), conf);
    The file is read in the Mapper as follows:
public static class HadoopMap extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, LongWritable>, JobConfigurable {

    private Set<String> dictSet = new HashSet<String>();

    public void configure(JobConf conf) {
        Path[] pathwaysFiles;
        try {
            // Local paths of the files added to the DistributedCache
            pathwaysFiles = DistributedCache.getLocalCacheFiles(conf);
            for (Path path : pathwaysFiles) {
                BufferedReader fis = new BufferedReader(new FileReader(path.toString()));
                String line = "";
                while ((line = fis.readLine()) != null) {
                    dictSet.add(line);
                }
                fis.close();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public void map(LongWritable key, Text value, OutputCollector<Text, LongWritable> output, Reporter reporter) throws IOException {
        …
    }
}

Reposted from eastancient.iteye.com/blog/1961125