MapReduce Join Operations (2)

The previous post introduced the basic idea behind the repartition join. As always, real understanding comes from practice, and a concrete implementation has to deal with all sorts of details. In this post we deepen that understanding by walking through the source code. The analysis below is based on the datajoin contrib code shipped with Hadoop 0.20.2; other versions may differ, but we will not cover them here.

Looking at the source directory, seven classes are implemented:

  • ArrayListBackedIterator.java
  • DataJoinJob.java
  • DataJoinMapperBase.java
  • DataJoinReducerBase.java
  • JobBase.java
  • ResetableIterator.java
  • TaggedMapOutput.java

The code is small and fairly simple; below we look at a few key points. Earlier we discussed how the key of the map output is chosen, but in the implementation the value deserves attention as well: when the join is performed on a reduce node, we need to know which table each tuple taking part in the join came from. The solution is to embed a tag in the map output value; in the example from the previous post, the two tables could be tagged customer and order, respectively. (Strictly speaking, the reduce side could also work out the data source by inspecting the structure of the tuples themselves.) This is where TaggedMapOutput.java comes from. As Hadoop intermediate data it must implement the Writable interface, as shown below:

public abstract class TaggedMapOutput implements Writable {
    protected Text tag;
    public TaggedMapOutput() {
        this.tag = new Text("");
    }
    public Text getTag() {
        return tag;
    }
    public void setTag(Text tag) {
        this.tag = tag;
    }
    public abstract Writable getData();  
    public TaggedMapOutput clone(JobConf job) {
        return (TaggedMapOutput) WritableUtils.clone(this, job);
    }
} 

Next, let's look at the relevant methods in DataJoinMapperBase:

protected abstract TaggedMapOutput generateTaggedMapOutput(Object value);
protected abstract Text generateGroupKey(TaggedMapOutput aRecord);

These two methods must be implemented by subclasses. As mentioned in the previous post, the join key of the two tables serves as the key of the map output; that is exactly what the second method does: it produces a key of type Text, here simply referred to as the group key. (DataJoinMapperBase also declares a third abstract method, generateInputTag(String inputFile), which derives the tag for each input source; the example later in this post implements it.) With that, the map method is straightforward:

public void map(Object key, Object value, OutputCollector output, 
                         Reporter reporter) throws IOException {
    if (this.reporter == null) {
        this.reporter = reporter;
    }
    addLongValue("totalCount", 1);
    TaggedMapOutput aRecord = generateTaggedMapOutput(value);
    if (aRecord == null) {
        addLongValue("discardedCount", 1);
        return;
    }
    Text groupKey = generateGroupKey(aRecord);
    if (groupKey == null) {
        addLongValue("nullGroupKeyCount", 1);
        return;
    }
    output.collect(groupKey, aRecord);
    addLongValue("collectedCount", 1);
}
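The inputTag carried by each tagged record (used by generateTaggedMapOutput in the example further below) is set up once per map task in DataJoinMapperBase.configure, which derives it from the name of the input file being processed. Roughly, as a sketch based on the 0.20.2 source:

public void configure(JobConf job) {
    super.configure(job);
    this.job = job;
    // the file this map task is reading; its name determines the tag
    this.inputFile = job.get("map.input.file");
    this.inputTag = generateInputTag(this.inputFile);
}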

With the map side covered, we move on to the reduce phase. Looking at the DataJoinReducerBase class, the main part of its reduce method is:

public void reduce(Object key, Iterator values,
                   OutputCollector output, Reporter reporter) throws IOException {
    if (this.reporter == null) {
        this.reporter = reporter;
    }
    SortedMap<Object, ResetableIterator> groups = regroup(key, values, reporter);
    Object[] tags = groups.keySet().toArray();
    ResetableIterator[] groupValues = new ResetableIterator[tags.length];
    for (int i = 0; i < tags.length; i++) {
        groupValues[i] = groups.get(tags[i]);
    }
    joinAndCollect(tags, groupValues, key, output, reporter);
    addLongValue("groupCount", 1);
    for (int i = 0; i < tags.length; i++) {
        groupValues[i].close();
    }
}
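The regroup call above buckets the incoming values by their tag. A simplified sketch of what it does (based on the 0.20.2 source, with counter updates and the per-group size limit omitted):

private SortedMap<Object, ResetableIterator> regroup(Object key, Iterator arg,
                                                     Reporter reporter) throws IOException {
    SortedMap<Object, ResetableIterator> retv = new TreeMap<Object, ResetableIterator>();
    while (arg.hasNext()) {
        // clone each record, because Hadoop reuses the same object across next() calls
        TaggedMapOutput aRecord = ((TaggedMapOutput) arg.next()).clone(job);
        Text tag = aRecord.getTag();
        ResetableIterator data = retv.get(tag);
        if (data == null) {
            data = new ArrayListBackedIterator();  // this is where ArrayListBackedIterator is used
            retv.put(tag, data);
        }
        data.add(aRecord);
    }
    return retv;
}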

Here the groups map holds each tag together with an iterator over the tuples that carry that tag. For example, on the reduce node that receives the data for Customer ID 3, tags = {"Customers", "Orders"}, and groupValues holds the corresponding iterators over {"3,Jose Madriz,281-330-8004"} and {"3,A,12.95,02-Jun-2008", "3,D,25.02,22-Jan-2009"}. Ultimately, the join of the tuples themselves is delegated to:

protected abstract TaggedMapOutput combine(Object[] tags, Object[] values);

This method is implemented by subclasses.
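For reference, joinAndCollect enumerates the cross product of the per-tag iterators: it picks one tuple from every source, hands that combination to combine, and collects the result. A simplified sketch of its recursive core (reporting details omitted):

private void joinAndCollect(Object[] tags, ResetableIterator[] values, int pos,
                            Object[] partialList, Object key,
                            OutputCollector output, Reporter reporter) throws IOException {
    if (pos == values.length) {
        // one tuple has been chosen from every source: let the subclass combine them
        TaggedMapOutput combined = combine(tags, partialList);
        if (combined != null) {
            output.collect(key, combined);
        }
        return;
    }
    ResetableIterator nextValues = values[pos];
    nextValues.reset();
    while (nextValues.hasNext()) {
        partialList[pos] = nextValues.next();
        joinAndCollect(tags, values, pos + 1, partialList, key, output, reporter);
    }
}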

Below is one implementation, provided in "Hadoop in Action":

public class DataJoin extends Configured implements Tool {
    
	public static class MapClass extends DataJoinMapperBase {
        protected Text generateInputTag(String inputFile) {
            String datasource = inputFile.split("-")[0];
            return new Text(datasource);
        }
        protected Text generateGroupKey(TaggedMapOutput aRecord) {
            String line = ((Text) aRecord.getData()).toString();
            String[] tokens = line.split(",");
            String groupKey = tokens[0];
            return new Text(groupKey);
        }
        protected TaggedMapOutput generateTaggedMapOutput(Object value) {
           TaggedWritable retv = new TaggedWritable((Text) value);
           retv.setTag(this.inputTag);
           return retv;
       }
    }
    
    public static class Reduce extends DataJoinReducerBase {
    	protected TaggedMapOutput combine(Object[] tags, Object[] values) {
    		if (tags.length < 2) return null;
    		String joinedStr = "";
    		for (int i=0; i<values.length; i++) {
    			if (i > 0) joinedStr += ",";
    			TaggedWritable tw = (TaggedWritable) values[i];
    			String line = ((Text) tw.getData()).toString();
    			String[] tokens = line.split(",", 2);
    			joinedStr += tokens[1];
    		}
    		TaggedWritable retv = new TaggedWritable(new Text(joinedStr));
    		retv.setTag((Text) tags[0]);
    		return retv;
    	}
    }
    
    public static class TaggedWritable extends TaggedMapOutput {
    	private Writable data;
    	public TaggedWritable(Writable data) {
    		this.tag = new Text("");
    		this.data = data;
    	}
    	public Writable getData() {
    		return data;
    	}
    	public void write(DataOutput out) throws IOException {
    		this.tag.write(out);
    		this.data.write(out);
    	}  	
    	public void readFields(DataInput in) throws IOException {
    		this.tag.readFields(in);
    		this.data.readFields(in);
    	}
    }
    
    public int run(String[] args) throws Exception {
		Configuration conf = getConf();
		JobConf job = new JobConf(conf, DataJoin.class);
		Path in = new Path(args[0]);
		Path out = new Path(args[1]);
		FileInputFormat.setInputPaths(job, in);
		FileOutputFormat.setOutputPath(job, out);
		job.setJobName("DataJoin");
		job.setMapperClass(MapClass.class);
		job.setReducerClass(Reduce.class);
		job.setInputFormat(TextInputFormat.class);
		job.setOutputFormat(TextOutputFormat.class);
		job.setOutputKeyClass(Text.class);
		job.setOutputValueClass(TaggedWritable.class);
		job.set("mapred.textoutputformat.separator", ",");
		JobClient.runJob(job);
		return 0;
	}
    
    public static void main(String[] args) throws Exception {
		int res = ToolRunner.run(new Configuration(), new DataJoin(), args);
		System.exit(res);
	}

} 
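One caveat about the TaggedWritable above: Hadoop instantiates Writable values through reflection when deserializing them, which requires a no-argument constructor, and readFields needs a non-null data object to read into. A commonly used adjustment (a sketch that assumes the payload is always a Text, as it is in this example) is:

public static class TaggedWritable extends TaggedMapOutput {
    private Writable data;

    // no-arg constructor: required so Hadoop can create instances via reflection
    public TaggedWritable() {
        this.tag = new Text("");
        this.data = new Text("");  // assumption: the payload is always a Text in this example
    }

    public TaggedWritable(Writable data) {
        this.tag = new Text("");
        this.data = data;
    }

    public Writable getData() {
        return data;
    }

    public void write(DataOutput out) throws IOException {
        this.tag.write(out);
        this.data.write(out);
    }

    public void readFields(DataInput in) throws IOException {
        this.tag.readFields(in);
        this.data.readFields(in);  // reads into the Text created by the no-arg constructor
    }
}

Note also that generateInputTag in MapClass takes the part of the input file path before the first "-" as the tag, so the tags seen at reduce time depend on how the input files are named.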
Reposted from bjyjtdj.iteye.com/blog/1453451