Wordcount -- MapReduce example -- Reducer

The Reducer receives (key, values) pairs, aggregates the values into the desired form, and writes the resulting (key, value) pairs back to HDFS. The MapReduce framework groups all mapper outputs that share a key, so reduce is called once per key with the full collection of its values.
E.g.
Input: (term, [1, 1, 1, 1])
Output: (term, 4)

Reducer Class Prototype:

Reducer<Text, IntWritable, Text, IntWritable> 
// Text:: INPUT_KEY
// IntWritable:: INPUT_VALUE
// Text:: OUTPUT_KEY
// IntWritable:: OUTPUT_VALUE
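
The four type parameters must line up with the rest of the job: the reducer's input types match the mapper's output types, and its output types match the classes registered on the Job. A minimal driver sketch for this WordCount job (class and path names are illustrative; TokenizerMapper stands in for the mapper class):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);  // illustrative mapper class
    job.setReducerClass(IntSumReducer.class);
    // These must agree with the last two type parameters of
    // Reducer<Text, IntWritable, Text, IntWritable>:
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}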

Reduce Method of the Reducer

Method header

public void reduce(Text key, Iterable<IntWritable> values,
                     Context context
                     ) throws IOException, InterruptedException 
// Text key:: data type of the input key
// Iterable<IntWritable> values:: data type of the input values (the framework groups all values for the same key into one Iterable)
// Context context:: handle for collecting and emitting the output (key, value) pairs
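
One detail the method header hides: Reducer.reduce has a default implementation that passes every value straight through, so a reduce method whose signature does not match exactly is simply never called, and the job silently runs that identity reducer. Annotating the method with @Override (an addition, not in the original example) turns such a mismatch into a compile-time error:

@Override
public void reduce(Text key, Iterable<IntWritable> values,
                   Context context
                   ) throws IOException, InterruptedException {
  // ... same body as in the sections below
}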

Aggregate Values

// Iterate through all the values associated with the key:
int sum = 0;
for (IntWritable val : values) {
  sum += val.get();
}
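
A caveat in case the values ever need to be cached rather than summed on the fly: Hadoop reuses the same IntWritable instance on every iteration, so storing the objects themselves would leave a list of references to a single value. A sketch of the safe pattern (the 'cached' list is illustrative):

// Copy the contents out; the framework reuses 'val' behind the scenes:
java.util.List<Integer> cached = new java.util.ArrayList<>();
for (IntWritable val : values) {
  cached.add(val.get());  // stores the int, not the reused IntWritable object
}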

Building (key, value) pairs

// Convert the built-in int into an IntWritable.
// ('result' is a reusable IntWritable field, declared in the full class below;
// reuse across calls is safe because the pair is serialized when written.)
result.set(sum);
// Write the (key, value) pair to the Context, which emits it:
context.write(key, result);

Reducer Class Summary

The framework hands the Reducer a Reducer.Context object; the (key, value) pairs written to it are serialized and stored in HDFS by the job's output format.

Overview of Reducer Class

public static class IntSumReducer
     extends Reducer<Text,IntWritable,Text,IntWritable> {
  private IntWritable result = new IntWritable();

  public void reduce(Text key, Iterable<IntWritable> values,
                     Context context
                     ) throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable val : values) {
      sum += val.get();
    }
    result.set(sum);
    context.write(key, result);
  }
}
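
Because integer addition is associative and commutative, this same class can double as a combiner, pre-aggregating counts on the map side to cut shuffle traffic. In the driver that is a single extra line:

job.setCombinerClass(IntSumReducer.class);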


Reposted from www.cnblogs.com/LexLuc/p/9571033.html