3. MapReduce Java API Applications
The MapReduce Development Workflow
(1) Set up the development environment; the process is essentially the same as the HDFS environment setup.
(2) Write the code against the MapReduce framework.
(3) Compile and package, bundling the source code and its dependency jars into a single jar.
(4) Upload the jar to the runtime environment.
Submission uses the hadoop jar command, which has since been superseded by yarn jar; the newer command is recommended for submitting jobs (see the example below).
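As a sketch of steps (3) and (4), assuming the project is built with Maven and the resulting fat jar is named wordcount-jar-with-dependencies.jar (both the build tool and the jar name are assumptions, not from the original text), the build-and-submit sequence could look like:

mvn clean package
yarn jar target/wordcount-jar-with-dependencies.jar com.tianliangedu.driver.WordCount /tmp/input /tmp/output

Here /tmp/input and /tmp/output are hypothetical HDFS paths; note that the output directory must not exist before the job runs, or FileOutputFormat will reject the job.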
Implementing WordCount
Writing the Map Class
Mapper: the MapReduce framework's encapsulation of the Map phase.
Text: Hadoop's wrapper around Java's String, used for handling text strings in Hadoop.
IntWritable: Hadoop's wrapper around Java's Integer, used for handling integers in Hadoop.
Context: the context object for interacting with the Hadoop runtime, e.g., emitting key/value pairs from the Map phase, reading distributed-cache data, and passing parameters into a job (a sketch of parameter passing follows the mapper code below).
StringTokenizer: a utility class for operating on String objects; here it splits a string on whitespace, so new StringTokenizer("hello world") yields the tokens "hello" and "world".
package com.tianliangedu.mapper;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MyTokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

    // Reuse a single writable holding the constant 1 to avoid per-record allocation.
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        // Split the input line on whitespace and emit (token, 1) for each token.
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, one);
        }
    }
}
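The WordCount mapper only uses Context to emit key/value pairs. As a minimal sketch of the parameter-passing use of Context mentioned above, a mapper can read job parameters from the Configuration that Context carries. The parameter name wordcount.min.length and the length filter below are illustrative assumptions, not part of the original example:

package com.tianliangedu.mapper;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical variant: reads a job parameter through Context.
public class ParameterizedMapper extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();
    private int minLength;

    @Override
    protected void setup(Context context) {
        // "wordcount.min.length" is an illustrative name; the driver would set it
        // with conf.setInt("wordcount.min.length", 3) before creating the Job.
        minLength = context.getConfiguration().getInt("wordcount.min.length", 1);
    }

    @Override
    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
            String token = itr.nextToken();
            // Skip tokens shorter than the configured threshold.
            if (token.length() >= minLength) {
                word.set(token);
                context.write(word, one);
            }
        }
    }
}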
Writing the Reduce Class
Reducer: the MapReduce framework's encapsulation of the Reduce phase. For each distinct key, the framework groups together all values emitted under that key and hands them to reduce() as a single Iterable.
package com.tianliangedu.reducer;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // Sum the counts for this word; sum must be reset for every key.
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        result.set(sum);
        context.write(key, result);
    }
}
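Concretely, if the map phase emitted (hello, 1) three times and (world, 1) once, the shuffle delivers key hello with values [1, 1, 1] and key world with values [1]; reduce() then writes (hello, 3) and (world, 1).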
Writing the Driver Class
Configuration: the same Configuration used with HDFS, responsible for loading and passing configuration parameters.
Job: the abstraction of one round of MapReduce work, i.e., the management class for an entire MapReduce execution.
FileInputFormat: utility class for specifying the job's input data path.
FileOutputFormat: utility class for specifying the job's output data path.
package com.tianliangedu.driver;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import com.tianliangedu.mapper.MyTokenizerMapper;
import com.tianliangedu.reducer.IntSumReducer;

public class WordCount {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "天亮 WordCount");
        // Tell Hadoop which jar contains the job's classes.
        job.setJarByClass(WordCount.class);
        job.setMapperClass(MyTokenizerMapper.class);
        // Reuse the reducer as a map-side combiner (see the note below).
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // args[0] = input path, args[1] = output path (must not already exist).
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
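Two points about this driver are worth noting. Registering IntSumReducer as the combiner is safe here because word counting only sums integers, and integer addition is associative and commutative, so pre-aggregating (word, 1) pairs on the map side changes nothing except the volume of shuffled data. After a successful run, the output path passed as args[1] contains one part-r-NNNNN file per reducer, with tab-separated lines such as "hello	3".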