1. Case Study
Given the following data, where the four columns are phone number, upstream traffic, downstream traffic, and total traffic: encapsulate the four columns in a Bean object and output the Top N records sorted by total traffic in descending order, without having the Bean implement the WritableComparable interface.
Note: when copying and pasting the data, do not leave any blank lines.
13470253144 180 180 360
13509468723 7335 110349 117684
13560439638 918 4938 5856
13568436656 3597 25635 29232
13590439668 1116 954 2070
13630577991 6960 690 7650
13682846555 1938 2910 4848
13729199489 240 0 240
13736230513 2481 24681 27162
13768778790 120 120 240
13846544121 264 0 264
13956435636 132 1512 1644
13966251146 240 0 240
13975057813 11058 48243 59301
13992314666 3008 3720 6728
15043685818 3659 3538 7197
15910133277 3156 2936 6092
15959002129 1938 180 2118
18271575951 1527 2106 3633
18390173782 9531 2412 11943
84188413 4116 1432 5548
2. Approach and Implementation
2.1 FlowSortbean
First, encapsulate the fields above in a FlowSortbean class that implements Hadoop's Writable serialization interface. The code is as follows:
package com.hadooptest.rawcomparatortest;
import org.apache.hadoop.io.Writable;
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
public class FlowSortbean implements Writable {
private String phoneNum;
private Long upFlow;
private Long downFlow;
private Long sumFlow;
public String getPhoneNum() {
return phoneNum;
}
public void setPhoneNum(String phoneNum) {
this.phoneNum = phoneNum;
}
public Long getUpFlow() {
return upFlow;
}
public void setUpFlow(Long upFlow) {
this.upFlow = upFlow;
}
public Long getDownFlow() {
return downFlow;
}
public void setDownFlow(Long downFlow) {
this.downFlow = downFlow;
}
public Long getSumFlow() {
return sumFlow;
}
public void setSumFlow(Long sumFlow) {
this.sumFlow = sumFlow;
}
@Override
public String toString() {
return phoneNum + "\t" + upFlow + "\t" + downFlow + "\t" + sumFlow;
}
@Override
public void write(DataOutput out) throws IOException {
out.writeUTF(phoneNum);
out.writeLong(upFlow);
out.writeLong(downFlow);
out.writeLong(sumFlow);
}
@Override
public void readFields(DataInput in) throws IOException {
phoneNum=in.readUTF();
upFlow=in.readLong();
downFlow=in.readLong();
sumFlow=in.readLong();
}
}
2.2 MyComparator (pay attention here~)
Since the records need to be sorted, the data must go through the reduce phase (sorting happens during shuffle), with FlowSortbean passed as the key. However, because no compare method has been defined yet, running the job with FlowSortbean as the key throws a class cast exception; see section 4, Exception Handling.
Define a custom comparator, MyComparator, that implements the RawComparator interface and overrides its two compare methods. My understanding of the two compare methods is explained in the comments in the code.
The code for MyComparator is as follows:
package com.hadooptest.rawcomparatortest;
import org.apache.hadoop.io.DataInputBuffer;
import org.apache.hadoop.io.RawComparator;
import java.io.IOException;
public class MyComparator implements RawComparator<FlowSortbean> {
private final DataInputBuffer buffer = new DataInputBuffer();
private final FlowSortbean key1 = new FlowSortbean();
private final FlowSortbean key2 = new FlowSortbean();
// This method is declared by RawComparator. Its parameters are the serialized bytes of two keys;
// its job is to deserialize the map-phase output before comparing.
@Override
public int compare(byte[] b1, int s1, int l1, byte[] b2, int s2, int l2) {
try {
// parse key1
buffer.reset(b1, s1, l1);
key1.readFields(buffer);
// parse key2
buffer.reset(b2, s2, l2);
key2.readFields(buffer);
// clean up reference
buffer.reset(null, 0, 0);
} catch (IOException e) {
throw new RuntimeException(e);
}
return compare(key1, key2);
}
// This is the compare method from Comparator, the parent interface of RawComparator; its parameters
// are the deserialized objects. Because the keys arrive as serialized bytes and must be deserialized
// before they can be compared, the framework cannot call this Comparator method directly.
@Override
public int compare(FlowSortbean o1, FlowSortbean o2) {
// Negate the natural ordering to sort by total traffic in descending order
return -o1.getSumFlow().compareTo(o2.getSumFlow());
}
}
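The deserialize-then-compare pattern in the byte-level compare method can be tried outside Hadoop with plain java.io streams. The sketch below is illustrative only: it serializes a hypothetical (phone, sumFlow) pair the same way FlowSortbean's write() would (a UTF string followed by a long), then compares two byte arrays by reading the values back, mirroring MyComparator's compare(byte[], ...):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class RawCompareDemo {
    // Serialize (phone, sumFlow) the way FlowSortbean's write() does: UTF string, then a long.
    static byte[] serialize(String phone, long sumFlow) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeUTF(phone);
        out.writeLong(sumFlow);
        return bos.toByteArray();
    }

    // Deserialize-then-compare, mirroring MyComparator.compare(byte[], ...): descending by sumFlow.
    static int compareRaw(byte[] b1, byte[] b2) throws IOException {
        DataInputStream in1 = new DataInputStream(new ByteArrayInputStream(b1));
        DataInputStream in2 = new DataInputStream(new ByteArrayInputStream(b2));
        in1.readUTF();                    // skip the phone number
        long sum1 = in1.readLong();
        in2.readUTF();
        long sum2 = in2.readLong();
        return -Long.compare(sum1, sum2); // negate for descending order
    }

    public static void main(String[] args) throws IOException {
        byte[] a = serialize("13509468723", 117684L);
        byte[] b = serialize("13729199489", 240L);
        // a has the larger total, so in descending order it sorts first (negative result).
        System.out.println(compareRaw(a, b) < 0); // prints true
    }
}
```

Note that real Hadoop comparators avoid full deserialization where possible and compare bytes directly for speed; the version in this article trades that away for simplicity.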
2.3 FlowcountSortMapper
The rest should need no elaboration: as in other standard examples, we define the corresponding Mapper and Reducer classes. FlowcountSortMapper parses each input text line into a FlowSortbean. Straight to the code:
package com.hadooptest.rawcomparatortest;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import java.io.IOException;
public class FlowcountSortMapper extends Mapper<LongWritable, Text, FlowSortbean, NullWritable> {
FlowSortbean key_out = new FlowSortbean();
NullWritable value_out = NullWritable.get();
@Override
protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
String[] words = value.toString().split("\t");
key_out.setPhoneNum(words[0]);
key_out.setUpFlow(Long.parseLong(words[1]));
key_out.setDownFlow(Long.parseLong(words[2]));
key_out.setSumFlow(Long.parseLong(words[3]));
context.write(key_out,value_out);
}
}
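The tab-split parsing inside the map method can be checked in isolation with plain Java. A minimal sketch, using one line of the sample data from section 1:

```java
public class ParseDemo {
    public static void main(String[] args) {
        // One line of the input file: phone, upFlow, downFlow, sumFlow, separated by tabs.
        String line = "13470253144\t180\t180\t360";
        String[] words = line.split("\t");
        long upFlow = Long.parseLong(words[1]);
        long downFlow = Long.parseLong(words[2]);
        long sumFlow = Long.parseLong(words[3]);
        // For this data set the fourth column equals the sum of the second and third.
        System.out.println(words[0] + ": " + (upFlow + downFlow == sumFlow)); // prints 13470253144: true
    }
}
```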
2.4 FlowcountSortReducer
Since this case study only demonstrates the effect of a custom comparator implementing the RawComparator interface and has no other business logic, there is no need to override the reduce method; the parent Reducer class's reduce method can be used as-is, writing the map output key and value straight through.
package com.hadooptest.rawcomparatortest;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Reducer;
public class FlowcountSortReducer extends Reducer<FlowSortbean, NullWritable, FlowSortbean, NullWritable> {
}
The source of the parent Reducer class's reduce method is as follows:
/**
* This method is called once for each key. Most applications will define
* their reduce class by overriding this method. The default implementation
* is an identity function.
*/
@SuppressWarnings("unchecked")
protected void reduce(KEYIN key, Iterable<VALUEIN> values, Context context
) throws IOException, InterruptedException {
for(VALUEIN value: values) {
context.write((KEYOUT) key, (VALUEOUT) value);
}
}
2.5 FlowcountSortDriver
Since we defined a custom comparator, the Driver class must set the job's sort comparator class to MyComparator. Other points worth noting are explained in the comments in the code.
package com.hadooptest.rawcomparatortest;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import java.io.IOException;
public class FlowcountSortDriver {
public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
Path inputPath=new Path("e:/phone_data2.txt");
Path outputPath=new Path("e:/output13");
Configuration conf = new Configuration();
// Make sure the output directory does not already exist
FileSystem fs=FileSystem.get(conf);
if (fs.exists(outputPath)) {
fs.delete(outputPath, true);
}
Job job = Job.getInstance(conf);
job.setJarByClass(FlowcountSortDriver.class);
// Set the Mapper and Reducer classes for the job, and their output key-value types
job.setMapperClass(FlowcountSortMapper.class);
job.setReducerClass(FlowcountSortReducer.class);
// If the Mapper's output types match the Reducer's, the Mapper output type settings can be omitted
/* job.setMapOutputKeyClass(LongWritable.class);
job.setMapOutputValueClass(Text.class);*/
job.setOutputKeyClass(FlowSortbean.class);
job.setOutputValueClass(NullWritable.class);
// Set the input and output paths
FileInputFormat.setInputPaths(job, inputPath);
FileOutputFormat.setOutputPath(job, outputPath);
// Set the comparator used to sort keys
job.setSortComparatorClass(MyComparator.class);
boolean result = job.waitForCompletion(true);
System.exit(result ? 0 : 1);
}
}
3. Output
13509468723 7335 110349 117684
13975057813 11058 48243 59301
13568436656 3597 25635 29232
13736230513 2481 24681 27162
18390173782 9531 2412 11943
13630577991 6960 690 7650
15043685818 3659 3538 7197
13992314666 3008 3720 6728
15910133277 3156 2936 6092
13560439638 918 4938 5856
84188413 4116 1432 5548
13682846555 1938 2910 4848
18271575951 1527 2106 3633
15959002129 1938 180 2118
13590439668 1116 954 2070
13956435636 132 1512 1644
13470253144 180 180 360
13846544121 264 0 264
13729199489 240 0 240
13768778790 120 120 240
13966251146 240 0 240
4. Exception Handling
If a Bean class with no compare method defined is used as the key, the job fails with a class cast exception. In that case, either define a custom comparator by following the steps from section 2.2 onward, or have the Bean implement the WritableComparable interface directly and override its compareTo method.
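The WritableComparable route boils down to giving the Bean its own compareTo. Stripped of the Hadoop Writable plumbing (write/readFields omitted so the sketch runs standalone), the descending-order comparison logic would look like this; FlowRecord is an illustrative stand-in, not the actual FlowSortbean:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Plain-Java stand-in for a Bean implementing WritableComparable:
// only the compareTo logic is shown; serialization methods are omitted.
class FlowRecord implements Comparable<FlowRecord> {
    final String phone;
    final long sumFlow;

    FlowRecord(String phone, long sumFlow) {
        this.phone = phone;
        this.sumFlow = sumFlow;
    }

    @Override
    public int compareTo(FlowRecord o) {
        // Negate the natural long ordering so larger totals sort first.
        return -Long.compare(this.sumFlow, o.sumFlow);
    }
}

public class ComparableDemo {
    public static void main(String[] args) {
        List<FlowRecord> records = new ArrayList<>(Arrays.asList(
                new FlowRecord("13729199489", 240L),
                new FlowRecord("13509468723", 117684L),
                new FlowRecord("13560439638", 5856L)));
        Collections.sort(records); // uses compareTo, so the result is descending by total traffic
        System.out.println(records.get(0).phone); // prints 13509468723
    }
}
```

In the real Hadoop version, the class would declare implements WritableComparable&lt;FlowSortbean&gt; and keep the write/readFields methods shown in section 2.1; no setSortComparatorClass call would then be needed in the Driver.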
Reference:
https://blog.csdn.net/qq_37714755/article/details/104623672