Summary:
GroupingComparator is used for grouping in the reduce phase. During reduce, all keys that compare as equal form one group: the first key stands in for the whole group while all of the group's values are iterated. When the reduce key is a custom bean and we want two keys to count as equal whenever a particular bean field matches, we need a custom GroupingComparator to "trick" reduce into grouping them together. It also helps to keep the map-side customizations straight: getPartition() in a custom Partitioner controls partitioning in the map phase, while compareTo() defined on the bean is used for sorting during spill and merge.
Demo data:
order id, amount, product name
order_234578,4789,笔记本
order_123456,7789,笔记本
order_123456,1789,手机
order_234578,4789,手机
order_123456,3789,笔记本
order_00001,4789,笔记本
order_00002,7789,笔记本
order_00001,5789,洗衣机
order_00002,17789,服务器
From the order records above, find the highest-amount trade in each order.
Design:
1. Use the whole bean ("order id + amount") as the key, so that the map output is partitioned by order id and sorted by amount before being sent to reduce.
2. On the reduce side, use a GroupingComparator to group the k/v pairs that share an order id; the first record of each group is then the maximum.
GroupingComparator code:
package com.zsy.mr.groupingcomparator;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;
public class ItemIdGroupingComparator extends WritableComparator {

    protected ItemIdGroupingComparator() {
        // Register OrderBean and tell the comparator to create instances for deserialization
        super(OrderBean.class, true);
    }

    @SuppressWarnings("rawtypes")
    @Override
    public int compare(WritableComparable a, WritableComparable b) {
        // Two beans belong to the same reduce group iff their itemIds are equal
        OrderBean aBean = (OrderBean) a;
        OrderBean bBean = (OrderBean) b;
        return aBean.getItemId().compareTo(bBean.getItemId());
    }
}
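To see the division of labor between the two comparators without spinning up a cluster, here is a plain-Java sketch (illustrative class and field names, no Hadoop dependency): `SORT` mirrors OrderBean.compareTo, while `GROUP` mirrors ItemIdGroupingComparator's itemId-only comparison.

```java
import java.util.Comparator;

public class GroupingSketch {

    static class Bean {
        final String itemId;
        final float price;

        Bean(String itemId, float price) {
            this.itemId = itemId;
            this.price = price;
        }
    }

    // Mirrors OrderBean.compareTo: itemId ascending, then price descending
    static final Comparator<Bean> SORT =
            Comparator.comparing((Bean b) -> b.itemId)
                      .thenComparing(Comparator.comparing((Bean b) -> b.price).reversed());

    // Mirrors ItemIdGroupingComparator.compare: itemId only
    static final Comparator<Bean> GROUP = Comparator.comparing(b -> b.itemId);

    public static void main(String[] args) {
        Bean a = new Bean("order_123456", 7789f);
        Bean b = new Bean("order_123456", 1789f);
        // The sort comparator separates them (higher price first)...
        System.out.println(SORT.compare(a, b) < 0);  // true
        // ...but the grouping comparator calls them equal, so reduce
        // sees them as a single group
        System.out.println(GROUP.compare(a, b) == 0); // true
    }
}
```

The deliberate disagreement between the two is the whole trick: sorting pushes the highest price to the front of each itemId run, and grouping then hands that entire run to one reduce() call.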
Partitioner code:
package com.zsy.mr.groupingcomparator;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Partitioner;
public class ItemIdPartitioner extends Partitioner<OrderBean, NullWritable> {

    // Records with the same order id are sent to the same partition; the number
    // of partitions equals the number of reduce tasks set by the user (numReduceTasks)
    @Override
    public int getPartition(OrderBean key, NullWritable value, int numReduceTasks) {
        // Mask off the sign bit so a negative hashCode cannot yield a negative index
        return (key.getItemId().hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}
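The getPartition() arithmetic can be checked in plain Java with no Hadoop at all (`PartitionSketch` is an illustrative name). String.hashCode() can be negative, and masking with Integer.MAX_VALUE clears the sign bit, so the modulo always lands in [0, numReduceTasks).

```java
public class PartitionSketch {

    // Same arithmetic as ItemIdPartitioner.getPartition()
    static int partition(String itemId, int numReduceTasks) {
        return (itemId.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }

    public static void main(String[] args) {
        int numReduceTasks = 2;
        for (String id : new String[]{"order_00001", "order_00002",
                                      "order_123456", "order_234578"}) {
            // The mapping is deterministic: the same id always lands in the
            // same partition, so one reduce task receives every record of an order
            System.out.println(id + " -> partition " + partition(id, numReduceTasks));
        }
    }
}
```

Without the mask, a negative hashCode would make the `%` expression negative and the job would fail with an illegal-partition error.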
OrderBean code:
package com.zsy.mr.groupingcomparator;
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.WritableComparable;
public class OrderBean implements WritableComparable<OrderBean> {

    private String itemId;
    private String productName;
    private Float price;

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeUTF(itemId);
        out.writeUTF(productName);
        out.writeFloat(price);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        // Read the fields in exactly the order they were written
        this.itemId = in.readUTF();
        this.productName = in.readUTF();
        this.price = in.readFloat();
    }

    @Override
    public int compareTo(OrderBean o) {
        // Compare by order id first; if the ids are equal, compare by price
        // in descending order so the highest price sorts first within an order
        int result = this.itemId.compareTo(o.getItemId());
        if (result == 0) {
            result = -this.price.compareTo(o.price);
        }
        return result;
    }

    public String getItemId() {
        return itemId;
    }

    public void setItemId(String itemId) {
        this.itemId = itemId;
    }

    public String getProductName() {
        return productName;
    }

    public void setProductName(String productName) {
        this.productName = productName;
    }

    public float getPrice() {
        return price;
    }

    public void setPrice(float price) {
        this.price = price;
    }

    @Override
    public String toString() {
        return "itemId=" + itemId + ", productName=" + productName + ", price=" + price;
    }
}
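The write()/readFields() pair is nothing more than ordinary DataOutput/DataInput calls, so the round trip can be sketched with plain Java streams (illustrative class name, no Hadoop dependency). The only contract is that readFields() reads the fields in exactly the order write() wrote them.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

public class WritableRoundTrip {

    // Serialize three fields the way OrderBean.write() does, then read them
    // back the way OrderBean.readFields() does
    static Object[] roundTrip(String itemId, String productName, float price) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(buf);
            out.writeUTF(itemId);
            out.writeUTF(productName);
            out.writeFloat(price);

            DataInputStream in = new DataInputStream(
                    new ByteArrayInputStream(buf.toByteArray()));
            // Must match the write order: UTF, UTF, float
            return new Object[]{in.readUTF(), in.readUTF(), in.readFloat()};
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        Object[] fields = roundTrip("order_123456", "笔记本", 7789f);
        System.out.println(fields[0] + ", " + fields[1] + ", " + fields[2]);
    }
}
```

Swapping the read order (e.g. readFloat() first) would not fail fast; it would silently produce garbage, which is why the field order comment in readFields() matters.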
GroupingCommparatorSort (driver) code:
package com.zsy.mr.groupingcomparator;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
public class GroupingCommparatorSort {

    static class GroupingCommparatorSortMapper extends Mapper<LongWritable, Text, OrderBean, NullWritable> {
        OrderBean orderBean = new OrderBean();

        @Override
        protected void map(LongWritable key, Text value,
                Mapper<LongWritable, Text, OrderBean, NullWritable>.Context context)
                throws IOException, InterruptedException {
            String[] str = value.toString().split(",");
            orderBean.setItemId(str[0]);
            orderBean.setPrice(Float.parseFloat(str[1]));
            orderBean.setProductName(str[2]);
            context.write(orderBean, NullWritable.get());
        }
    }

    static class GroupingCommparatorSortReducer extends Reducer<OrderBean, NullWritable, OrderBean, NullWritable> {
        @Override
        protected void reduce(OrderBean key, Iterable<NullWritable> values,
                Reducer<OrderBean, NullWritable, OrderBean, NullWritable>.Context context)
                throws IOException, InterruptedException {
            // Keys within a group arrive sorted by price descending, so the first
            // key is the maximum; writing it once yields the top trade per order
            context.write(key, NullWritable.get());
        }
    }
    /**
     * main: configures and submits the MapReduce job.
     *
     * @author zhaoshouyun
     * @param args args[0] is the input directory, args[1] is the output directory
     * @since 1.0
     */
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        /*
         * conf.set("mapreduce.framework.name", "yarn");
         * conf.set("yarn.resourcemanager.hostname", "hadoop01");
         */
        Job job = Job.getInstance(conf);
        job.setJarByClass(GroupingCommparatorSort.class);
        // Specify the Mapper and Reducer classes this job uses
        job.setMapperClass(GroupingCommparatorSortMapper.class);
        job.setReducerClass(GroupingCommparatorSortReducer.class);
        // Specify the mapper output k/v types; when they match the reduce
        // output types, setting only the final output types is enough
        // job.setMapOutputKeyClass(OrderBean.class);
        // job.setMapOutputValueClass(NullWritable.class);
        // Specify the final output k/v types (the reduce output types)
        job.setOutputKeyClass(OrderBean.class);
        job.setOutputValueClass(NullWritable.class);
        // Specify the job's input directory
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        // Specify the job's output directory
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // Set the custom grouping comparator
        job.setGroupingComparatorClass(ItemIdGroupingComparator.class);
        // Set the custom partitioner
        job.setPartitionerClass(ItemIdPartitioner.class);
        // Use two reduce tasks
        job.setNumReduceTasks(2);
        // Submit the job configuration, plus the jar containing all the job's
        // classes, to YARN for execution
        // job.submit() returns no result, so waitForCompletion is preferred
        boolean res = job.waitForCompletion(true);
        System.exit(res ? 0 : 1);
    }
}
Run result, part-00000:
Run result, part-00001:
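Since which part file each order lands in depends on the hash partitioning, here is a plain-Java simulation (illustrative class name, no Hadoop) of what the job computes overall: sort the demo rows the way OrderBean.compareTo does, then keep the first record of each itemId run, which is exactly what the grouping comparator makes reduce do. Note that order_234578 has two 4789 trades, and that tie may break either way in a real run.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class ExpectedOutputSketch {

    static List<String> topPerOrder(List<String[]> rows) {
        // itemId ascending, then price descending, mirroring OrderBean.compareTo
        rows.sort(Comparator.comparing((String[] r) -> r[0])
                .thenComparing(Comparator
                        .comparingDouble((String[] r) -> Float.parseFloat(r[1]))
                        .reversed()));
        // Grouping-comparator effect: the first record of each itemId run wins
        List<String> out = new ArrayList<>();
        String lastId = null;
        for (String[] r : rows) {
            if (!r[0].equals(lastId)) {
                out.add(r[0] + "," + r[1] + "," + r[2]);
                lastId = r[0];
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // The demo data from the top of the article
        List<String[]> rows = new ArrayList<>(Arrays.asList(
                new String[]{"order_234578", "4789", "笔记本"},
                new String[]{"order_123456", "7789", "笔记本"},
                new String[]{"order_123456", "1789", "手机"},
                new String[]{"order_234578", "4789", "手机"},
                new String[]{"order_123456", "3789", "笔记本"},
                new String[]{"order_00001", "4789", "笔记本"},
                new String[]{"order_00002", "7789", "笔记本"},
                new String[]{"order_00001", "5789", "洗衣机"},
                new String[]{"order_00002", "17789", "服务器"}));
        // One line per order: its highest-amount trade
        topPerOrder(rows).forEach(System.out::println);
    }
}
```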