Hadoop Serialization

1. Serialization Overview

1.1 What Is Serialization

  1. Serialization: converting an in-memory object into a byte sequence (or another data-transfer format) so that the data can be persisted to disk or transmitted over the network.
  2. Deserialization: converting a received byte sequence (or another data-transfer format), or data persisted on disk, back into an in-memory object (a small round-trip sketch follows below).
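
To make the two definitions concrete, here is a minimal round-trip sketch (the class RoundTripDemo and the choice of LongWritable are illustrative assumptions, not part of the original article): an object is written to a byte stream and then reconstructed from it.

import org.apache.hadoop.io.LongWritable;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class RoundTripDemo {
    public static void main(String[] args) throws IOException {
        LongWritable original = new LongWritable(42L);

        // serialization: in-memory object -> byte sequence
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        original.write(new DataOutputStream(bytes));

        // deserialization: byte sequence -> in-memory object
        LongWritable restored = new LongWritable();
        restored.readFields(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));

        System.out.println(restored.get()); // prints 42
    }
}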

1.2 Why Serialize

Generally speaking, "live" objects exist only in memory and disappear once the machine shuts down or loses power. Moreover, a "live" object can only be used by the local process and cannot be sent to another computer over the network. Serialization makes it possible to store a "live" object and send it to a remote machine.

1.3 Why Not Use Java Serialization (Serializable)

Java's built-in serialization (Serializable) is a heavyweight framework: once an object is serialized, it carries a lot of extra information (various checksums, headers, the inheritance hierarchy, and so on), which makes it inefficient to transmit over the network. For this reason, Hadoop developed its own serialization mechanism: Writable.
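
To make the size overhead concrete, the sketch below (an illustrative assumption, not taken from the original article) serializes a single long value both ways and prints the resulting byte counts; the Java-serialized form includes a stream header and class metadata, while the Writable form writes only the raw 8-byte value.

import org.apache.hadoop.io.LongWritable;

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

public class SizeComparison {
    public static void main(String[] args) throws IOException {
        // Java serialization: writes a stream header plus class metadata
        ByteArrayOutputStream javaBytes = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(javaBytes)) {
            oos.writeObject(Long.valueOf(1000L));
        }

        // Hadoop Writable: writes only the raw 8 bytes of the long
        ByteArrayOutputStream writableBytes = new ByteArrayOutputStream();
        new LongWritable(1000L).write(new DataOutputStream(writableBytes));

        System.out.println("Java serialization: " + javaBytes.size() + " bytes");
        System.out.println("Hadoop Writable:    " + writableBytes.size() + " bytes");
    }
}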

1.4 Characteristics of Hadoop Serialization

  1. Compact: uses storage space efficiently
  2. Fast: little extra overhead for reading and writing data
  3. Extensible: can evolve as the communication protocol is upgraded
  4. Interoperable: supports interaction across multiple languages

2. Custom Bean Objects Implementing the Serialization Interface (Writable)

In enterprise development you often need custom Bean objects. If a Bean object is to be passed around inside the Hadoop framework, it must implement the serialization interface. The requirements are:

  1. The Bean must implement the Writable interface.
  2. Deserialization requires a no-argument constructor, so the class must provide one.
  3. Override the serialization method: write.
  4. Override the deserialization method: readFields.
  5. Note that the fields must be read during deserialization in the same order they were written during serialization.
  6. To display the result in the output file, override the toString() method; separating the fields with "\t" makes the output easier to process later.
  7. If the custom Bean is to be transmitted as a key in MapReduce, it must additionally implement the Comparable interface (i.e. WritableComparable), because the Shuffle phase of MapReduce requires keys to be sortable; see the sketch after this list.
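
As noted in point 7, a Bean used as a key must also be sortable. Below is a minimal sketch of such a key class (the class name SortableFlowBean and the descending-by-total-traffic ordering are assumptions for illustration; the FlowBean in the example that follows only implements Writable):

import org.apache.hadoop.io.WritableComparable;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

public class SortableFlowBean implements WritableComparable<SortableFlowBean> {
    private long sumFlow;

    public SortableFlowBean() {
        // no-arg constructor required for deserialization
    }

    public void setSumFlow(long sumFlow) {
        this.sumFlow = sumFlow;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeLong(sumFlow);   // serialize
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        sumFlow = in.readLong();  // deserialize in the same order as write
    }

    @Override
    public int compareTo(SortableFlowBean other) {
        // sort keys by total traffic in descending order during Shuffle
        return Long.compare(other.sumFlow, this.sumFlow);
    }
}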

3. Serialization Example

  1. Requirement
    Count the upstream traffic, downstream traffic, and total traffic for each phone number.
  2. Input data format:
id    phone number    ip    upstream traffic    downstream traffic    network status code
1    13700009999    8.8.8.8    1000    3500    200
  3. Expected output format:
phone number    upstream traffic    downstream traffic    total traffic
13700009999    1000    3500    4500
  4. Sample code

Custom Bean

package cstmbean;

import org.apache.hadoop.io.Writable;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

public class FlowBean implements Writable {

    // upstream traffic
    private Long upFlow;

    // downstream traffic
    private Long downFlow;

    // total traffic
    private Long sumFlow;

    public FlowBean() {
        // No-arg constructor, required for deserialization; it can be omitted if no explicit parameterized constructor is declared
    }

    public void set(Long upFlow, Long downFlow) {
        this.upFlow = upFlow;
        this.downFlow = downFlow;
        this.sumFlow = upFlow + downFlow;
    }

    @Override
    public String toString() {
        return this.upFlow + "\t" + this.downFlow + "\t" + this.sumFlow;
    }

    public Long getUpFlow() {
        return upFlow;
    }

    public void setUpFlow(Long upFlow) {
        this.upFlow = upFlow;
    }

    public Long getDownFlow() {
        return downFlow;
    }

    public void setDownFlow(Long downFlow) {
        this.downFlow = downFlow;
    }

    public Long getSumFlow() {
        return sumFlow;
    }

    public void setSumFlow(Long sumFlow) {
        this.sumFlow = sumFlow;
    }

    // serialization method: write the fields to the output stream
    @Override
    public void write(DataOutput dataOutput) throws IOException {
        dataOutput.writeLong(this.upFlow);
        dataOutput.writeLong(this.downFlow);
        dataOutput.writeLong(this.sumFlow);
    }

    // deserialization method: read the fields in the same order they were written
    @Override
    public void readFields(DataInput dataInput) throws IOException {
        this.upFlow = dataInput.readLong();
        this.downFlow = dataInput.readLong();
        this.sumFlow = dataInput.readLong();
    }
}

Mapper

package cstmbean;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

public class FlowMapper extends Mapper<LongWritable, Text, Text, FlowBean> {
    private Text phone = new Text();
    private FlowBean bean = new FlowBean();
    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String[] words = value.toString().split("\t");
        // phone number
        phone.set(words[1]);
        // upstream traffic: third column from the end
        Long upFlow = Long.parseLong(words[words.length - 3]);
        // downstream traffic: second column from the end
        Long downFlow = Long.parseLong(words[words.length - 2]);
        // compute the total traffic from upstream and downstream traffic
        bean.set(upFlow, downFlow);
        context.write(phone, bean);
    }
}

Reducer

package cstmbean;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

public class FlowReducer extends Reducer<Text, FlowBean, Text, FlowBean> {
    // output key/value objects are reused across reduce() calls
    private Text phone = new Text();
    private FlowBean bean = new FlowBean();
    private Long upFlow;
    private Long downFlow;

    @Override
    protected void reduce(Text key, Iterable<FlowBean> values, Context context) throws IOException, InterruptedException {
        upFlow = 0L;
        downFlow = 0L;
        phone.set(key);
        // sum the upstream and downstream traffic of all records for this phone number
        for (FlowBean value : values) {
            upFlow += value.getUpFlow();
            downFlow += value.getDownFlow();
        }
        bean.set(upFlow, downFlow);
        context.write(phone, bean);
    }
}

Driver

package cstmbean;


import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class FlowDriver {

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);

        job.setJarByClass(FlowDriver.class);
        job.setMapperClass(FlowMapper.class);
        job.setReducerClass(FlowReducer.class);

        job.setMapOutputKeyClass(Text.class);
        // the Mapper's output value type is FlowBean
        job.setMapOutputValueClass(FlowBean.class);
        job.setOutputKeyClass(Text.class);
        // the Reducer's output value type is FlowBean
        job.setOutputValueClass(FlowBean.class);

        FileInputFormat.addInputPath(job, new Path("i:\\bean_input"));
        FileOutputFormat.setOutputPath(job, new Path("i:\\bean_output"));

        boolean rtn = job.waitForCompletion(true);
        System.exit(rtn ? 0 : 1);
    }
}
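
Note on running the job: the input and output paths are hard-coded local Windows paths, so this driver is meant to be run locally (for example from an IDE). The output directory must not already exist, or FileOutputFormat will fail the job. After a successful run, the result lands in the output directory (typically a part-r-00000 file); for the sample record above, the output line contains the phone number key followed by FlowBean.toString(), i.e. 13700009999, 1000, 3500, and 4500 separated by tabs.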

Reposted from blog.csdn.net/Leonardy/article/details/103871805