Hadoop --- MapReduce Core Concepts

MapReduce: a distributed computing framework

MapReduce: the Map and Reduce phases

  1) A job is split into a Map phase and a Reduce phase
  2) Map phase: Map Tasks
  3) Reduce phase: Reduce Tasks

MapReduce execution steps:

   1) Prepare the input data for Map processing.
   2) Mapper processing
   3) Shuffle: route keys to a Reducer according to a partitioning rule.
   4) Reduce processing
   5) Output the results

(input) <k1, v1> -> map -> <k2, v2> -> combine -> <k2, v2> -> reduce -> <k3, v3> (output)
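As an illustration, for the WordCount job developed later in this article, the key/value types at each stage would look roughly like this (the input line and counts are shown schematically):

(input)  <0, "hello world hello">                         -- byte offset, line of text
  map  -> <"hello", 1>, <"world", 1>, <"hello", 1>
combine -> <"hello", 2>, <"world", 1>                      -- optional local aggregation
reduce -> <"hello", 2>, <"world", 1> (output)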

MapReduce process flow

  1. InputFormat (abstract class) reads from the file system; e.g. FileInputFormat (abstract class), whose concrete implementations include TextInputFormat

    It splits the input data into splits:  InputSplit[] getSplits(JobConf job, int numSplits) throws IOException;
    TextInputFormat: handles plain-text input
    
  2. InputFormat covers two responsibilities: computing splits and reading records (RecordReader, RR)

     Split (vs. block): the chunk of data handed to a MapReduce job; it is the smallest unit of computation in MapReduce
            HDFS: blocksize is the smallest unit of storage in HDFS, 128 MB by default
            By default the two map one-to-one, though the relationship can be configured manually (not recommended)
    
  3. Partitioner (shuffle phase): sends records with the same key to the same Reducer (see the sketch after this list).

  4. OutputFormat writes the results; concrete implementations include TextOutputFormat
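A minimal sketch of a custom Partitioner, assuming the Text/LongWritable key-value types used by the WordCount example below; the class name MyPartitioner and the routing rule are illustrative (this is in fact the same modulo-hash rule Hadoop's default HashPartitioner uses):

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

/**
 * Illustrative partitioner: routes each word to a reduce partition
 * based on its hash code.
 */
public class MyPartitioner extends Partitioner<Text, LongWritable> {

    @Override
    public int getPartition(Text key, LongWritable value, int numPartitions) {
        // Same key -> same partition -> same Reduce Task
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}

It would be wired into the driver with job.setPartitionerClass(MyPartitioner.class) and job.setNumReduceTasks(n); when no partitioner is set, HashPartitioner applies the same rule.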

Computation diagram

 Each split corresponds to exactly one Map Task
 The shuffle groups values with the same key into the same Reduce Task
 The number of Reduce Tasks equals the number of output files

Developing a MapReduce job with the Java API

Create the Mapper class

package com.lu.hadoop.mapreduce.map;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

public class MyMapper extends Mapper<LongWritable, Text, Text, LongWritable> {

    private LongWritable one = new LongWritable(1);

    /**
     * key: the byte offset of the line within the input file
     * value: the line of text itself
     */
    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {

        // Each call receives one line of input
        String line = value.toString();

        // Split the line on the chosen delimiter (a single space here)
        String[] words = line.split(" ");

        for(String word : words){
            context.write(new Text(word), one);
        }
    }
}
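One common refinement, not in the original listing, is to reuse a single Text instance instead of allocating a new one per word; this is a well-known Hadoop idiom for reducing object churn in hot map loops, and it is safe because the framework serializes each record at the moment write() is called:

    private final Text outputKey = new Text();  // reused across map() calls

    // inside map():
    for (String word : words) {
        outputKey.set(word);
        context.write(outputKey, one);
    }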

Create the Reducer class

package com.lu.hadoop.mapreduce.reduce;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

/**
 * Aggregation step: sums the occurrence counts for each word
 */
public class MyReducer extends Reducer<Text,LongWritable,Text,LongWritable> {

    @Override
    protected void reduce(Text key, Iterable<LongWritable> values, Context context) throws IOException, InterruptedException {

        long sum = 0;
        for(LongWritable value : values){
            // Accumulate the total number of occurrences of this word (key)
            sum += value.get();
        }

        // Emit the final count for this word
        context.write(key, new LongWritable(sum));
    }
}
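Since the dataflow shown earlier includes an optional combine step, it is worth noting that this same Reducer can double as a Combiner for WordCount, because summing counts is associative and commutative. It would be registered in the driver (shown next) with one extra line:

    // Optional: run local aggregation on the map side before the shuffle
    job.setCombinerClass(MyReducer.class);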

Create the WordCountApp class

package com.lu.hadoop.mapreduce.Apps;

import com.lu.hadoop.mapreduce.map.MyMapper;
import com.lu.hadoop.mapreduce.reduce.MyReducer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;


/**
 * A WordCount application developed with MapReduce
 */
public class WordCountApp {

/**
 * The Driver: packages up the MapReduce job configuration
 */
public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {

    //Create the Configuration
    Configuration configuration = new Configuration();

    //Clean up the output directory if it already exists
    Path path = new Path(args[1]);
    FileSystem fileSystem = FileSystem.get(configuration);
    if(fileSystem.exists(path)){
        fileSystem.delete(path,true);
        System.out.println("output file exist, but is has deleted");
    }

    //Create the Job
    Job job = Job.getInstance(configuration,"wordcount");

    //Set the class that drives the job (used to locate the jar)
    job.setJarByClass(WordCountApp.class);

    //Set the input path for the job
    FileInputFormat.setInputPaths(job,new Path(args[0]));

    //Set the Mapper and its output key/value types
    job.setMapperClass(MyMapper.class);
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(LongWritable.class);

    //Set the Reducer and its output key/value types
    job.setReducerClass(MyReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(LongWritable.class);

    //Set the output path for the job
    FileOutputFormat.setOutputPath(job,new Path(args[1]));

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Compile the code above, package it into a jar file, and upload it to the server.
Run the jar:

hadoop jar hadooptrain1 com.lu.hadoop.mapreduce.Apps.WordCountApp hdfs://hadoop1:8020/wc/wordcount hdfs://hadoop1:8020/output/wc1
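Once the job finishes, the result can be read back from HDFS; the part file name below assumes the default case of a single Reduce Task producing one output file:

hadoop fs -cat hdfs://hadoop1:8020/output/wc1/part-r-00000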


Reposted from blog.csdn.net/u012133048/article/details/81193673