Hadoop Big Data Technology, MapReduce (3): NLineInputFormat Use Case

3.1.8 NLineInputFormat use case

1. Requirement
  • Count the number of occurrences of each word, and control the number of splits by specifying how many lines of each input file go into each split. In this case, every three lines form one split.
    (1) Input data
banzhang ni hao
xihuan hadoop banzhang
banzhang ni hao
xihuan hadoop banzhang
banzhang ni hao
xihuan hadoop banzhang
banzhang ni hao
xihuan hadoop banzhang
banzhang ni hao
xihuan hadoop banzhang banzhang ni hao
xihuan hadoop banzhang

(2) Expected output

Number of splits:4

(The input file has 11 lines; at three lines per split, that yields ceil(11/3) = 4 splits: three splits of three lines and one of two.)

2. Analysis
  • Tested locally on Hadoop 3.1.2 with the input data above to obtain the output; a sketch of the split arithmetic appears after this list.
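NLineInputFormat (org.apache.hadoop.mapreduce.lib.input.NLineInputFormat) still hands the mapper (file offset, line) records exactly like the default TextInputFormat; only the split boundaries change. As a minimal sketch of the split arithmetic (a hypothetical helper for illustration, not the actual Hadoop source):

public static int expectedSplits(int totalLines, int linesPerSplit) {
    // Each split takes up to linesPerSplit lines and the last split takes
    // whatever remains, so the split count is a ceiling division.
    return (totalLines + linesPerSplit - 1) / linesPerSplit;
}

Here expectedSplits(11, 3) == 4, matching the "Number of splits:4" above.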
3. Code

(1) Write the Mapper class

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

/**
 * @Author zhangyong
 * @Date 2020/3/6 9:17
 * @Version 1.0
 */
public class NLineMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
    private Text k = new Text();
    private LongWritable v = new LongWritable(1);

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // 1 Get one line
        String line = value.toString();
        // 2 Split it into words
        String[] words = line.split(" ");
        // 3 Emit (word, 1) for each word
        for (String word : words) {
            k.set(word);
            context.write(k, v);
        }
    }
}
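To sanity-check the mapper in isolation, a small MRUnit test can be written. This is a sketch, not part of the original post; it assumes the org.apache.mrunit:mrunit dependency (hadoop2 classifier) and JUnit are on the classpath:

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mrunit.mapreduce.MapDriver;
import org.junit.Test;

public class NLineMapperTest {
    @Test
    public void emitsOnePerWord() throws Exception {
        // Feed one line; expect a (word, 1) pair per word, in order.
        MapDriver.<LongWritable, Text, Text, LongWritable>newMapDriver(new NLineMapper())
                .withInput(new LongWritable(0), new Text("banzhang ni hao"))
                .withOutput(new Text("banzhang"), new LongWritable(1))
                .withOutput(new Text("ni"), new LongWritable(1))
                .withOutput(new Text("hao"), new LongWritable(1))
                .runTest();
    }
}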

(2) Write the Reducer class

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

/**
 * @Author zhangyong
 * @Date 2020/3/6 9:18
 * @Version 1.0
 */
public class NLineReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
    private LongWritable v = new LongWritable();

    @Override
    protected void reduce(Text key, Iterable<LongWritable> values, Context context) throws IOException, InterruptedException {
        long sum = 0L;
        // 1 Sum the counts for this word
        for (LongWritable value : values) {
            sum += value.get();
        }
        v.set(sum);
        // 2 Emit (word, total)
        context.write(key, v);
    }
}
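Because this reduce logic is a plain sum (associative and commutative), the same class can optionally double as a combiner to shrink the map output before the shuffle. That would be one extra line in the driver below, not present in the original code:

// Optional: reuse the reducer as a combiner to cut shuffle traffic.
job.setCombinerClass(NLineReducer.class);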

(3) Write the Driver class

import java.io.IOException;
import java.net.URISyntaxException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.NLineInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

/**
 * @Author zhangyong
 * @Date 2020/3/6 9:18
 * @Version 1.0
 */
public class NLineDriver {

    public static void main(String[] args) throws IOException, URISyntaxException, ClassNotFoundException, InterruptedException {

        // 1 Hard-code the input and output paths for a local test run
        args = new String[2];
        args[0] = "src/main/resources/nlinei/";
        args[1] = "src/main/resources/nlineo";

        // 2 Run the job locally against the local file system
        Configuration cfg = new Configuration();
        cfg.set("mapreduce.framework.name", "local");
        cfg.set("fs.defaultFS", "file:///");

        // 3 Delete the output directory if it already exists, so the job can rerun
        final FileSystem filesystem = FileSystem.get(cfg);
        if (filesystem.exists(new Path(args[1]))) {
            filesystem.delete(new Path(args[1]), true);
        }

        Job job = Job.getInstance(cfg);

        // 4 Put three lines into each InputSplit
        NLineInputFormat.setNumLinesPerSplit(job, 3);

        // 5 Use NLineInputFormat instead of the default TextInputFormat
        job.setInputFormatClass(NLineInputFormat.class);

        // 6 Set the jar location and wire up the mapper and reducer
        job.setJarByClass(NLineDriver.class);
        job.setMapperClass(NLineMapper.class);
        job.setReducerClass(NLineReducer.class);

        // 7 Set the map output key/value types
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(LongWritable.class);

        // 8 Set the final output key/value types
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);

        // 9 Set the input and output paths
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // 10 Submit the job and wait for it to finish
        job.waitForCompletion(true);
    }
}
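As an aside, NLineInputFormat.setNumLinesPerSplit(job, 3) is a convenience wrapper around a configuration property, so the same effect can be had by setting the property directly (property name as in Hadoop 2.x/3.x):

// Equivalent to NLineInputFormat.setNumLinesPerSplit(job, 3)
cfg.setInt("mapreduce.input.lineinputformat.linespermap", 3);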
4. Test results and project structure

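Running NLineDriver locally prints "Number of splits:4" in the job log, and the word counts land in src/main/resources/nlineo/part-r-00000. Counted by hand from the eleven input lines, the expected file contents are:

banzhang	12
hadoop	6
hao	6
ni	6
xihuan	6

(TextOutputFormat separates key and value with a tab, and the single reducer emits the keys in sorted order.)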


Origin: blog.csdn.net/zy13765287861/article/details/104689409