Simple use of Hadoop MapReduce on Windows

Maven dependency to import:

        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-core</artifactId>
            <version>1.2.1</version>
        </dependency>

Using an archive tool, open hadoop-core-1.2.1.jar and replace org.apache.hadoop.fs.FileUtil.class with a patched version (see the sketch below); otherwise job setup fails on Windows when Hadoop tries to set POSIX-style file permissions on its staging directories.
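
The patched class is typically the stock FileUtil source from Hadoop 1.2.1 recompiled with the permission check turned into a no-op; on NTFS that check always fails with "Failed to set permissions of path" and kills the job before it starts. A minimal sketch of the changed method, assuming you rebuild it from the 1.2.1 source (the rest of the class stays as shipped):

    // Inside org.apache.hadoop.fs.FileUtil (Hadoop 1.2.1 source):
    private static void checkReturnValue(boolean rv, File p,
                                         FsPermission permission)
                                         throws IOException {
        // The original body threw when rv was false:
        //   throw new IOException("Failed to set permissions of path: " + p +
        //       " to " + String.format("%04o", permission.toShort()));
        // Patched: ignore the failed permission change so the local job
        // runner can create its staging directories on Windows.
    }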

Sample code


import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

import org.apache.hadoop.mapred.lib.LongSumReducer;
import org.apache.hadoop.mapred.lib.TokenCountMapper;

import java.io.IOException;

public class WordCount {
    public static void main(String[] argv) throws IOException {

        JobClient client = new JobClient();
        JobConf conf = new JobConf(WordCount.class);

        // Read every file under "input" and write results to "output"
        // (the output directory must not exist before the job runs).
        FileInputFormat.addInputPath(conf, new Path("input"));
        FileOutputFormat.setOutputPath(conf, new Path("output"));

        // Each output record is a word (Text) and its count (LongWritable).
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(LongWritable.class);

        // Built-in classes from the old mapred API: the mapper emits
        // (token, 1) pairs and the combiner/reducer sums the counts.
        conf.setMapperClass(TokenCountMapper.class);
        conf.setCombinerClass(LongSumReducer.class);
        conf.setReducerClass(LongSumReducer.class);

        client.setConf(conf);

        // Submit the job and block until it finishes.
        JobClient.runJob(conf);
    }

}
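
To try it locally, create an input directory with one or more plain-text files in the working directory; the output directory must not exist beforehand, or the job aborts immediately. With the default TextOutputFormat, the results end up in output/part-00000 as tab-separated word/count pairs.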

The end

Source: blog.csdn.net/qq_30332665/article/details/79371869