1. In the IDE, create a Maven project using the simplest archetype, maven-archetype-quickstart.
2. Edit pom.xml to add Spark support by adding the following to the <dependencies> section:
<dependency>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-resources-plugin</artifactId>
    <version>2.4.3</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.10</artifactId>
    <version>1.1.0</version>
</dependency>
3. Right-click the project and run maven-clean, then maven-install.
4. Add the Spark word count code:
package MavenDemo.SparkDemoSrc;

/**
 * Spark word count: counts the occurrences of each space-separated
 * word in a text file.
 * User: hadoop
 * Date: 2014/10/10 0010
 * Time: 19:26
 */
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import scala.Tuple2;

import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

public final class App {
    private static final Pattern SPACE = Pattern.compile(" ");

    public static void main(String[] args) throws Exception {
        if (args.length < 1) {
            System.err.println("Usage: JavaWordCount <file>");
            System.exit(1);
        }

        SparkConf sparkConf = new SparkConf().setAppName("JavaWordCount");
        JavaSparkContext ctx = new JavaSparkContext(sparkConf);
        JavaRDD<String> lines = ctx.textFile(args[0], 1);

        // Split each line on spaces to get one word per element.
        JavaRDD<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
            public Iterable<String> call(String s) {
                return Arrays.asList(SPACE.split(s));
            }
        });

        // Map each word to a (word, 1) pair.
        JavaPairRDD<String, Integer> ones = words.mapToPair(new PairFunction<String, String, Integer>() {
            public Tuple2<String, Integer> call(String s) {
                return new Tuple2<String, Integer>(s, 1);
            }
        });

        // Sum the counts for each word.
        JavaPairRDD<String, Integer> counts = ones.reduceByKey(new Function2<Integer, Integer, Integer>() {
            public Integer call(Integer i1, Integer i2) {
                return i1 + i2;
            }
        });

        List<Tuple2<String, Integer>> output = counts.collect();
        for (Tuple2<?, ?> tuple : output) {
            System.out.println(tuple._1() + ": " + tuple._2());
        }
        ctx.stop();
    }
}
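For example, an input file containing the single line "hello world hello" yields the output hello: 2 and world: 1 (the order of the printed pairs is not guaranteed, since reduceByKey does not sort its keys).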
5. Run main in local mode (a minimal sketch of the local-mode setup follows).
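The article does not show how local mode is configured, so here is a minimal sketch, assuming main is launched directly from the IDE; the setMaster("local") call and the input.txt path are illustrative assumptions, not part of the original code.

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public final class LocalRunDemo {
    public static void main(String[] args) {
        // setMaster("local") runs Spark in-process with a single worker thread,
        // so no cluster is needed for this quick test.
        SparkConf conf = new SparkConf()
                .setAppName("JavaWordCount")
                .setMaster("local");
        JavaSparkContext ctx = new JavaSparkContext(conf);
        // "input.txt" is a hypothetical sample file in the working directory.
        JavaRDD<String> lines = ctx.textFile("input.txt", 1);
        System.out.println("line count: " + lines.count());
        ctx.stop();
    }
}

Alternatively, leave App unchanged and pass -Dspark.master=local as a JVM option in the IDE run configuration; SparkConf picks up spark.* system properties automatically.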
6. Download spark-1.6.0-bin-hadoop2.6 and configure SPARK_HOME.
7. Note that this configuration step is specific to Windows.
Download the Hadoop tools for Windows (32-bit and 64-bit builds are available) and create a local Hadoop directory that contains a bin subdirectory, for example: D:\spark\hadoop-2.6.0\bin
Then put winutils.exe and the other files into the bin directory.
Address: https://github.com/sdravida/hadoop2.6_Win_x64/tree/master/bin
Set the HADOOP_HOME environment variable to the Hadoop directory (e.g., D:\spark\hadoop-2.6.0).
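If you would rather not set a system-wide environment variable, a common workaround (not from the original article) is to set the hadoop.home.dir system property before the SparkContext is created; the path below is the example directory from this step.

public final class WindowsSetup {
    public static void main(String[] args) throws Exception {
        // Must point at the directory that contains bin\winutils.exe.
        System.setProperty("hadoop.home.dir", "D:\\spark\\hadoop-2.6.0");
        // Then run the word count job exactly as in step 4.
        MavenDemo.SparkDemoSrc.App.main(args);
    }
}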
8. Run main again; you should see the word count output.