Start Eclipse and create a Java project, then create a test class, e.g. WordCount, that counts the words in a file.
Add the required jar libraries: open the menu Project => Properties, choose Java Build Path on the left and the Libraries tab on the right, then click Add External JARs and select all the jar files under the jars directory of your Spark installation.
WordCount.java code:
package test.spark;

import scala.Tuple2;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.sql.SparkSession;

import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.regex.Pattern;

public final class WordCount {
    private static final Pattern SPACE = Pattern.compile(" ");

    public static void main(String[] args) throws Exception {
        if (args.length < 1) {
            System.err.println("Usage: JavaWordCount <file>");
            System.exit(1);
        }

        SparkSession spark = SparkSession
            .builder()
            .appName("JavaWordCount")
            .getOrCreate();

        JavaRDD<String> lines = spark.read().textFile(args[0]).javaRDD();

        JavaRDD<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
            @Override
            public Iterator<String> call(String s) {
                return Arrays.asList(SPACE.split(s)).iterator();
            }
        });

        JavaPairRDD<String, Integer> ones = words.mapToPair(
            new PairFunction<String, String, Integer>() {
                @Override
                public Tuple2<String, Integer> call(String s) {
                    return new Tuple2<>(s, 1);
                }
            });

        JavaPairRDD<String, Integer> counts = ones.reduceByKey(
            new Function2<Integer, Integer, Integer>() {
                @Override
                public Integer call(Integer i1, Integer i2) {
                    return i1 + i2;
                }
            });

        List<Tuple2<String, Integer>> output = counts.collect();
        for (Tuple2<?, ?> tuple : output) {
            System.out.println(tuple._1() + ": " + tuple._2());
        }

        spark.stop();
    }
}
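For reference, on Java 8 or later the three anonymous inner classes above can be written more compactly as lambda expressions. This is a minimal sketch of the same transformation chain (same variable names as above), not a different algorithm:

    // split each line on spaces into individual words
    JavaRDD<String> words = lines.flatMap(s -> Arrays.asList(SPACE.split(s)).iterator());
    // pair each word with an initial count of 1
    JavaPairRDD<String, Integer> ones = words.mapToPair(s -> new Tuple2<>(s, 1));
    // sum the counts per word
    JavaPairRDD<String, Integer> counts = ones.reduceByKey((i1, i2) -> i1 + i2);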
Right-click the project you just created, choose Run As => Run Configurations, and switch to the Arguments tab.
In Program Arguments, enter the file whose words you want to count, e.g. read.txt.
In VM Arguments, add the runtime parameters for the program, e.g. -Dspark.master=local -Xmx1g.
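Alternatively, instead of passing -Dspark.master=local as a VM argument, the master can be set when building the SparkSession. A sketch of that variant, where local[*] runs Spark locally using all available cores:

    SparkSession spark = SparkSession
        .builder()
        .appName("JavaWordCount")
        .master("local[*]")   // set the master in code instead of via -Dspark.master
        .getOrCreate();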
Then run the program.