Spark Core Operations


Creating a SparkContext object

Initializing Spark: the first thing a Spark program must do is create a SparkContext object, which tells Spark how to access the cluster. To create a SparkContext, you first need to build a SparkConf object that contains information about your application. Only one SparkContext may be active per JVM; you must stop the active SparkContext before creating a new one.

import org.apache.spark.{SparkConf, SparkContext}

// local[4] runs Spark locally with 4 worker threads
val conf = new SparkConf().setAppName("SparkCore_RDD").setMaster("local[4]")
val sc = new SparkContext(conf)
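Because only one SparkContext can be active per JVM, an existing context has to be stopped before another one is created. A minimal sketch (reusing the conf above; not part of the original post):

sc.stop()                          // shut down the active context
val sc2 = new SparkContext(conf)   // a fresh context can now be created in this JVM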

============================================================

// create an RDD by parallelizing a collection
val rdd2_1 = sc.parallelize(List(5, 6, 4, 7, 3, 8, 2, 9, 1, 10))
// multiply each element of rdd2_1 by 2, then sort ascending
val rdd2_2 = rdd2_1.map(_ * 2).sortBy(x => x, true)
// keep only elements greater than or equal to 10
val rdd2_3 = rdd2_2.filter(_ >= 10)
// print the results
rdd2_2.foreach(x => println(x))
rdd2_3.foreach(x => println(x))
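Note that foreach runs on the executors, so the print order of a sorted RDD is not guaranteed. A minimal sketch (same RDDs as above) that collects the results to the driver in RDD order:

// collect() returns the elements to the driver as an Array, preserving RDD order
println(rdd2_2.collect().mkString(", "))   // 2, 4, 6, 8, 10, 12, 14, 16, 18, 20
println(rdd2_3.collect().mkString(", "))   // 10, 12, 14, 16, 18, 20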

===========================================================

val rdd3_1 = sc.parallelize(Array("a b c", "d e f", "h i j"))
// split each element of rdd3_1, then flatten the results
val rdd3_2 = rdd3_1.flatMap(_.split(' '))
rdd3_2.foreach(x =>println(x))
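To see why flatMap is used here instead of map, a quick sketch comparing the two on the same rdd3_1:

// map produces one output per input element: an RDD[Array[String]]
rdd3_1.map(_.split(' ')).collect()   // Array(Array(a, b, c), Array(d, e, f), Array(h, i, j))
// flatMap flattens those arrays into a single RDD[String]
rdd3_2.collect()                     // Array(a, b, c, d, e, f, h, i, j)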

===========================================================

val rdd4_1 = sc.parallelize(List(5, 6, 4, 3))
val rdd4_2 = sc.parallelize(List(1, 2, 3, 4))
// union
val rdd4_3 = rdd4_1.union(rdd4_2)
// intersection
val rdd4_4 = rdd4_1.intersection(rdd4_2)
rdd4_3.foreach(x =>println(x))
rdd4_4.foreach(x =>println(x))
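union keeps duplicates, while intersection triggers a shuffle; a short sketch (same RDDs) showing the results and how to deduplicate with distinct:

rdd4_3.collect()              // Array(5, 6, 4, 3, 1, 2, 3, 4) – 3 and 4 appear twice
rdd4_4.collect()              // Array(3, 4), order may vary after the shuffle
rdd4_3.distinct().collect()   // duplicates removed: 1, 2, 3, 4, 5, 6 (order may vary)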

===============================================================

val rdd5_1 = sc.parallelize(List(("tom", 1), ("jerry", 3), ("kitty", 2)))
val rdd5_2 = sc.parallelize(List(("jerry", 2), ("tom", 1), ("shuke", 2)))
// join (inner join on key)
val rdd5_3 = rdd5_1.join(rdd5_2)
// union
val rdd5_4 = rdd5_1 union rdd5_2
// group by key
val rdd5_5 = rdd5_4.groupByKey
rdd5_3.foreach(x => println(x))
println("--------------------------")
rdd5_5.foreach(x => println(x))
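The grouped values can be aggregated further; a small sketch (continuing from rdd5_3 and rdd5_5 above) showing the expected contents and a per-key sum with mapValues:

// rdd5_3 (join) only keeps keys present in both RDDs:
//   (tom,(1,1)) (jerry,(3,2))
// rdd5_5 (groupByKey on the union) keeps every key:
//   (tom,CompactBuffer(1, 1)) (jerry,CompactBuffer(3, 2)) (kitty,CompactBuffer(2)) (shuke,CompactBuffer(2))
val rdd5_6 = rdd5_5.mapValues(_.sum)   // (tom,2) (jerry,5) (kitty,2) (shuke,2)
rdd5_6.foreach(println)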

==============================================================

val rdd6_1 = sc.parallelize(List(("tom", 1), ("tom", 2), ("jerry", 3), ("kitty", 2)))
val rdd6_2 = sc.parallelize(List(("jerry", 2), ("tom", 1), ("shuke", 2)))
//cogroup
val rdd6_3 = rdd6_1.cogroup(rdd6_2)
// note the difference between cogroup and groupByKey (see the sketch below)
rdd6_3.foreach(x => println(x))
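The difference, as a brief sketch: cogroup produces one entry per key with the values of each source RDD kept in separate Iterables, whereas groupByKey (after a union) merges everything into a single Iterable.

// cogroup output for rdd6_3 (one entry per key, one buffer per source RDD):
//   (tom,(CompactBuffer(1, 2),CompactBuffer(1)))
//   (jerry,(CompactBuffer(3),CompactBuffer(2)))
//   (kitty,(CompactBuffer(2),CompactBuffer()))
//   (shuke,(CompactBuffer(),CompactBuffer(2)))
val merged = rdd6_1.union(rdd6_2).groupByKey()
merged.foreach(println)   // e.g. (tom,CompactBuffer(1, 2, 1)) – sources are no longer distinguishable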

=================================================================

val rdd7_1 = sc.parallelize(List(1, 2, 3, 4, 5))
// aggregate all elements with reduce
val rdd7_2 = rdd7_1.reduce(_ + _)
println(rdd7_2)
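reduce is an action: it returns a plain value to the driver rather than a new RDD. A related sketch (same rdd7_1) using fold and aggregate, which take an explicit zero value:

val sum  = rdd7_1.fold(0)(_ + _)   // 15, like reduce but with an initial value
val stat = rdd7_1.aggregate((0, 0))(
  (acc, x) => (acc._1 + x, acc._2 + 1),      // fold an element into a (sum, count) pair
  (a, b)   => (a._1 + b._1, a._2 + b._2))    // merge two partial (sum, count) pairs
println(s"sum=$sum, avg=${stat._1.toDouble / stat._2}")   // sum=15, avg=3.0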

===============================================================

val rdd8_1 = sc.parallelize(List(("tom", 1), ("jerry", 3), ("kitty", 2),  ("shuke", 1)))
val rdd8_2 = sc.parallelize(List(("jerry", 2), ("tom", 3), ("shuke", 2), ("kitty", 5)))
val rdd8_3 = rdd8_1.union(rdd8_2)
// aggregate values by key
val rdd8_4 = rdd8_3.reduceByKey(_ + _)
rdd8_4.foreach(x => println(x))
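reduceByKey combines values on the map side before the shuffle; the same result could be obtained with groupByKey plus a sum, but less efficiently. A quick sketch:

// same output as rdd8_4, but every value is shuffled before being summed
val rdd8_5 = rdd8_3.groupByKey().mapValues(_.sum)
rdd8_5.foreach(println)   // (tom,4) (jerry,5) (kitty,7) (shuke,3)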

=================================================================

Zip operation
val rdd21_1 = sc.parallelize(Array(1, 2, 3, 4), 3)
val rdd21_2 = sc.parallelize(Array("a", "b", "c", "d"), 3)
val rdd21_3 = rdd21_1.zip(rdd21_2)
rdd21_3.foreach(println)
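zip requires both RDDs to have the same number of partitions and the same number of elements in each partition, which is why both are created with 3 partitions above. A small sketch of the result, plus zipWithIndex, which pairs each element with its position:

// rdd21_3 contains: (1,a) (2,b) (3,c) (4,d)
rdd21_2.zipWithIndex().foreach(println)   // (a,0) (b,1) (c,2) (d,3) – the index is a Long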

================================================================

Writing a WordCount program in the Spark shell
First, start HDFS.
Upload a file to HDFS at hdfs://node1.itcast.cn:9000/words.txt.
Then write the Spark program in Scala in the spark shell:
sc.textFile("hdfs://node1.itcast.cn:9000/words.txt").flatMap(_.split(" "))
.map((_,1)).reduceByKey(_+_).saveAsTextFile("hdfs://node1.itcast.cn:9000/out")

View the result with an HDFS command:

hdfs dfs -cat hdfs://node1.itcast.cn:9000/out/p*

Explanation:

sc is the SparkContext object, the entry point for submitting Spark programs
textFile("hdfs://node1.itcast.cn:9000/words.txt") reads the data from HDFS
flatMap(_.split(" ")) maps each line to words and flattens the result
map((_,1)) turns each word into a (word, 1) tuple
reduceByKey(_+_) reduces by key, summing the values for each word
saveAsTextFile("hdfs://node1.itcast.cn:9000/out") writes the result to HDFS (a local-file variant is sketched below)
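For quick experiments without HDFS, the same pipeline can read a local file. A sketch assuming a hypothetical local path /tmp/words.txt (not part of the original post), with the counts sorted in descending order before printing:

// local-file variant; /tmp/words.txt is a hypothetical path for illustration
sc.textFile("file:///tmp/words.txt")
  .flatMap(_.split(" "))
  .map((_, 1))
  .reduceByKey(_ + _)
  .sortBy(_._2, false)   // sort by count, descending
  .collect()
  .foreach(println)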
