Spark operators: the join operation

Once a distributed dataset (distData) has been created, it can be operated on in parallel. For example, we can call distData.reduce(lambda a, b: a + b) to add up the elements of the array. We will describe operations on distributed datasets further below.
One important parameter for parallel collections is the number of slices, i.e., how many partitions the dataset is cut into. Spark runs one task on the cluster for each slice. Typically you want 2-4 slices for each CPU in your cluster. Normally Spark tries to set the number of slices automatically based on the state of the cluster, but you can also set it manually by passing it as the second argument to parallelize (e.g., sc.parallelize(data, 10)).
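As a minimal sketch of both points (assuming a spark-shell session, where sc is the predefined SparkContext):

val data = Array(1, 2, 3, 4, 5)
// Cut the collection into 10 slices (partitions); Spark runs one task per slice.
val distData = sc.parallelize(data, 10)
// reduce is an action, so this line triggers the parallel computation; sum == 15.
val sum = distData.reduce((a, b) => a + b)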

The join operator in Spark performs an inner join: for two pair RDDs, only records whose key appears in both RDDs show up in the result, while keys present on only one side are dropped. The transcript below demonstrates this; a sketch of the outer-join variants that keep the unmatched keys follows after it.

scala> val visit = spark.sparkContext.parallelize(List(("index.html","1.2.3.4"),("about.html","3.4.5.6"),("index.html","1.3.3.1"),("hello.html","1.2.3.4")),2)

visit: org.apache.spark.rdd.RDD[(String, String)] = ParallelCollectionRDD[0] at parallelize at <console>:23

scala> val page = spark.sparkContext.parallelize(List(("index.html","home"),("about.html","about"),("hi.html","2.3.3.3")),2)

page: org.apache.spark.rdd.RDD[(String, String)] = ParallelCollectionRDD[1] at parallelize at <console>:23

scala> visit.foreach(println(_))

(index.html,1.2.3.4)

(about.html,3.4.5.6)

(index.html,1.3.3.1)

(hello.html,1.2.3.4)

scala> page.foreach(println(_))

(index.html,home)

(about.html,about)

(hi.html,2.3.3.3)
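One caveat: the output above lands on the console only because spark-shell here appears to run in local mode. On a cluster, foreach(println) runs on the executors, so the lines go to the executors' stdout rather than the driver's. To print reliably on the driver, collect the (small) RDD first:

// Bring the data to the driver, then print locally (only safe for small RDDs).
visit.collect().foreach(println)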

Joining visit with page matches records on the key (the page name). Because index.html occurs twice in visit, it yields two output records, one per matching pair; hello.html and hi.html have no partner on the other side, so they are dropped:

scala> visit.join(page).foreach(println(_))

(about.html,(3.4.5.6,about))

(index.html,(1.2.3.4,home))

(index.html,(1.3.3.1,home))

Reversing the order of the two RDDs only swaps the positions inside the value tuple: each result is now (key, (pageValue, visitValue)):

scala> page.join(visit).foreach(println(_))

(about.html,(about,3.4.5.6))

(index.html,(home,1.2.3.4))

(index.html,(home,1.3.3.1))
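To keep the unmatched keys instead of dropping them, the pair-RDD API also provides leftOuterJoin, rightOuterJoin, and fullOuterJoin. A minimal sketch with the same visit and page RDDs (the commented lines show what the data above would produce, in some order):

// Keep every key from visit; the page side becomes an Option.
visit.leftOuterJoin(page).foreach(println)
// e.g. (hello.html,(1.2.3.4,None)), (index.html,(1.2.3.4,Some(home))), ...

// Keep every key from either side; both values become Options.
visit.fullOuterJoin(page).foreach(println)
// e.g. (hi.html,(None,Some(2.3.3.3))), (about.html,(Some(3.4.5.6),Some(about))), ...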

Reposted from blog.csdn.net/WxyangID/article/details/81318985