Reading Data into RDDs


Reading from a file
scala> val lines = sc.textFile("README.md")
scala> lines.collect() // display the contents
Parallelizing a collection
scala> val lines = sc.parallelize(List("i love you"))
scala> lines.collect()
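To see how either RDD was split up, you can check the partition count directly (the minPartitions argument of 4 and the explicit 2 below are illustrative choices):

scala> sc.textFile("README.md", 4).partitions.length  // at least 4 partitions for the file
scala> sc.parallelize(List("i love you"), 2).partitions.length  // 2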


Adjusting the number of partitions with coalesce() / repartition()
val rdd3 = sc.parallelize(List(1, 2, 3, 4, 5, 6), 2)
rdd3.coalesce(4, true) // to grow the partition count, the second argument must be true;
                       // it defaults to false, meaning no shuffle
rdd3.repartition(4)    // equivalent to calling rdd3.coalesce(4, true)

val rdd4 = sc.parallelize(List(1, 2, 3, 4, 5, 6), 7)
rdd4.coalesce(2) // shrink from 7 partitions down to 2
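Both calls return a new RDD; the original is left unchanged. A quick sanity check, assuming the rdd3/rdd4 definitions above:

rdd3.coalesce(4, true).partitions.length // 4
rdd3.coalesce(4).partitions.length       // still 2: without a shuffle, coalesce cannot add partitions
rdd4.coalesce(2).partitions.length       // 2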


saveAsTextFile("path")将RDD保存到文件系统中(可本地可HDFS)


Partitioners


HashPartitioner(numPartitions): assigns each record to a partition by key.hashCode() modulo the number of partitions (taking a non-negative modulus).


import org.apache.spark.HashPartitioner
scala> val rdd1 = sc.parallelize(List(("a", 1), ("b", 1), ("a", 3), ("b", 2), ("c", 1),("k", 1),("w", 1)))
scala> rdd1.partitionBy(new HashPartitioner(2))
scala> Tools.debug(res49)
partition:[0]
(b,1)
(b,2)
partition:[1]
(a,1)
(a,3)
(c,1)
(k,1)
(w,1)
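Tools.debug above is a helper from the original post, not part of Spark's API. A minimal sketch of an equivalent (the name debug is assumed), which prints each partition's index followed by its contents:

def debug[T](rdd: org.apache.spark.rdd.RDD[T]): Unit =
  rdd.glom()            // one Array[T] per partition
     .collect()
     .zipWithIndex
     .foreach { case (part, idx) =>
       println(s"partition:[$idx]")
       part.foreach(println)
     }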


RangePartitioner(numPartitions, rdd): samples the RDD's keys and splits them into contiguous, sorted ranges, so every key in partition 0 sorts before every key in partition 1 (records within a partition are not themselves sorted)
scala> import org.apache.spark.RangePartitioner
import org.apache.spark.RangePartitioner
scala> rdd1.partitionBy(new RangePartitioner(2, rdd1))
res53: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[41] at partitionBy at <console>:41
scala> Tools.debug(res53)
partition:[0]
(a,1)
(b,1)
(a,3)
(b,2)
partition:[1]
(c,1)
(k,1)
(w,1)
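To confirm which partitioner an RDD carries, inspect its partitioner field (an Option[Partitioner]); RDDs that were never partitioned by key report None:

scala> res53.partitioner // Some(org.apache.spark.RangePartitioner@...)
scala> rdd1.partitioner  // None: rdd1 itself was never partitioned by key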


Reposted from blog.csdn.net/zhouzhuo_csuft/article/details/80613887