3.3 Spark RDD Key-Value Transformations (1): partitionBy, mapValues, flatMapValues

1 partitionBy
def partitionBy(partitioner: Partitioner): RDD[(K, V)]
This function repartitions the original RDD according to the given Partitioner, producing a new ShuffledRDD.
Example:

scala> var rdd1 = sc.makeRDD(Array((1,"A"),(2,"B"),(3,"C"),(4,"D")),2)
rdd1: org.apache.spark.rdd.RDD[(Int, String)] = ParallelCollectionRDD[23] at makeRDD at <console>:21

scala> rdd1.partitions.size
res20: Int = 2

// Inspect the elements in each partition of rdd1
scala> rdd1.mapPartitionsWithIndex{
         (partIdx,iter) => {
           // Collect this partition's elements into a list keyed by "part_<index>"
           var part_map = scala.collection.mutable.Map[String,List[(Int,String)]]()
           while(iter.hasNext){
             var part_name = "part_" + partIdx
             var elem = iter.next()
             if(part_map.contains(part_name)) {
               var elems = part_map(part_name)
               elems ::= elem
               part_map(part_name) = elems
             } else {
               part_map(part_name) = List[(Int,String)](elem)
             }
           }
           part_map.iterator
         }
       }.collect
res22: Array[(String, List[(Int, String)])] = Array((part_0,List((2,B), (1,A))), (part_1,List((4,D), (3,C))))
// (2,B) and (1,A) are in part_0; (4,D) and (3,C) are in part_1

// Repartition with partitionBy
scala> var rdd2 = rdd1.partitionBy(new org.apache.spark.HashPartitioner(2))
rdd2: org.apache.spark.rdd.RDD[(Int, String)] = ShuffledRDD[25] at partitionBy at <console>:23

scala> rdd2.partitions.size
res23: Int = 2

// Inspect the elements in each partition of rdd2
scala> rdd2.mapPartitionsWithIndex{
         (partIdx,iter) => {
           // Same per-partition inspection as above
           var part_map = scala.collection.mutable.Map[String,List[(Int,String)]]()
           while(iter.hasNext){
             var part_name = "part_" + partIdx
             var elem = iter.next()
             if(part_map.contains(part_name)) {
               var elems = part_map(part_name)
               elems ::= elem
               part_map(part_name) = elems
             } else {
               part_map(part_name) = List[(Int,String)](elem)
             }
           }
           part_map.iterator
         }
       }.collect

res24: Array[(String, List[(Int, String)])] = Array((part_0,List((4,D), (2,B))), (part_1,List((3,C), (1,A))))
// (4,D) and (2,B) are now in part_0; (3,C) and (1,A) are in part_1.
// HashPartitioner assigns each pair to partition key.hashCode % numPartitions;
// an Int's hash code is the value itself, so even keys land in partition 0
// and odd keys in partition 1.
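
partitionBy accepts any Partitioner, not just HashPartitioner. As a minimal sketch (hypothetical, not from the original post), a custom partitioner that routes even keys to partition 0 and odd keys to partition 1 could look like this:

import org.apache.spark.Partitioner

// Hypothetical custom partitioner: even Int keys -> partition 0, odd -> partition 1
class EvenOddPartitioner extends Partitioner {
  override def numPartitions: Int = 2
  override def getPartition(key: Any): Int = key match {
    case k: Int => if (k % 2 == 0) 0 else 1
    case _      => 0
  }
}

// Usage: rdd1.partitionBy(new EvenOddPartitioner) would place (2,"B") and (4,"D")
// in partition 0, and (1,"A") and (3,"C") in partition 1.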

2 mapValues
def mapValues[U](f: (V) => U): RDD[(K, U)]
Like map among the basic transformations, except that mapValues applies the function only to the V of each [K, V] pair and leaves the key unchanged.
Example:
scala> var rdd1 = sc.makeRDD(Array((1,"A"),(2,"B"),(3,"C"),(4,"D")),2)
rdd1: org.apache.spark.rdd.RDD[(Int, String)] = ParallelCollectionRDD[27] at makeRDD at <console>:21

scala> rdd1.mapValues(x => x + "_").collect
res26: Array[(Int, String)] = Array((1,A_), (2,B_), (3,C_), (4,D_))
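
Because mapValues cannot change keys, Spark keeps the parent RDD's partitioner, whereas a plain map over the pairs drops it. A small sketch (assuming the rdd2 built with partitionBy above is still in scope):

// mapValues preserves the parent's partitioner, so a later key-based
// operation on the result can avoid a shuffle
val kept = rdd2.mapValues(v => v + "_")
println(kept.partitioner)    // Some(HashPartitioner)

// map could rewrite the keys, so Spark conservatively drops the partitioner
val dropped = rdd2.map { case (k, v) => (k, v + "_") }
println(dropped.partitioner) // None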

3 flatMapValues
def flatMapValues[U](f: (V) => TraversableOnce[U]): RDD[(K, U)]
Like flatMap among the basic transformations, except that flatMapValues applies the function only to the V of each [K, V] pair and pairs every resulting element with the original key.
Example:
scala> rdd1.flatMapValues(x => x + "_").collect
res36: Array[(Int, Char)] = Array((1,A), (1,_), (2,B), (2,_), (3,C), (3,_), (4,D), (4,_))
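
The result type is (Int, Char) because a String is itself a sequence of Chars: "A_" flattens into 'A' and '_', each paired with the original key. A more typical use (a hypothetical sketch, not from the original post) expands a delimited value into one pair per token:

// Hypothetical sketch: split comma-separated values into one pair per token
val rdd3 = sc.makeRDD(Array((1, "a,b"), (2, "c,d,e")), 2)
rdd3.flatMapValues(v => v.split(",")).collect
// => Array((1,a), (1,b), (2,c), (2,d), (2,e))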
