Spark Core: RDD Programming (Transformations)

Transformation operations

map[U: ClassTag](f: T => U): RDD[U]

: Applies a function to every element of the RDD and returns a new RDD.

scala> var source=sc.parallelize(1 to 10)
source: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[5] at parallelize at <console>:24

scala> source.collect
res9: Array[Int] = Array(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)

scala> val maprdd=source.map(_*2)
maprdd: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[6] at map at <console>:26

scala> maprdd.collect()
res10: Array[Int] = Array(2, 4, 6, 8, 10, 12, 14, 16, 18, 20)

filter(f: T => Boolean): RDD[T]

: Returns a new RDD containing only the elements for which the given boolean predicate evaluates to true.


scala> var sourceFilter=sc.parallelize(Array("xiaoming","xiaohong","xiaozhang","xiaowang"))
sourceFilter: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[7] at parallelize at <console>:24

scala> val filter=sourceFilter.filter(name=>name.contains("ming"))
filter: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[8] at filter at <console>:26

scala> sourceFilter.collect
res11: Array[String] = Array(xiaoming, xiaohong, xiaozhang, xiaowang)

scala> filter.collect
res12: Array[String] = Array(xiaoming)

flatMap[U: ClassTag](f: T => TraversableOnce[U]): RDD[U]

: Applies a function to each element of the RDD, producing a collection for each element, and flattens all of those collections into a single RDD.

scala> val sourceFlat=sc.parallelize(1 to 5)
sourceFlat: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[9] at parallelize at <console>:24

scala> sourceFlat.flatMap(x=>(1 to x))
res13: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[10] at flatMap at <console>:27

scala> sourceFlat.collect()
res14: Array[Int] = Array(1, 2, 3, 4, 5)

scala> res13.collect()
res15: Array[Int] = Array(1, 1, 2, 1, 2, 3, 1, 2, 3, 4, 1, 2, 3, 4, 5)
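
A more typical use of flatMap is splitting lines of text into words. A minimal sketch (the sample strings are illustrative, not from the original post):

sc.parallelize(List("hello spark", "hi there")).flatMap(_.split(" ")).collect
// Array(hello, spark, hi, there): each line yields an array of words, and the arrays are flattened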

mapPartitions[U: ClassTag](f: Iterator[T] => Iterator[U], preservesPartitioning: Boolean = false): RDD[U]

: Applies a function to each partition of the RDD, running once per partition; the function must accept an Iterator and return an Iterator.


scala>  val rdd=sc.parallelize(List(("kpop","female"),("zorro","male"),("mobin","male"),("lucy","female")))
rdd: org.apache.spark.rdd.RDD[(String, String)] = ParallelCollectionRDD[0] at parallelize at <console>:24


scala> :paste
// Entering paste mode (ctrl-D to finish)

def partitionsFun(iter: Iterator[(String, String)]): Iterator[String] = {
  // collect the names (first tuple element) of all "female" records in this partition
  var woman = List[String]()
  while (iter.hasNext) {
    val next = iter.next()
    next match {
      case (_, "female") => woman = next._1 :: woman
      case _ =>
    }
  }
  woman.iterator
}

// Exiting paste mode, now interpreting.

partitionsFun: (iter: Iterator[(String, String)])Iterator[String]


scala> val result=rdd.mapPartitions(partitionsFun)
result: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[1] at mapPartitions at <console>:28

scala> result.collect
res0: Array[String] = Array(kpop, lucy)
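
mapPartitions is also handy when some setup should happen once per partition rather than once per element. A minimal sketch for spark-shell (sc is the shell's SparkContext; the date formatter stands in for any expensive-to-create resource, and the values are illustrative):

val ts = sc.parallelize(List(0L, 86400000L, 172800000L), 2)
val days = ts.mapPartitions { iter =>
  // the formatter is created once per partition, not once per element
  val fmt = new java.text.SimpleDateFormat("yyyy-MM-dd")
  iter.map(millis => fmt.format(new java.util.Date(millis)))
}
days.collect   // three date strings; exact values depend on the JVM time zone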

mapPartitionsWithIndex[U: ClassTag]( f: (Int, Iterator[T]) => Iterator[U], preservesPartitioning: Boolean = false): RDD[U]

: Applies a function to each partition of the RDD, running once per partition; the function receives the partition index and an Iterator over the partition's data, and must return an Iterator.

scala>  val rdd=sc.parallelize(List(("kpop","female"),("zorro","male"),("mobin","male"),("lucy","female")))
rdd: org.apache.spark.rdd.RDD[(String, String)] = ParallelCollectionRDD[0] at parallelize at <console>:24


scala> :paste
// Entering paste mode (ctrl-D to finish)

def partitionsFun(index: Int, iter: Iterator[(String, String)]): Iterator[String] = {
  // prefix each "female" name with the index of the partition it came from
  var woman = List[String]()
  while (iter.hasNext) {
    val next = iter.next()
    next match {
      case (_, "female") => woman = "[" + index + "]" + next._1 :: woman
      case _ =>
    }
  }
  woman.iterator
}

// Exiting paste mode, now interpreting.

partitionsFun: (index: Int, iter: Iterator[(String, String)])Iterator[String]

scala> val result=rdd.mapPartitionsWithIndex(partitionsFun)

scala> result.collect()
res3: Array[String] = Array([0]kpop, [1]lucy)

sample(withReplacement: Boolean, fraction: Double, seed: Long = Utils.random.nextLong): RDD[T]

: Returns a sample RDD containing roughly a fraction of the data, using seed as the random seed; withReplacement indicates whether to sample with replacement.

scala> var rdd= sc.parallelize(1 to 10)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[5] at parallelize at <console>:24

scala> rdd.collect
res6: Array[Int] = Array(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)


scala> val sample1=rdd.sample(true,0.4,2)
sample1: org.apache.spark.rdd.RDD[Int] = PartitionwiseSampledRDD[6] at sample at <console>:26

scala> sample1.collect
res7: Array[Int] = Array(1, 2, 2)

scala> val sample2=rdd.sample(false,0.2,3)
sample2: org.apache.spark.rdd.RDD[Int] = PartitionwiseSampledRDD[7] at sample at <console>:26

scala> sample2.collect
res8: Array[Int] = Array(1, 9)
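
Note that sample is a transformation and only approximates the requested fraction. When an exact number of elements is needed, the takeSample action can be used instead; a one-line sketch on the same rdd:

rdd.takeSample(false, 3)   // an action: returns an Array[Int] with exactly 3 elements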

union(other: RDD[T]): RDD[T]

: Merges the elements of the two RDDs into a new RDD; duplicates are kept.

scala> sc.parallelize(1 to 5)
res9: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[8] at parallelize at <console>:25

scala> sc.parallelize(5 to 10)
res10: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[9] at parallelize at <console>:25


scala> res9.union(res10)
res11: org.apache.spark.rdd.RDD[Int] = UnionRDD[10] at union at <console>:29

scala> res11.collect
res12: Array[Int] = Array(1, 2, 3, 4, 5, 5, 6, 7, 8, 9, 10)

intersection(other: RDD[T]): RDD[T]

: Returns a new RDD containing the intersection of the two RDDs.

scala> sc.parallelize(1 to 5)
res9: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[8] at parallelize at <console>:25

scala> sc.parallelize(5 to 10)
res10: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[9] at parallelize at <console>:25

scala> res9.intersection(res10)
res13: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[16] at intersection at <console>:29

scala> res13.collect
res14: Array[Int] = Array(5)

distinct(): RDD[T]

: Removes duplicate elements from the RDD and returns a new RDD.

scala> sc.parallelize(Array(1,2,3,4,2,3,4,1,2))
res16: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[17] at parallelize at <console>:25

scala> res16.collect
res17: Array[Int] = Array(1, 2, 3, 4, 2, 3, 4, 1, 2)

scala> res16.distinct
res18: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[20] at distinct at <console>:27

scala> res18.collect
res19: Array[Int] = Array(4, 2, 1, 3)

partitionBy(partitioner: Partitioner): RDD[(K, V)]

: Repartitions the RDD using the given partitioner and returns a new RDD.

scala> var rdd=sc.parallelize(Array((1,"aa"),(2,"bb"),(3,"cc"),(4,"dd")),4)
rdd: org.apache.spark.rdd.RDD[(Int, String)] = ParallelCollectionRDD[21] at parallelize at <console>:24

scala> rdd.partitions.size
res20: Int = 4

scala> rdd.partitionBy(new org.apache.spark.HashPartitioner(2))
res21: org.apache.spark.rdd.RDD[(Int, String)] = ShuffledRDD[22] at partitionBy at <console>:27

scala> res21.partitions.size
res23: Int = 2
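
Any Partitioner implementation can be passed in. As a sketch, a RangePartitioner assigns keys to partitions by sorted key ranges rather than by hash (the variable name ranged is illustrative):

val ranged = rdd.partitionBy(new org.apache.spark.RangePartitioner(2, rdd))
ranged.partitions.size   // 2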

reduceByKey(func: (V, V) => V): RDD[(K, V)]

: Combines the values of tuples that share the same key using func, and returns a new RDD.

scala> val rdd=sc.parallelize(List(("f",1),("m",1),("f",2),("m",3)))
rdd: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[23] at parallelize at <console>:24

scala> val reduce=rdd.reduce
reduce   reduceByKey   reduceByKeyLocally

scala> val reduce=rdd.reduceByKey(_+_)
reduce: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[24] at reduceByKey at <console>:26

scala> reduce.collect
res24: Array[(String, Int)] = Array((f,3), (m,4))
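
reduceByKey combines naturally with flatMap and map; the classic word count is a minimal sketch (the input lines are illustrative):

val lines = sc.parallelize(List("hello spark", "hello scala", "hi spark"))
val counts = lines
  .flatMap(_.split(" "))   // split each line into words
  .map((_, 1))             // pair each word with a count of 1
  .reduceByKey(_ + _)      // sum the counts for each word
counts.collect             // e.g. Array((hi,1), (hello,2), (scala,1), (spark,2)); order may vary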

groupByKey(): RDD[(K, Iterable[V])]

: Groups the values that share the same key, producing an RDD of type (K, Iterable[V]).

scala> val arr=Array("a","b","c","a","b","a")
arr: Array[String] = Array(a, b, c, a, b, a)

scala> val rdd=sc.parallelize(arr).map((_,1))
rdd: org.apache.spark.rdd.RDD[(String, Int)] = MapPartitionsRDD[26] at map at <console>:26

scala> rdd.groupByKey()
res25: org.apache.spark.rdd.RDD[(String, Iterable[Int])] = ShuffledRDD[27] at groupByKey at <console>:29

scala> res25.collect()
res26: Array[(String, Iterable[Int])] = Array((b,CompactBuffer(1, 1)), (a,CompactBuffer(1, 1, 1)), (c,CompactBuffer(1)))
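
When the grouped values are only going to be aggregated, reduceByKey is usually preferred over groupByKey because it combines values inside each partition before the shuffle, moving less data. A minimal sketch computing the same per-key sum both ways on the rdd defined above:

val grouped = rdd.groupByKey().map { case (k, vs) => (k, vs.sum) }
val reduced = rdd.reduceByKey(_ + _)
grouped.collect   // e.g. Array((b,2), (a,3), (c,1))
reduced.collect   // same result, typically with less shuffle traffic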

combineByKey[C](createCombiner: V => C, mergeValue: (C, V) => C, mergeCombiners: (C, C) => C, numPartitions: Int): RDD[(K, C)]

: For each key, createCombiner initializes an accumulator from the first value and mergeValue folds further values within a partition into it; mergeCombiners then merges the per-partition accumulators across partitions.

scala> val scores=Array(("Fred",88),("Fred",95),("Fred",91),("Wilma",93),("Wilma",95),("Wilma",98))
scores: Array[(String, Int)] = Array((Fred,88), (Fred,95), (Fred,91), (Wilma,93), (Wilma,95), (Wilma,98))

scala> val input=sc.parallelize(scores)
input: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[28] at parallelize at <console>:26

scala> val combine=input.combineByKey
combineByKey   combineByKeyWithClassTag

scala> val combine=input.combineByKey(
     | (v)=>(v,1),
     | (acc:(Int,Int),v)=>(acc._1+v,acc._2+1),
     | (acc1:(Int,Int),acc2:(Int,Int))=>(acc1._1+acc2._1,acc1._2+acc2._2))
combine: org.apache.spark.rdd.RDD[(String, (Int, Int))] = ShuffledRDD[29] at combineByKey at <console>:28

scala> val result=combine.map{
     | case (key,value)=>(key,value._1/value._2.toDouble)}
result: org.apache.spark.rdd.RDD[(String, Double)] = MapPartitionsRDD[30] at map at <console>:30

scala> result.collect
res27: Array[(String, Double)] = Array((Wilma,95.33333333333333), (Fred,91.33333333333333))
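
The same per-key average can also be written with aggregateByKey (covered next), where the zero value (0, 0) takes over the role of createCombiner; a minimal sketch on the same input:

val avg = input
  .aggregateByKey((0, 0))(
    (acc, v)     => (acc._1 + v, acc._2 + 1),               // fold a value into a (sum, count) pair
    (acc1, acc2) => (acc1._1 + acc2._1, acc1._2 + acc2._2)  // merge partial (sum, count) pairs
  )
  .mapValues { case (sum, count) => sum / count.toDouble }
avg.collect   // e.g. Array((Wilma,95.33...), (Fred,91.33...))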

aggregateByKey[U: ClassTag](zeroValue: U, partitioner: Partitioner)(seqOp: (U, V) => U,combOp: (U, U) => U): RDD[(K, U)]

: Within each partition, seqOp folds the values for each key together with the zero value to produce a per-partition result; combOp then merges the per-partition results for each key.

scala> val rdd=sc.parallelize(List((1,3),(1,2),(1,4),(2,3),(3,6),(3,8)),1)
rdd: org.apache.spark.rdd.RDD[(Int, Int)] = ParallelCollectionRDD[31] at parallelize at <console>:24

scala> val agg=rdd.aggregateByKey(0)(math.max(_,_),_+_).collect()
agg: Array[(Int, Int)] = Array((1,4), (3,8), (2,3))

scala> val rdd2=sc.parallelize(List((1,3),(1,2),(1,4),(2,3),(3,6),(3,8)),3)
rdd2: org.apache.spark.rdd.RDD[(Int, Int)] = ParallelCollectionRDD[33] at parallelize at <console>:24

scala> val agg1=rdd2.aggregateByKey(0)(math.max(_,_),_+_).collect()
agg1: Array[(Int, Int)] = Array((3,8), (1,7), (2,3))

The two results differ because of the partitioning: with a single partition, seqOp (math.max) sees all of key 1's values at once, so key 1 yields max(3, 2, 4) = 4. With three partitions, key 1's values are spread over two partitions; seqOp produces a per-partition maximum for each (3 and 4), and combOp (_+_) then adds them, giving 7.

foldByKey(zeroValue: V, partitioner: Partitioner)(func: (V, V) => V): RDD[(K, V)]

: A simplified form of aggregateByKey in which seqOp and combOp are the same function.


scala> val rdd=sc.parallelize(List((1,3),(1,2),(1,4),(2,3),(3,6),(3,8)),3)
rdd: org.apache.spark.rdd.RDD[(Int, Int)] = ParallelCollectionRDD[0] at parallelize at <console>:24

scala> rdd.foldByKey(0)(_+_).collect
res0: Array[(Int, Int)] = Array((3,14), (1,9), (2,3))
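
With a neutral zero value such as 0 for addition, foldByKey behaves like reduceByKey; a one-line sketch on the same rdd:

rdd.reduceByKey(_ + _).collect   // e.g. Array((3,14), (1,9), (2,3)), the same sums as above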

sortByKey(ascending: Boolean = true, numPartitions: Int = self.partitions.length): RDD[(K, V)]

: Called on an RDD of (K, V) pairs where K implements the Ordered trait; returns a (K, V) RDD sorted by key.

scala> val rdd=sc.parallelize(Array((3,"aa"),(6,"cc"),(2,"bb"),(1,"dd")))
rdd: org.apache.spark.rdd.RDD[(Int, String)] = ParallelCollectionRDD[2] at parallelize at <console>:24

scala> rdd.sortByKey(true).collect
res1: Array[(Int, String)] = Array((1,dd), (2,bb), (3,aa), (6,cc))

scala> rdd.sortByKey(false).collect
res2: Array[(Int, String)] = Array((6,cc), (3,aa), (2,bb), (1,dd))

sortBy[K]( f: (T) => K, ascending: Boolean = true,numPartitions: Int = this.partitions.length) (implicit ord: Ordering[K], ctag: ClassTag[K]): RDD[T]

: Implemented on top of sortByKey, but sorts by the key produced by applying f to each element.

scala> val rdd=sc.parallelize(List(1,2,4,3))
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[9] at parallelize at <console>:24

scala> rdd.sortBy(x=>x).collect
res3: Array[Int] = Array(1, 2, 3, 4)

scala> rdd.sortBy(x=>x%3).collect
res4: Array[Int] = Array(3, 1, 4, 2)

Here the sort keys are x % 3: 3 maps to 0, 1 and 4 map to 1, and 2 maps to 2, which explains the order.

join[W](other: RDD[(K, W)], partitioner: Partitioner): RDD[(K, (V, W))]

: Called on RDDs of type (K, V) and (K, W); returns an RDD of (K, (V, W)) pairing up all elements that share the same key. Note that only keys present in both RDDs are returned.

scala> val rdd1=sc.parallelize(Array((1,"a"),(2,"b"),(3,"c")))
rdd1: org.apache.spark.rdd.RDD[(Int, String)] = ParallelCollectionRDD[20] at parallelize at <console>:24

scala> val rdd2=sc.parallelize(Array((1,1),(2,2),(4,4)))
rdd2: org.apache.spark.rdd.RDD[(Int, Int)] = ParallelCollectionRDD[21] at parallelize at <console>:24

scala> rdd1.join(rdd2).collect
res6: Array[(Int, (String, Int))] = Array((2,(b,2)), (1,(a,1)))
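
To keep keys that appear in only one of the two RDDs, the outer-join variants can be used; the missing side is wrapped in Option. A minimal sketch on the same rdd1 and rdd2 (output order may vary):

rdd1.leftOuterJoin(rdd2).collect
// e.g. Array((1,(a,Some(1))), (2,(b,Some(2))), (3,(c,None)))
rdd1.fullOuterJoin(rdd2).collect
// e.g. Array((4,(None,Some(4))), (1,(Some(a),Some(1))), (2,(Some(b),Some(2))), (3,(Some(c),None)))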

cogroup[W](other: RDD[(K, W)], partitioner: Partitioner): RDD[(K, (Iterable[V], Iterable[W]))]

: Called on RDDs of type (K, V) and (K, W); returns an RDD of type (K, (Iterable[V], Iterable[W])). Note that even if V and W have the same type, the values are not merged into one collection; each RDD's values are kept separate.


scala> val rdd1=sc.parallelize(Array((1,"a"),(2,"b"),(3,"c")))
rdd1: org.apache.spark.rdd.RDD[(Int, String)] = ParallelCollectionRDD[20] at parallelize at <console>:24

scala> val rdd2=sc.parallelize(Array((1,1),(2,2),(4,4)))
rdd2: org.apache.spark.rdd.RDD[(Int, Int)] = ParallelCollectionRDD[21] at parallelize at <console>:24

scala> rdd1.cogroup(rdd2).collect
res7: Array[(Int, (Iterable[String], Iterable[Int]))] = Array((4,(CompactBuffer(),CompactBuffer(4))), (2,(CompactBuffer(b),CompactBuffer(2))), (1,(CompactBuffer(a),CompactBuffer(1))), (3,(CompactBuffer(c),CompactBuffer())))

cartesian[U: ClassTag](other: RDD[U]): RDD[(T, U)]

: Computes the Cartesian product of the two RDDs, returning an RDD of pairs.

scala> val rdd1=sc.parallelize(1 to 3)
rdd1: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[30] at parallelize at <console>:24

scala> val rdd2=sc.parallelize(2 to 5)
rdd2: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[31] at parallelize at <console>:24

scala> rdd1.cartesian(rdd2).collect
res8: Array[(Int, Int)] = Array((1,2), (1,3), (1,4), (1,5), (2,2), (2,3), (3,2), (3,3), (2,4), (2,5), (3,4), (3,5))

pipe(command: String): RDD[String]

: For each partition, runs an external script (for example a Perl or shell script) and returns an RDD of the lines it prints. Note that if the script lives on the local file system, it must be present (and executable) on every worker node.
The shell script pipe.sh:

#! /bin/sh
echo "AA"
while read LINE;do
echo ">>>"${LINE}
done

scala> val rdd =sc.parallelize(List("hi","hello","how","are","you"),1)
rdd: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[0] at parallelize at <console>:24

scala> rdd.pipe("/root/pipe.sh").collect
res1: Array[String] = Array(AA, >>>hi, >>>hello, >>>how, >>>are, >>>you)

scala> val rdd =sc.parallelize(List("hi","hello","how","are","you"),2)
rdd: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[3] at parallelize at <console>:24

scala> rdd.pipe("/root/pipe.sh").collect
res2: Array[String] = Array(AA, >>>hi, >>>hello, AA, >>>how, >>>are, >>>you)

coalesce(numPartitions: Int, shuffle: Boolean = false, partitionCoalescer: Option[PartitionCoalescer] = Option.empty) (implicit ord: Ordering[T] = null) : RDD[T]

: Reduces the number of partitions; useful after filtering a large dataset down to a small one, to improve execution efficiency.

scala> val rdd=sc.parallelize(1 to 16 ,4)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[7] at parallelize at <console>:24

scala> val coalesce=rdd.coalesce(3)
coalesce: org.apache.spark.rdd.RDD[Int] = CoalescedRDD[8] at coalesce at <console>:26

scala> coalesce.partitions.size
res8: Int = 3
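
Without a shuffle, coalesce can only reduce the partition count; asking for more partitions silently keeps the current number unless shuffle = true is passed. A minimal sketch on the same rdd:

rdd.coalesce(8).partitions.size                    // still 4: no shuffle, so the count cannot grow
rdd.coalesce(8, shuffle = true).partitions.size    // 8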

repartition(numPartitions: Int)(implicit ord: Ordering[T] = null): RDD[T]

: Reshuffles all data across the network into the given number of partitions; a heavyweight operation. Internally it is simply coalesce with shuffle = true.

scala> val rdd=sc.parallelize(1 to 16 ,4)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[9] at parallelize at <console>:24

scala> val coalesce=rdd.repartition(2)
coalesce: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[13] at repartition at <console>:26

scala> coalesce.partitions.size
res9: Int = 2

scala> val coalesce=rdd.repartition(4)
coalesce: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[17] at repartition at <console>:26

scala> coalesce.partitions.size
res10: Int = 4

glom(): RDD[Array[T]]

: Turns each partition into an array, producing a new RDD of type RDD[Array[T]].

scala> val rdd=sc.parallelize(1 to 16 ,4)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[19] at parallelize at <console>:24

scala> rdd.glom().collect
res11: Array[Array[Int]] = Array(Array(1, 2, 3, 4), Array(5, 6, 7, 8), Array(9, 10, 11, 12), Array(13, 14, 15, 16))
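
glom makes per-partition computations easy to express; for example, given the partitions shown above, the maximum inside each partition:

rdd.glom().map(_.max).collect   // Array(4, 8, 12, 16)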

mapValues[U](f: V => U): RDD[(K, U)]

: Applies a function to the value of each (k, v) pair, returning a new RDD.

scala> val rdd=sc.parallelize(Array((1,"a"),(1,"d"),(2,"b"),(3,"c")))
rdd: org.apache.spark.rdd.RDD[(Int, String)] = ParallelCollectionRDD[21] at parallelize at <console>:24

scala> rdd.mapValues(_+"|||").collect
res12: Array[(Int, String)] = Array((1,a|||), (1,d|||), (2,b|||), (3,c|||))

subtract(other: RDD[T]): RDD[T]

: A set-difference operation: removes from this RDD every element that also appears in the other RDD, keeping only the elements unique to this RDD.

scala> val rdd=sc.parallelize(3 to 8)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[23] at parallelize at <console>:24

scala> val rdd2=sc.parallelize(1 to 5)
rdd2: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[24] at parallelize at <console>:24

scala> rdd.subtract(rdd2).collect
res13: Array[Int] = Array(6, 8, 7)

Reposted from blog.csdn.net/drl_blogs/article/details/92564012