[Spark] Spark Part 4: A First Look at the Spark RDD API (Part 2)

RDD Transformations

  • join
  • union
  • groupByKey

RDD Actions

  • reduce
  • lookup

join, union, and groupByKey are transformation APIs of the RDD, while reduce and lookup are action APIs.
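Transformations such as map only describe how a new RDD is derived and are evaluated lazily, while actions submit a job and return a result to the driver. A minimal sketch, assuming the spark-shell's built-in SparkContext sc:

val nums   = sc.parallelize(List(1, 2, 3, 4))
val scaled = nums.map(_ * 2)  // transformation: nothing is executed yet
scaled.count()                // action: submits a job and returns 4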

Union

union merges the data of two RDDs and returns a new RDD that contains the elements of both.

scala> var rdd1 = sc.parallelize(List("MQ", "Zookeeper"));
rdd1: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[27] at parallelize at <console>:12

scala> var rdd2 = sc.parallelize(List("Redis", "MongoDB"));
rdd2: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[28] at parallelize at <console>:12

scala> val result = rdd1 union rdd2
result: org.apache.spark.rdd.RDD[String] = UnionRDD[30] at union at <console>:16

scala> result.collect

/// Result
res15: Array[String] = Array(MQ, Zookeeper, Redis, MongoDB)

scala> result.count

/// Result (all four elements are counted)
res17: Long = 4
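Note that, unlike UNION in SQL, the union transformation keeps duplicate elements; distinct can be chained afterwards if deduplication is needed. A minimal sketch reusing rdd1 from above:

(rdd1 union rdd1).collect           // Array(MQ, Zookeeper, MQ, Zookeeper): duplicates are kept
(rdd1 union rdd1).distinct.collect  // Array(MQ, Zookeeper): deduplicated (ordering may vary)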

Join

As shown below, join connects the data of two RDDs by key, similar to a join in SQL.

scala> val rdd1 = sc.parallelize(List(('a',1),('a',2),('b',3),('c',5)));
rdd1: org.apache.spark.rdd.RDD[(Char, Int)] = ParallelCollectionRDD[31] at parallelize at <console>:12

scala> val rdd2 = sc.parallelize(List(('a',6),('b',4),('b',9),('c',1),('d',2)));
rdd2: org.apache.spark.rdd.RDD[(Char, Int)] = ParallelCollectionRDD[32] at parallelize at <console>:12

scala> var result = rdd1 join rdd2
result: org.apache.spark.rdd.RDD[(Char, (Int, Int))] = FlatMappedValuesRDD[35] at join at <console>:16


scala> result.collect
/// Result
res18: Array[(Char, (Int, Int))] = Array((a,(1,6)), (a,(2,6)), (b,(3,4)), (b,(3,9)), (c,(5,1)))
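Like an inner join in SQL, join only keeps keys that appear in both RDDs, which is why ('d',2) from rdd2 does not appear in the result. For outer-join semantics, leftOuterJoin or rightOuterJoin can be used instead; a minimal sketch:

rdd2.leftOuterJoin(rdd1)  // RDD[(Char, (Int, Option[Int]))]: key 'd' is kept and paired with None,
                          // because it has no matching entry in rdd1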

groupByKey

1. Prepare the data in HDFS as shown below:

[hadoop@hadoop spark-1.2.0-bin-hadoop2.4]$ hdfs dfs -cat /users/hadoop/wordcount/word.txt
A B
B C
C D
D E
C F

2. Read the data above into an RDD

scala> val rdd = sc.textFile("hdfs://users/hadoop/wordcount/word.txt");

//// Result
rdd: org.apache.spark.rdd.RDD[String] = hdfs://users/hadoop/wordcount/word.txt MappedRDD[37] at textFile at <console>:12 

3. Use groupByKey (note that the file is read again here, this time with the full hdfs://host:port path)

scala> val rdd = sc.textFile("hdfs://hadoop.master:9000/users/hadoop/wordcount/word.txt");
/// Result
rdd: org.apache.spark.rdd.RDD[String] = hdfs://hadoop.master:9000/users/hadoop/wordcount/word.txt MappedRDD[1] at textFile at <console>:12


scala> val groupedWordCountRDD = rdd.flatMap(_.split(" ")).map((_,1)).groupByKey;
/// Result
groupedWordCountRDD: org.apache.spark.rdd.RDD[(String, Iterable[Int])] = ShuffledRDD[4] at groupByKey at <console>:14


scala> groupedWordCountRDD.collect
/// Result
res0: Array[(String, Iterable[Int])] = Array((B,CompactBuffer(1, 1)), (A,CompactBuffer(1)), ("",CompactBuffer(1)), (C,CompactBuffer(1, 1, 1)), (E,CompactBuffer(1)), (F,CompactBuffer(1)), (D,CompactBuffer(1, 1))) 

The result is an array whose elements have type (String, Iterable[Int]); this is analogous to the input key and input values of the Reduce phase in Hadoop MapReduce.
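To turn the grouped result into actual word counts, the values of each key can be summed, for example with mapValues. A minimal sketch building on groupedWordCountRDD:

val wordCounts = groupedWordCountRDD.mapValues(_.sum)  // sum the 1s collected for each word
wordCounts.collect                                     // e.g. (C,3), (B,2), (D,2), (A,1), ...

In practice, reduceByKey(_ + _) on the (word, 1) pairs produces the same counts more efficiently, because values are combined before the shuffle instead of being grouped first.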

Reduce

reduce is an action on RDDs; calling RDD.reduce triggers the submission and execution of a job.

scala> val rdd = sc.parallelize(List(1,3,58,11))
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[5] at parallelize at <console>:12

scala> rdd.reduce(_+_)


//// Result: the sum of the list (1 + 3 + 58 + 11 = 73)
res3: Int = 73

 

Question: why does calling reduce on the groupedWordCountRDD above fail?

scala> groupedWordCountRDD.reduce(_+_)

/// Result: a type mismatch error
<console>:17: error: type mismatch;
 found   : (String, Iterable[Int])
 required: String
              groupedWordCountRDD.reduce(_+_)
                                           ^
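The failure is a type error: reduce requires a function of type (T, T) => T, where T is the RDD's element type, here (String, Iterable[Int]), and + is not defined for such tuples. To reduce this RDD, its elements first have to be mapped to something that can be combined. A minimal sketch that sums all the counts:

groupedWordCountRDD.map(_._2.sum).reduce(_ + _)  // maps each (word, counts) pair to the sum of
                                                 // its counts, then adds them up: the total
                                                 // number of words in the file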

 

lookup

lookup is an action on pair RDDs that returns all values associated with a given key.

scala> val rdd = sc.parallelize(List(('a',1),('b',2), ('b',19),('c', 8),('a',100)));
rdd: org.apache.spark.rdd.RDD[(Char, Int)] = ParallelCollectionRDD[6] at parallelize at <console>:12

scala> rdd.lookup('b')

/// Result
res6: Seq[Int] = WrappedArray(2, 19)
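lookup scans every partition unless the RDD has a known partitioner; when a partitioner is set, only the partition that can contain the key is searched. A minimal sketch, assuming the rdd defined above:

import org.apache.spark.HashPartitioner
val partitioned = rdd.partitionBy(new HashPartitioner(2)).cache()
partitioned.lookup('b')  // only the partition that key 'b' hashes to is scanned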

 

toDebugString

scala> val rdd = sc.textFile("hdfs://hadoop.master:9000/users/hadoop/wordcount/word.txt");
/// Result
rdd: org.apache.spark.rdd.RDD[String] = hdfs://hadoop.master:9000/users/hadoop/wordcount/word.txt MappedRDD[1] at textFile at <console>:12

scala> rdd.toDebugString
/// Result:
15/01/02 06:53:21 INFO mapred.FileInputFormat: Total input paths to process : 1
res0: String = 
(1) hdfs://hadoop.master:9000/users/hadoop/wordcount/word.txt MappedRDD[1] at textFile at <console>:12 []
 |  hdfs://hadoop.master:9000/users/hadoop/wordcount/word.txt HadoopRDD[0] at textFile at <console>:12 []

 

toDebugString shows the lineage between RDDs, that is, how each RDD was derived from another. The example above shows that the text file on HDFS is first read into a HadoopRDD, and the MappedRDD is then derived from that HadoopRDD.

 

Another example:

scala> val groupedWordCountRDD = rdd.flatMap(_.split(" ")).map((_,1)).groupByKey;
groupedWordCountRDD: org.apache.spark.rdd.RDD[(String, Iterable[Int])] = ShuffledRDD[4] at groupByKey at <console>:14

scala> groupedWordCountRDD.toDebugString
res2: String = 
(1) ShuffledRDD[4] at groupByKey at <console>:14 []
 +-(1) MappedRDD[3] at map at <console>:14 []
    |  FlatMappedRDD[2] at flatMap at <console>:14 []
    |  hdfs://hadoop.master:9000/users/hadoop/wordcount/word.txt MappedRDD[1] at textFile at <console>:12 []
    |  hdfs://hadoop.master:9000/users/hadoop/wordcount/word.txt HadoopRDD[0] at textFile at <console>:12 []

Content referenced from the Spark实战高手之路 series


Reposted from bit1129.iteye.com/blog/2171811