Spark RDD (Chapter 2)

Apache Spark FAQ

How is Spark related to Apache Hadoop?

Spark is a fast, general-purpose processing engine compatible with Hadoop data. It can run on Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and newer workloads such as streaming, interactive queries, and machine learning.

Does my data need to fit in memory to use Spark?

No. Spark's operators spill data to disk when it does not fit in memory, allowing it to run well on data of any size. Likewise, cached datasets that do not fit in memory are either spilled to disk or recomputed on the fly when needed, as determined by the RDD's storage level.
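As a minimal sketch (assuming an existing SparkContext sc; the input path is a placeholder), the storage level is what controls this spill behavior:

import org.apache.spark.storage.StorageLevel

val rdd = sc.textFile("hdfs:///data/huge")     // placeholder path
rdd.persist(StorageLevel.MEMORY_AND_DISK)      // partitions that do not fit in memory spill to disk
rdd.count()                                    // the first action materializes and caches the data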

Do I need Hadoop to run Spark?

No, but if you run on a cluster, you will need some form of shared file system (for example, NFS mounted at the same path on every node). If you have that type of file system, you can simply deploy Spark in standalone mode.

Spark RDD

Overview

At a high level, every Spark application consists of a driver program that runs the user's main function and executes various parallel operations on a cluster.
The main abstraction Spark provides is the resilient distributed dataset (RDD), a collection of elements partitioned across the nodes of the cluster that can be operated on in parallel.
RDDs are created by starting with a file in the Hadoop file system (or any other Hadoop-supported file system) or an existing Scala collection in the driver program, and transforming it. Users can also ask Spark to persist an RDD in memory, allowing it to be reused efficiently across parallel operations. Finally, RDDs automatically recover from node failures.

Development Environment

Import the Maven dependencies

<!-- Spark RDD dependency -->
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.11</artifactId>
    <version>2.4.5</version>
</dependency>
<!-- HDFS integration -->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.9.2</version>
</dependency>

The plugins below go inside the <build><plugins> element of the pom.xml.

Scala compiler plugin

<!-- Scala compiler plugin -->
<plugin>
    <groupId>net.alchim31.maven</groupId>
    <artifactId>scala-maven-plugin</artifactId>
    <version>4.0.1</version>
    <executions>
        <execution>
            <id>scala-compile-first</id>
            <phase>process-resources</phase>
            <goals>
                <goal>add-source</goal>
                <goal>compile</goal>
            </goals>
        </execution>
    </executions>
</plugin>

Fat-jar packaging plugin

<!-- fat jar plugin -->
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>2.4.3</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <filters>
                    <filter>
                        <artifact>*:*</artifact>
                        <excludes>
                            <exclude>META-INF/*.SF</exclude>
                            <exclude>META-INF/*.DSA</exclude>
                            <exclude>META-INF/*.RSA</exclude>
                        </excludes>
                    </filter>
                </filters>
            </configuration>
        </execution>
    </executions>
</plugin>

JDK compiler version plugin (optional)

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>3.2</version>
    <configuration>
        <source>1.8</source>
        <target>1.8</target>
        <encoding>UTF-8</encoding>
    </configuration>
    <executions>
        <execution>
            <phase>compile</phase>
            <goals>
                <goal>compile</goal>
            </goals>
        </execution>
    </executions>
</plugin>

Writing the Driver

import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

object SparkWordCountApplication1 {
    // Driver
    def main(args: Array[String]): Unit = {
        //1. Create the SparkContext
        val conf = new SparkConf()
        .setMaster("spark://CentOS:7077")
        .setAppName("SparkWordCountApplication")
        val sc = new SparkContext(conf)
        //2. Create the RDD (more on this later)
        val linesRDD: RDD[String] = sc.textFile("hdfs:///demo/words")
        //3. RDD -> RDD transformations: lazy, executed in parallel (more on this later)
        var resultRDD:RDD[(String,Int)]=linesRDD.flatMap(line=> line.split("\\s+"))
        .map(word=>(word,1))
        .reduceByKey((v1,v2)=>v1+v2)
        //4. RDD -> Unit or a local collection (Array|List): an action triggers job execution
        val resutlArray: Array[(String, Int)] = resultRDD.collect()
        //operations on the local Scala collection are no longer related to Spark
        resutlArray.foreach(t=>println(t._1+"->"+t._2))
        //5. Stop the SparkContext
        sc.stop()
    }
}

Build the project with mvn package and upload the generated jar (the larger one; Spark itself does not need to be bundled) to the CentOS machine.

Submit the job with the spark-submit command:

[root@centos ~]# cd /usr/soft/spark-2.4.5/
[root@centos spark-2.4.5]# ./bin/spark-submit --master spark://centos:7077 --deploy-mode client --class com.baizhi.sparktest.SparkWordCountApplication1 --name wordcount --total-executor-cores 6 /root/spark_test01-1.0-SNAPSHOT.jar

Spark also supports local testing from inside IDEA:

import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}
object SparkWordCountApplication2 {
  // Driver
  def main(args: Array[String]): Unit = {
    //1. Create the SparkContext
    val conf = new SparkConf()
      .setMaster("local[6]")        //local[6]: use 6 cores; local[*] lets Spark pick the core count automatically
      .setAppName("SparkWordCountApplication")
    val sc = new SparkContext(conf)
    //reduce log output
    sc.setLogLevel("ERROR")
    //2. Create the RDD; an HDFS URI or a local file path can be used here
    val linesRDD: RDD[String] = sc.textFile("hdfs://centos:9000/demo/words")
    //3. RDD -> RDD transformations: lazy, executed in parallel (more on this later)
    var resultRDD:RDD[(String,Int)]=linesRDD.flatMap(line=> line.split("\\s+"))
      .map(word=>(word,1))
      .reduceByKey((v1,v2)=>v1+v2)
    //4. RDD -> Unit or a local collection (Array|List): an action triggers job execution
    val resutlArray: Array[(String, Int)] = resultRDD.collect()
    //operations on the local Scala collection are no longer related to Spark
    resutlArray.foreach(t=>println(t._1+"->"+t._2))
    //5. Stop the SparkContext
    sc.stop()
  }
}

A log4j.properties file needs to be added under the resources directory:

log4j.rootLogger = FATAL,stdout
log4j.appender.stdout = org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target = System.out
log4j.appender.stdout.layout = org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern = %p %d{yyyy-MM-dd HH:mm:ss} %c %m%n

Creating RDDs

Spark revolves around the concept of a resilient distributed dataset (RDD), which is a fault-tolerant collection of elements that can be operated on in parallel.

There are two ways to create RDDs:

① Parallelizing an existing Scala collection in the driver program

② Referencing a dataset in an external storage system (a shared file system, HDFS, HBase, or any data source offering a Hadoop InputFormat)

Parallelized Collections (for reference)

Parallelized collections are created by calling SparkContext's parallelize or makeRDD method on an existing collection (a Scala Seq) in the driver program. The elements of the collection are copied to form a distributed dataset that can be operated on in parallel. For example, here is how to create a parallelized collection holding the numbers 1 to 5:

scala> val data = Array(1, 2, 3, 4, 5)
data: Array[Int] = Array(1, 2, 3, 4, 5)
scala> val distData = sc.parallelize(data)     //  sc.makeRDD(data)
distData: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at parallelize at
<console>:26

Parallelized collections accept an optional partition-count parameter that sets the degree of parallelism; Spark runs one task per partition of the cluster.
When the user does not specify the number of partitions, the SparkContext chooses one automatically based on the resources allocated to the application. For example:

[root@CentOS spark-2.4.5]# ./bin/spark-shell --master spark://centos:7077 --total-executor-cores 6
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use
setLogLevel(newLevel).
Spark context Web UI available at http://CentOS:4040
Spark context available as 'sc' (master = spark://CentOS:7077, app id = app20200208013551-0006).
Spark session available as 'spark'.
Welcome to
 ____ __
 / __/__ ___ _____/ /__
 _\ \/ _ \/ _ `/ __/ '_/
 /___/ .__/\_,_/_/ /_/\_\ version 2.4.5
 /_/

Using Scala version 2.11.12 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_231)
Type in expressions to have them evaluated.
Type :help for more information.
scala>

Here the system automatically uses 6 partitions when parallelizing the collection. The user can also specify the number of partitions manually (typically 2 to 4 partitions per CPU core):

scala> val distData = sc.parallelize(data,10)
distData: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[1] at parallelize at
<console>:26
scala> distData.getNumPartitions
res1: Int = 10

External Datasets

Spark can create distributed datasets from any storage source supported by Hadoop, including your local file system, HDFS, HBase, Amazon S3, relational databases (MySQL), and more.

Local file system
scala> sc.textFile("file:///root/t_word").collect
res6: Array[String] = Array(this is a demo, hello spark, "good good study ", "day day
up ", come on baby)
Reading from HDFS

textFile

Converts a file into an RDD[String]; each line of the file becomes one element of the RDD collection.

scala> sc.textFile("hdfs:///demo/words/t_word").collect
res7: Array[String] = Array(this is a demo, hello spark, "good good study ", "day day
up ", come on baby)

This method also accepts an optional partition-count parameter, but the number of partitions must be >= the number of file system blocks, so when in doubt it can simply be omitted.
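For example, a minimal sketch passing a minimum partition count (reusing the sample file above):

val linesRDD = sc.textFile("hdfs:///demo/words/t_word", 6) // the second argument is a lower bound on the partition count
linesRDD.getNumPartitions                                  // >= the number of file system blocks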

wholeTextFiles

Converts files into an RDD[(String, String)]; each tuple element of the RDD represents one file, where _1 is the file path and _2 is the file content.

scala> sc.wholeTextFiles("hdfs:///demo/words",1).collect
res26: Array[(String, String)] =
Array((hdfs://CentOS:9000/demo/words/t_word,"this is a demo
       hello spark
       good good study
       day day up
       come on baby
       "))
scala> sc.wholeTextFiles("hdfs:///demo/words",1).map(t=>t._2).flatMap(context=>context.split("\n")).collect
res25: Array[String] = Array(this is a demo, hello spark, "good good study ", "day day
up ", come on baby)
Reading from MySQL

newAPIHadoopRDD

<!-- MySQL driver dependency -->
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.38</version>
</dependency>
object SparkNewHadoopAPIMySQL {
    // Driver
    def main(args: Array[String]): Unit = {
        //1. Create the SparkContext
        val conf = new SparkConf()
        .setMaster("local[*]")
        .setAppName("SparkWordCountApplication")
        val sc = new SparkContext(conf)
        val hadoopConfig = new Configuration()
        DBConfiguration.configureDB(hadoopConfig, //database connection parameters
                                    "com.mysql.jdbc.Driver",
                                    "jdbc:mysql://localhost:3306/test",
                                    "root",
                                    "root"
                                   )
        //set the query-related properties
        hadoopConfig.set(DBConfiguration.INPUT_QUERY,"select id,name,password,birthDay from t_user")
        hadoopConfig.set(DBConfiguration.INPUT_COUNT_QUERY,"select count(id) from t_user")
        hadoopConfig.set(DBConfiguration.INPUT_CLASS_PROPERTY,"com.baizhi.createrdd.UserDBWritable")
        //read the external data source through the InputFormat provided by Hadoop
        val jdbcRDD:RDD[(LongWritable,UserDBWritable)] = sc.newAPIHadoopRDD(
            hadoopConfig, //hadoop configuration
            classOf[DBInputFormat[UserDBWritable]], //input format class
            classOf[LongWritable], //key type read by the Mapper
            classOf[UserDBWritable] //value type read by the Mapper
        )
        jdbcRDD.map(t=>(t._2.id,t._2.name,t._2.password,t._2.birthDay))
        .collect() //action operator: pulls the remote data to the Driver, generally only for small test datasets
        .foreach(t=>println(t))
        //jdbcRDD.foreach(t=>println(t)) //action operator, executed remotely: ok
        //jdbcRDD.collect().foreach(t=>println(t)) fails because UserDBWritable and LongWritable cannot be serialized: error
        //5. Stop the SparkContext
        sc.stop()
    }
}
class UserDBWritable extends DBWritable {
    var id:Int=_
    var name:String=_
    var password:String=_
    var birthDay:Date=_
    //used mainly by DBOutputFormat; since we only read here, this method can be left empty
    override def write(preparedStatement: PreparedStatement): Unit = {}
    //with DBInputFormat, copy the fields of the result set into the member properties
    override def readFields(resultSet: ResultSet): Unit = {
        id=resultSet.getInt("id")
        name=resultSet.getString("name")
        password=resultSet.getString("password")
        birthDay=resultSet.getDate("birthDay")
    }
}

When importing DBWritable, use: import org.apache.hadoop.mapred.lib.db.{DBInputFormat, DBWritable}

Reading from HBase

Add the dependencies

<!-- HBase dependencies; note the order -->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-auth</artifactId>
    <version>2.9.2</version>
</dependency>
<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
    <version>1.2.4</version>
</dependency>
<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-server</artifactId>
    <version>1.2.4</version>
</dependency>
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hbase.HConstants
import org.apache.hadoop.hbase.client.{Result, Scan}
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.{TableInputFormat, TableMapReduceUtil}
import org.apache.hadoop.hbase.protobuf.ProtobufUtil
import org.apache.hadoop.hbase.util.{Base64, Bytes}
import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}
object SparkNewHadoopAPIHbase {
    // Driver
    def main(args: Array[String]): Unit = {
        //1. Create the SparkContext
        val conf = new SparkConf()
        .setMaster("local[*]")
        .setAppName("SparkWordCountApplication")
        val sc = new SparkContext(conf)
        val hadoopConf = new Configuration()
        hadoopConf.set(HConstants.ZOOKEEPER_QUORUM,"CentOS")//HBase connection parameter (ZooKeeper quorum)
        hadoopConf.set(TableInputFormat.INPUT_TABLE,"baizhi:t_user")
        val scan = new Scan() //build the scan (query)
        val pro = ProtobufUtil.toScan(scan)
        hadoopConf.set(TableInputFormat.SCAN,Base64.encodeBytes(pro.toByteArray))
        val hbaseRDD:RDD[(ImmutableBytesWritable,Result)] = sc.newAPIHadoopRDD(
            hadoopConf, //hadoop configuration
            classOf[TableInputFormat],//input format
            classOf[ImmutableBytesWritable], //Mapper key type
            classOf[Result]//Mapper value type
        )
        hbaseRDD.map(t=>{
            val rowKey = Bytes.toString(t._1.get())
            val result = t._2
            val name = Bytes.toString(result.getValue("cf1".getBytes(), "name".getBytes()))
            (rowKey,name)
        }).foreach(t=> println(t))
        //5. Stop the SparkContext
        sc.stop()
    }
}

Tip

Because hbase-1.2.4 has jar incompatibility issues, consider upgrading the client jars:

<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
    <version>2.2.0</version>
</dependency>
<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-server</artifactId>
    <version>2.2.0</version>
</dependency>
<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-mapreduce</artifactId>
    <version>2.2.0</version>
</dependency>

RDD Operations

RDDs support two types of operations:

transformations, which convert an existing RDD into a new RDD, and actions, which return a result to the Driver after execution finishes. All transformations in Spark are lazy: they do not execute immediately, they merely record the transformation logic to apply to the current RDD. The transformations are only actually computed when an action requires a result to be returned to the Driver program. This design allows Spark to run more efficiently.

By default, each transformed RDD may be recomputed every time an action is run on it. However, you can also keep an RDD in memory with the persist (or cache) method, in which case Spark keeps the elements on the cluster so the next query can access them much faster.

scala> var rdd1=sc.textFile("hdfs:///demo/words/t_word",1).map(line=>line.split(" ").length)
rdd1: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[117] at map at <console>:24
scala> rdd1.cache
res54: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[117] at map at <console>:24
scala> rdd1.reduce(_+_)
res55: Int = 15

rdd1.cache does not cache the data immediately; the data is cached when an action is executed on this RDD's lineage.

Spark also supports persisting RDDs on disk, or replicating them across multiple nodes. For example, calling persist(StorageLevel.DISK_ONLY_2) stores the RDD on disk with 2 replicas.
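A minimal sketch (on a fresh RDD, since a storage level can only be assigned once per RDD):

import org.apache.spark.storage.StorageLevel

val rdd2 = sc.textFile("hdfs:///demo/words/t_word").map(_.length)
rdd2.persist(StorageLevel.DISK_ONLY_2)   // store on disk only, replicated on 2 nodes
rdd2.reduce(_+_)                         // the first action materializes and persists the partitions
rdd2.unpersist()                         // release the storage when it is no longer needed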

Transformations

map

Transforms an RDD[U] into an RDD[T]. The user supplies a function func: U => T.

scala> var rdd:RDD[String]=sc.makeRDD(List("a","b","c","a"))
rdd: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[120] at makeRDD at
<console>:25
scala> val mapRDD:RDD[(String,Int)] = rdd.map(w => (w, 1))
mapRDD: org.apache.spark.rdd.RDD[(String, Int)] = MapPartitionsRDD[121] at map at
<console>:26
filter

Filters the elements of an RDD[U] and produces a new RDD[U]. The user supplies func: U => Boolean, and only the elements for which it returns true are kept.

scala> var rdd:RDD[Int]=sc.makeRDD(List(1,2,3,4,5))
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[122] at makeRDD at
<console>:25
scala> val mapRDD:RDD[Int]=rdd.filter(num=> num %2 == 0)
mapRDD: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[123] at filter at
<console>:26
scala> mapRDD.collect
res63: Array[Int] = Array(2, 4)
flatMap

Similar to map, it also transforms an RDD[U] into an RDD[T], but the user supplies a function func: U => Seq[T] (each input element can be mapped to zero or more output elements).

scala> var rdd:RDD[String]=sc.makeRDD(List("this is","good good"))
rdd: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[124] at makeRDD at
<console>:25
scala> var flatMapRDD:RDD[(String,Int)]=rdd.flatMap(line=> for(i<- line.split("\\s+")) yield (i,1))
flatMapRDD: org.apache.spark.rdd.RDD[(String, Int)] = MapPartitionsRDD[125] at flatMap at <console>:26
scala> var flatMapRDD:RDD[(String,Int)]=rdd.flatMap( line=>line.split("\\s+").map((_,1)))
flatMapRDD: org.apache.spark.rdd.RDD[(String, Int)] = MapPartitionsRDD[126] at flatMap at <console>:26
scala> flatMapRDD.collect
res64: Array[(String, Int)] = Array((this,1), (is,1), (good,1), (good,1))
mapPartitions

Similar to map, but the input to the function is the full data of one partition, so the user supplies a per-partition transformation:

func: Iterator[U] => Iterator[T]

scala> var rdd:RDD[Int]=sc.makeRDD(List(1,2,3,4,5))
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[128] at makeRDD at
<console>:25
scala> var mapPartitionsRDD=rdd.mapPartitions(values => values.map(n=>(n,n%2==0)))
mapPartitionsRDD: org.apache.spark.rdd.RDD[(Int, Boolean)] = MapPartitionsRDD[129] at
mapPartitions at <console>:26
scala> mapPartitionsRDD.collect
res70: Array[(Int, Boolean)] = Array((1,false), (2,true), (3,false), (4,true),
(5,false))

It converts an RDD of type T into an RDD of type U and lets the caller decide whether the original partitioning is preserved (the preservesPartitioning flag).

The function is invoked once per partition and operates on all the values of that partition.
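A minimal sketch of the preservesPartitioning flag (the sample key-value data is made up; the flag only matters when the upstream RDD already has a partitioner):

import org.apache.spark.HashPartitioner

val kv = sc.makeRDD(List(("a", 1), ("b", 2), ("a", 3))).partitionBy(new HashPartitioner(2))
val scaled = kv.mapPartitions(
  iter => iter.map { case (k, v) => (k, v * 10) },   // keys are unchanged
  preservesPartitioning = true                       // so the HashPartitioner can be kept
)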

mapPartitionsWithIndex

Similar to mapPartitions, but the function also receives the index of the partition, so func: (Int, Iterator[U]) => Iterator[T].

scala> var rdd:RDD[Int]=sc.makeRDD(List(1,2,3,4,5,6),2)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[139] at makeRDD at
<console>:25
scala> rdd.mapPartitionsWithIndex((p,values)=>values.map(t=>(t,p)))
res14: org.apache.spark.rdd.RDD[(Int, Int)] = MapPartitionsRDD[18] at mapPartitionsWithIndex at <console>:27
scala> rdd.mapPartitionsWithIndex((p,values)=>values.map(t=>(t,p))).collect
res15: Array[(Int, Int)] = Array((1,0), (2,0), (3,0), (4,1), (5,1), (6,1))
sample(withReplacement, fraction, seed) (for reference)

Samples data from the RDD. withReplacement controls whether sampling with replacement is allowed, fraction controls the approximate proportion to sample (between 0 and 1), and seed controls the random numbers generated during sampling.

scala> var rdd:RDD[Int]=sc.makeRDD(List(1,2,3,4,5,6))
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[150] at makeRDD at
<console>:25
scala> rdd.sample(false,0.5d,1L).collect
res91: Array[Int] = Array(1, 5, 6)

With the same fraction and seed, the sampling result is the same every time.

union( otherDataset )

Merges the elements of two RDDs of the same element type.

scala> var rdd:RDD[Int]=sc.makeRDD(List(1,2,3,4,5,6))
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[154] at makeRDD at
<console>:25
scala> var rdd2:RDD[Int]=sc.makeRDD(List(6,7))
rdd2: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[155] at makeRDD at
<console>:25
scala> rdd.union(rdd2).collect
res95: Array[Int] = Array(1, 2, 3, 4, 5, 6, 6, 7)
intersection( otherDataset )

Computes the intersection of two RDDs of the same element type.

scala> var rdd:RDD[Int]=sc.makeRDD(List(1,2,3,4,5,6))
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[154] at makeRDD at
<console>:25
scala> var rdd2:RDD[Int]=sc.makeRDD(List(6,7))
rdd2: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[155] at makeRDD at
<console>:25
scala> rdd.intersection(rdd2).collect
res100: Array[Int] = Array(6)
distinct([numPartitions])

Removes duplicate elements from the RDD. numPartitions is an optional parameter for changing the RDD's partition count; it is typically passed to reduce the number of partitions when deduplication shrinks the dataset dramatically.

scala> var rdd:RDD[Int]=sc.makeRDD(List(1,2,3,4,5,6,5))
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[154] at makeRDD at
<console>:25
scala> rdd.distinct(3).collect
res106: Array[Int] = Array(6, 3, 4, 1, 5, 2)
join

When called on RDD[(K,V)] and RDD[(K,W)], returns a new RDD[(K,(V,W))] (an inner join by default). leftOuterJoin, rightOuterJoin, and fullOuterJoin are also supported.

scala> var userRDD:RDD[(Int,String)]=sc.makeRDD(List((1,"zhangsan"),(2,"lisi")))
userRDD: org.apache.spark.rdd.RDD[(Int, String)] = ParallelCollectionRDD[204] at
makeRDD at <console>:25
scala> case class OrderItem(name:String,price:Double,count:Int)
defined class OrderItem
scala> var orderItemRDD:RDD[(Int,OrderItem)]=sc.makeRDD(List((1,OrderItem("apple",4.5,2))))
orderItemRDD: org.apache.spark.rdd.RDD[(Int, OrderItem)] = ParallelCollectionRDD[206]
at makeRDD at <console>:27
scala> userRDD.join(orderItemRDD).collect
res107: Array[(Int, (String, OrderItem))] = Array((1,(zhangsan,OrderItem(apple,4.5,2))))
scala> userRDD.leftOuterJoin(orderItemRDD).collect
res9: Array[(Int, (String, Option[OrderItem]))] = Array((1,(zhangsan,Some(OrderItem(apple,4.5,2)))), (2,(lisi,None)))
cogroup (for reference)

When called on datasets of type (K,V) and (K,W), returns a dataset of (K, (Iterable[V], Iterable[W])) tuples. This operation is also called groupWith.

scala> var userRDD:RDD[(Int,String)]=sc.makeRDD(List((1,"zhangsan"),(2,"lisi")))
userRDD: org.apache.spark.rdd.RDD[(Int, String)] = ParallelCollectionRDD[204] atmakeRDD at <console>:25
scala> var orderItemRDD:RDD[(Int,OrderItem)]=sc.makeRDD(List((1,OrderItem("apple",4.5,2)),(1,OrderItem("pear",1.5,2))))
orderItemRDD: org.apache.spark.rdd.RDD[(Int, OrderItem)] = ParallelCollectionRDD[215]at makeRDD at <console>:27
scala> userRDD.cogroup(orderItemRDD).collect
res110: Array[(Int, (Iterable[String], Iterable[OrderItem]))] = Array((1,
(CompactBuffer(zhangsan),CompactBuffer(OrderItem(apple,4.5,2),
OrderItem(pear,1.5,2)))), (2,(CompactBuffer(lisi),CompactBuffer())))
scala> userRDD.groupWith(orderItemRDD).collect
res119: Array[(Int, (Iterable[String], Iterable[OrderItem]))] = Array((1,
(CompactBuffer(zhangsan),CompactBuffer(OrderItem(apple,4.5,2),
OrderItem(pear,1.5,2)))), (2,(CompactBuffer(lisi),CompactBuffer())))
cartesian (for reference)

Computes the Cartesian product of two RDDs.

scala> var rdd1:RDD[Int]=sc.makeRDD(List(1,2,4))
rdd1: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[238] at makeRDD at<console>:25
scala> var rdd2:RDD[String]=sc.makeRDD(List("a","b","c"))
rdd2: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[239] at makeRDD at<console>:25
scala> rdd1.cartesian(rdd2).collect
res120: Array[(Int, String)] = Array((1,a), (1,b), (1,c), (2,a), (2,b), (2,c), (4,a),
(4,b), (4,c))
coalesce

After a large amount of data has been filtered out, coalesce can be used to shrink the number of partitions of the RDD (it can only decrease the partition count, not increase it).

scala> var rdd1:RDD[Int]=sc.makeRDD(0 to 100)
rdd1: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[252] at makeRDD at<console>:25
scala> rdd1.getNumPartitions
res129: Int = 6
scala> rdd1.filter(n=> n%2 == 0).coalesce(3).getNumPartitions
res127: Int = 3
scala> rdd1.filter(n=> n%2 == 0).coalesce(12).getNumPartitions
res128: Int = 6
repartition

Similar to coalesce, but this operator can either increase or decrease the number of partitions of the RDD.

scala> var rdd1:RDD[Int]=sc.makeRDD(0 to 100)
rdd1: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[252] at makeRDD at<console>:25
scala> rdd1.getNumPartitions
res129: Int = 6
scala> rdd1.filter(n=> n%2 == 0).repartition(12).getNumPartitions
res130: Int = 12
scala> rdd1.filter(n=> n%2 == 0).repartition(3).getNumPartitions
res131: Int = 3

repartitionAndSortWithinPartitions (for reference)

Repartitions the RDD according to a user-supplied partitioner and then sorts the records within each partition by their keys.

scala> case class User(name:String,deptNo:Int)
defined class User
import org.apache.spark.Partitioner

var empRDD:RDD[User]= sc.parallelize(List(User("张三",1),User("lisi",2),User("wangwu",1)))
empRDD.map(t => (t.deptNo, t.name)).repartitionAndSortWithinPartitions(new Partitioner {
  override def numPartitions: Int = 4
  override def getPartition(key: Any): Int = {
    (key.hashCode() & Integer.MAX_VALUE) % numPartitions // parenthesized: % binds tighter than &
  }
}).mapPartitionsWithIndex((p, values) => {
  println(p + "\t" + values.mkString("|"))
  values
}).collect()

Exercise

1. If two very large files need to be joined, what optimization strategy could be used?

One approach: pre-partition both large files by id (splitting them into many smaller, co-partitioned pieces stored in a directory), then join the corresponding pieces with each other.
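A minimal sketch of this idea using Spark partitioners (the HDFS paths and the key extraction are placeholders; the point is that both sides share the same partitioner, so the join itself does not need to re-shuffle the full datasets):

import org.apache.spark.HashPartitioner

val partitioner = new HashPartitioner(100)
// key both large datasets by id and partition them with the same partitioner
val bigA = sc.textFile("hdfs:///demo/orders").map(line => (line.split(" ")(0), line)).partitionBy(partitioner)
val bigB = sc.textFile("hdfs:///demo/orderItems").map(line => (line.split(" ")(0), line)).partitionBy(partitioner)
// co-partitioned inputs can be joined partition by partition
val joined = bigA.join(bigB)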

xxxByKey operators

Spark provides a family of xxxByKey operators specifically for computing over RDD[(K,V)] datasets.

groupByKey([ numPartitions ])

Similar to the MapReduce computation model: converts RDD[(K, V)] into RDD[(K, Iterable[V])].

scala> var lines=sc.parallelize(List("this is good good"))
lines: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[0] at parallelize at
<console>:24
scala> lines.flatMap(_.split("\\s+")).map((_,1)).groupByKey.collect
res3: Array[(String, Iterable[Int])] = Array((this,CompactBuffer(1)), (is,CompactBuffer(1)), (good,CompactBuffer(1, 1)))
groupBy(f: (K,V) => T)

Groups the elements by the key returned by the given function; the result is a tuple of (the grouping key, an Iterable of the original tuples).

scala> var lines=sc.parallelize(List("this is good good"))
scala> lines.flatMap(_.split("\\s+")).map((_,1)).groupBy(t=>t._1)
res5: org.apache.spark.rdd.RDD[(String, Iterable[(String, Int)])] = ShuffledRDD[18] at
groupBy at <console>:26
scala> lines.flatMap(_.split("\\s+")).map((_,1)).groupBy(t=>t._1).map(t=>
(t._1,t._2.size)).collect
res6: Array[(String, Int)] = Array((this,1), (is,1), (good,2))
reduceByKey( func , [ numPartitions ])

When called on a dataset of (K, V) pairs, returns a dataset of (K, V) pairs in which the values for each key are aggregated using the given reduce function func.

scala> var lines=sc.parallelize(List("this is good good"))
lines: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[0] at parallelize at<console>:24
scala> lines.flatMap(_.split("\\s+")).map((_,1)).reduceByKey(_+_).collect
res8: Array[(String, Int)] = Array((this,1), (is,1), (good,2))
aggregateByKey( zeroValue )( seqOp , combOp , [ numPartitions ])

Aggregates the values of each key starting from zeroValue: seqOp folds a value into the accumulator within a partition, and combOp merges accumulators across partitions.
scala> var lines=sc.parallelize(List("this is good good"))
lines: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[0] at parallelize at<console>:24
scala> lines.flatMap(_.split("\\s+")).map((_,1)).aggregateByKey(0)(_+_,_+_).collect
res9: Array[(String, Int)] = Array((this,1), (is,1), (good,2))
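A minimal sketch where seqOp and combOp differ (computing a per-key average; the sample data is made up):

// (sum, count) accumulator per key: seqOp folds a value into the partition-local
// accumulator, combOp merges accumulators coming from different partitions
val scores = sc.makeRDD(List(("a", 90), ("a", 70), ("b", 80)))
val avg = scores
  .aggregateByKey((0, 0))(
    (acc, v) => (acc._1 + v, acc._2 + 1),   // seqOp: within a partition
    (a, b)   => (a._1 + b._1, a._2 + b._2)  // combOp: across partitions
  )
  .mapValues { case (sum, count) => sum.toDouble / count }
avg.collect()   // Array((a,80.0), (b,80.0)), ordering may vary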
sortByKey([ ascending ], [ numPartitions ])

sortByKey(true|false): sorts by key, ascending (true) or descending (false).

scala> var lines=sc.parallelize(List("this is good good"))
lines: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[0] at parallelize at<console>:24
scala> lines.flatMap(_.split("\\s+")).map((_,1)).aggregateByKey(0)(_+_,_+_).sortByKey(true).collect
res13: Array[(String, Int)] = Array((good,2), (is,1), (this,1))
scala> lines.flatMap(_.split("\\s+")).map((_,1)).aggregateByKey(0)(_+_,_+_).sortByKey(false).collect
res14: Array[(String, Int)] = Array((this,1), (is,1), (good,2))
sortBy(T=>U,ascending,[ numPartitions ])

sortBy(key-extraction function, true|false): sorts by the value produced by the given function, ascending (true) or descending (false).

scala> var lines=sc.parallelize(List("this is good good"))
lines: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[0] at parallelize at<console>:24
scala> lines.flatMap(_.split("\\s+")).map((_,1)).aggregateByKey(0)(_+_,_+_).sortBy(_._2,false).collect
res18: Array[(String, Int)] = Array((good,2), (this,1), (is,1))
scala> lines.flatMap(_.split("\\s+")).map((_,1)).aggregateByKey(0)(_+_,_+_).sortBy(t=>t,false).collect
res19: Array[(String, Int)] = Array((this,1), (is,1), (good,2))

Actions

Every Spark job is triggered by exactly one action operator; without an action, nothing executes. Actions write the data of an RDD out to an external system or return it to the Driver program.

reduce(func )

This operator performs the computation on the remote executors and then returns the result to the Driver.

For example, counting the number of characters in a file:

scala> sc.textFile("file:///root/t_word").map(_.length).reduce(_+_)
res3: Int = 64
collect()

Transfers the data of a remote RDD to the Driver. collect is normally only used in test environments, or when the RDD holds very little data; otherwise the Driver may run out of memory because the data is too large.

scala> sc.textFile("file:///root/t_word").collect
res4: Array[String] = Array(this is a demo, hello spark, "good good study ", "day dayup ", come on baby)
foreach(func )

Runs the function func on each element of the dataset. This is usually done for side effects such as updating an accumulator or interacting with an external storage system.

The function runs on the executors where the RDD data lives, so no data needs to be transferred to the Driver.

scala> sc.textFile("file:///root/t_word").foreach(line=>println(line))
count()

Returns the number of elements in the RDD, similar to size or length on an array.

scala> sc.textFile("file:///root/t_word").count()
res7: Long = 5
first()|take( n )

Retrieves data from the RDD: first returns the first element; take(n) returns the first n elements.

scala> sc.textFile("file:///root/t_word").first
res9: String = this is a demo
scala> sc.textFile("file:///root/t_word").take(1)
res10: Array[String] = Array(this is a demo)
scala> sc.textFile("file:///root/t_word").take(2)
res11: Array[String] = Array(this is a demo, hello spark)
takeSample( withReplacement , num , [ seed ])

Randomly samples num elements from the RDD and returns them to the Driver program, which makes it quite different from the sample transformation.

Differences from the sample operator:

takeSample is an action operator that returns an Array; its second parameter is the number of elements to take, whereas sample takes a sampling fraction. Because takeSample ships the sampled data to the Driver, num must not be too large.

scala> sc.textFile("file:///root/t_word").takeSample(false,2)
res20: Array[String] = Array("good good study ", hello spark)
takeOrdered( n , [ordering] )

Returns the first n elements of the RDD; the user can supply a custom ordering.

scala> case class User(name:String,deptNo:Int,salary:Double)
defined class User
scala> var userRDD=sc.parallelize(List(User("zs",1,1000.0),User("ls",2,1500.0),User("ww",2,1000.0)))
userRDD: org.apache.spark.rdd.RDD[User] = ParallelCollectionRDD[51] at parallelize at<console>:26
scala> userRDD.takeOrdered
 def takeOrdered(num: Int)(implicit ord: Ordering[User]): Array[User]
scala> userRDD.takeOrdered(3)           //calling this directly fails: Scala only provides default Ordering instances for the basic primitive types
<console>:26: error: No implicit Ordering defined for User.
 userRDD.takeOrdered(3)
scala> implicit var userOrder=new Ordering[User]{
 | override def compare(x: User, y: User): Int = {
 | if(x.deptNo!=y.deptNo){
 | x.deptNo.compareTo(y.deptNo)
 | }else{
 | x.salary.compareTo(y.salary) * -1
 | }
 | }
 | }
userOrder: Ordering[User] = $anon$1@7066f4bc
scala> userRDD.takeOrdered(3)
res23: Array[User] = Array(User(zs,1,1000.0), User(ls,2,1500.0), User(ww,2,1000.0))
saveAsTextFile( path )

Spark calls toString on each element of the RDD and writes the elements to the file as lines of text.

scala> sc.textFile("file:///root/t_word").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).sortBy(_._1,true,1).map(t=>t._1+"\t"+t._2).saveAsTextFile("hdfs:///demo/results01")
saveAsSequenceFile( path )

This method can only be used on RDD[(K,V)], and both K and V must implement the Writable interface. Since we program in Scala, Spark already provides implicit conversions so that types such as Int, Double, and String are converted to Writable automatically.

scala> sc.textFile("file:///root/t_word").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).sortBy(_._1,true,1).saveAsSequenceFile("hdfs:///demo/results02")

Reading the binary (sequence) file back with Spark

scala> sc.sequenceFile[String,Int]("hdfs:///demo/results03").collect
res29: Array[(String, Int)] = Array((a,1), (baby,1), (come,1), (day,2), (demo,1),(good,2),(hello,1), (is,1), (on,1), (spark,1), (study,1), (this,1), (up,1))

Reading the binary file with Hadoop commands

[root@centos ~]# hdfs dfs -cat /demo/results02/*             //cat prints the SequenceFile as garbled binary content
SEQorg.apache.hadoop.io.Text org.apache.hadoop.io.IntWritable ... (binary content, not human readable)
[root@centos ~]# hdfs dfs -text /demo/results02/*            //text decodes the SequenceFile automatically
a       1
day     2
demo    1
good    2
is      1
study   1
this    1
up      1

Shared Variables

When a transformation operator uses a variable defined in the Driver, each compute node downloads a copy of that variable over the network before running the operator. If a compute node modifies its downloaded copy, the change is not visible to the variable defined in the Driver.

scala> var i:Int=0
i: Int = 0
scala> sc.textFile("file:///root/t_word").foreach(line=> i=i+1)
scala> print(i)
0

Each task (thread) on a compute node downloads its own copy of the Driver variable when it uses it.
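The reliable ways to obtain such a count are the count() action or an accumulator (covered below); a minimal sketch:

// instead of mutating a Driver-side Int inside foreach, use an accumulator
val acc = sc.longAccumulator("lineCount")
sc.textFile("file:///root/t_word").foreach(_ => acc.add(1))
println(acc.value)   // readable on the Driver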

Broadcast Variables

Problem:
When a very large dataset needs to be joined with a small dataset, can the join operator be used directly? If not, why not?

//100GB
var orderItems=List("001 apple 2 4.5","002 pear 1 2.0","001 ⽠⼦ 1 7.0")
//10MB
var users=List("001 zhangsan","002 lisi","003 王五")
var rdd1:RDD[(String,String)] =sc.makeRDD(orderItems).map(line=>(line.split(" ")(0),line))
var rdd2:RDD[(String,String)] =sc.makeRDD(users).map(line=>(line.split(" ")(0),line))
rdd1.join(rdd2).collect().foreach(println)

When performing the join, the system shuffles data, so 100 GB of data is transferred between the compute nodes to complete the join; both the network and memory cost of the join are therefore very high. Instead, the small dataset can be defined as a member variable in the Driver, and the join can be completed inside the map operation.

scala> var users=List("001 zhangsan","002 lisi","003 王五").map(line=>line.split(" ")).map(ts=>ts(0)->ts(1)).toMap
users: scala.collection.immutable.Map[String,String] = Map(001 -> zhangsan, 002 ->lisi, 003 -> 王五)
scala> var orderItems=List("001 apple 2 4.5","002 pear 1 2.0","001 ⽠⼦ 1 7.0")
orderItems: List[String] = List(001 apple 2 4.5, 002 pear 1 2.0, 001 ⽠⼦ 1 7.0)
scala> var rdd1:RDD[(String,String)] =sc.makeRDD(orderItems).map(line=>(line.split(" ")(0),line))
rdd1: org.apache.spark.rdd.RDD[(String, String)] = MapPartitionsRDD[89] at map at
<console>:32
scala> rdd1.map(t=> t._2+"\t"+users.get(t._1).getOrElse("未知")).collect()
res33: Array[String] = Array(001 apple 2 4.5 zhangsan, 002 pear 1 2.0 lisi,
001 ⽠⼦ 1 7.0 zhangsan)

The approach above has a problem, though: every task that runs the map operator downloads the users map from the Driver. Although the value is not large, the compute nodes download it over and over. To avoid this unnecessary copying of variables, Spark provides broadcast variables.

Before the job runs, Spark ships the broadcast variable to all compute nodes. Each node downloads it once before computing and caches it, so other tasks on the same node that use the variable do not need to download it again.

//100GB
var orderItems=List("001 apple 2 4.5","002 pear 1 2.0","001 ⽠⼦ 1 7.0")
//10MB: declare a Map-typed variable
var users:Map[String,String]=List("001 zhangsan","002 lisi","003 王五").map(line=>line.split(" ")).map(ts=>ts(0)->ts(1)).toMap
//declare the broadcast variable; read the broadcast value through its value property
val ub = sc.broadcast(users)                //declare the broadcast variable
var rdd1:RDD[(String,String)] =sc.makeRDD(orderItems).map(line=>(line.split(" ")(0),line))
rdd1.map(t=> t._2+"\t"+ub.value.get(t._1).getOrElse("未知")).collect().foreach(println)
Accumulators (counters)

Spark provides Accumulators, which are mainly used so that multiple nodes can operate on one shared variable. An Accumulator only supports adding, but it allows multiple tasks to operate on one variable in parallel. Tasks can only add to an Accumulator; they cannot read its value. Only the Driver program can read the value of an Accumulator.

scala> val accum = sc.longAccumulator("mycount")
accum: org.apache.spark.util.LongAccumulator = LongAccumulator(id: 1075, name:Some(mycount), value: 0)
scala> sc.parallelize(Array(1, 2, 3, 4),6).foreach(x => accum.add(x))
scala> accum.value
res36: Long = 10

Writing Data Out of Spark

Writing data to HDFS:

scala> sc.textFile("file:///root/t_word").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).sortBy(_._1,true,1).saveAsSequenceFile("hdfs:///demo/results03")

Because the saveAsXxx methods only write results to HDFS or the local file system, writing results to a third-party data store requires the foreach operator that Spark provides.

Writing out with foreach

Scenario 1: open and close a connection for every record. Write efficiency is very low (but it does run successfully).

sc.textFile("file:///root/t_word")
.flatMap(_.split(" "))
.map((_,1))
.reduceByKey(_+_)
.sortBy(_._1,true,3)
.foreach(tuple=>{ //database
    //1. create the connection
    //2. insert the record
    //3. close the connection
})

Scenario 2: incorrect approach, because a connection (pool) cannot be serialized (fails at runtime).

//1. define the Connection
var conn=... //defined in the Driver
sc.textFile("file:///root/t_word")
.flatMap(_.split(" "))
.map((_,1))
.reduceByKey(_+_)
.sortBy(_._1,true,3)
.foreach(tuple=>{ //database
    //2. insert the record
})
//3. close the connection

A connection represents state, so it cannot be serialized.

Scenario 3: one connection (pool) per partition? (Better, but still not optimal: one JVM may run several partitions, which means one JVM creates several connections and wastes resources.) How about a singleton object?

Create the connection inside a singleton object. Even if a compute node is assigned several partitions, the JVM singleton guarantees the object is created only once per JVM.

In a JVM, a singleton object is loaded only once.

sc.textFile("file:///root/t_word")
.flatMap(_.split(" "))
.map((_,1))
.reduceByKey(_+_)
.sortBy(_._1,true,3)
.foreachPartition(values=>{
    //create the connection
    //write the data of this partition
    //close the connection
})
val conf = new SparkConf()
.setMaster("local[*]")
.setAppName("SparkWordCountApplication")
val sc = new SparkContext(conf)
sc.textFile("hdfs://CentOS:9000/demo/words/")
.flatMap(_.split(" "))
.map((_,1))
.reduceByKey(_+_)
.sortBy(_._1,true,3)
.foreachPartition(values=>{
    HbaseSink.writeToHbase("baizhi:t_word",values.toList)
})
sc.stop()
package com.baizhi.sink
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hbase.{HConstants, TableName}
import org.apache.hadoop.hbase.client.{Connection, ConnectionFactory, Put}
import scala.collection.JavaConverters._
object HbaseSink {
    //connection parameters
    private var conn:Connection=createConnection()
    def createConnection(): Connection = {
        val hadoopConf = new Configuration()
        hadoopConf.set(HConstants.ZOOKEEPER_QUORUM,"CentOS")
        return ConnectionFactory.createConnection(hadoopConf)
    }
    def writeToHbase(tableName:String,values:List[(String,Int)]): Unit ={
        var tName:TableName=TableName.valueOf(tableName)
        val mutator = conn.getBufferedMutator(tName)
        var scalaList=values.map(t=>{
            val put = new Put(t._1.getBytes())
            put.addColumn("cf1".getBytes(),"count".getBytes(),(t._2+" ").getBytes())
            put
        })
        //batch write
        mutator.mutate(scalaList.asJava)
        mutator.flush()
        mutator.close()
    }
    //register a JVM shutdown hook; it is called when the JVM exits
    Runtime.getRuntime.addShutdownHook(new Thread(new Runnable {
        override def run(): Unit = {
            println("-----close----")
            conn.close()
        }
    }))
}

Reprinted from blog.csdn.net/origin_cx/article/details/104414760