Big Data Operations: Spark

Spark Questions:

  1. Deploy the Spark service component on the 先电 big data platform, open a Linux shell, start the spark-shell terminal, and submit the process information printed at startup as text in the answer box.
    [root@master ~]# spark-shell
    Setting default log level to "WARN".
    To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
    Spark context Web UI available at http://10.0.0.100:4040
    Spark context available as 'sc' (master = local[*], app id = local-1558619534984).
    Spark session available as 'spark'.
    Welcome to
          ____              __
         / __/__  ___ _____/ /__
        _\ \/ _ \/ _ `/ __/  '_/
       /___/ .__/\_,_/_/ /_/\_\   version 2.1.1.2.6.1.0-129
          /_/

    Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77)
    Type in expressions to have them evaluated.
    Type :help for more information.

    scala>

  2. After starting spark-shell, load the data "1,2,3,4,5,6,7,8,9,10" in Scala, find the numbers whose doubled values are divisible by 3, and view the RDD lineage with the toDebugString method.

    scala> val num = sc.parallelize(1 to 10)
    num: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at parallelize at <console>:24
    scala> val doublenum = num.map(_*2)
    doublenum: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[1] at map at <console>:26
    scala> val threenum = doublenum.filter(_%3==0)
    threenum: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[2] at filter at <console>:28
    scala> threenum.collect
    res1: Array[Int] = Array(6, 12, 18)
    scala> threenum.toDebugString
    res2: String =
    (4) MapPartitionsRDD[2] at filter at <console>:28 []
     |  MapPartitionsRDD[1] at map at <console>:26 []
     |  ParallelCollectionRDD[0] at parallelize at <console>:24 []

  3. After starting spark-shell, load the Key-Value data ("A",1),("B",2),("C",3),("A",4),("B",5),("C",4),("A",3),("A",9),("B",4),("D",5) in Scala, sort the data in ascending order by Key, and group it by Key.
    scala> val kv = sc.parallelize(List(("A",1),("B",2),("C",3),("A",4),("B",5),("C",4),("A",3),("A",9),("B",4),("D",5)))
    kv: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[3] at parallelize at <console>:24
    scala> kv.sortByKey().collect
    res3: Array[(String, Int)] = Array((A,1), (A,4), (A,3), (A,9), (B,2), (B,5), (B,4), (C,3), (C,4), (D,5))

    scala> kv.groupByKey().collect
    res4: Array[(String, Iterable[Int])] = Array((D,CompactBuffer(5)), (A,CompactBuffer(1, 4, 3, 9)), (B,CompactBuffer(2, 5, 4)), (C,CompactBuffer(3, 4)))
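
    If the grouped output itself should come back already in key order, the two transformations can also be chained on the same kv RDD; a minimal sketch (output omitted here):
    scala> kv.groupByKey().sortByKey().collect   // groups returned in ascending Key order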

  4. After starting spark-shell, load the Key-Value data ("A",1),("B",3),("C",5),("D",4),("B",7),("C",4),("E",5),("A",8),("B",4),("D",5) in Scala, sort the data in ascending order by Key, and sum the Values of identical Keys.
    scala> val kv2 = sc.parallelize(List(("A",1),("B",3),("C",5),("D",4), ("B",7), ("C",4), ("E",5), ("A",8), ("B",4), ("D",5)))
    kv2: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[8] at parallelize at <console>:24
    scala> kv2.sortByKey().collect
    res5: Array[(String, Int)] = Array((A,1), (A,8), (B,3), (B,7), (B,4), (C,5), (C,4), (D,4), (D,5), (E,5))
    scala> kv2.reduceByKey(_+_).collect
    res8: Array[(String, Int)] = Array((D,9), (A,9), (E,5), (B,14), (C,9))
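
    Here as well, sorting and summing can be combined so the totals come back in key order; a minimal sketch (the expected array simply reorders res8 above):
    scala> kv2.reduceByKey(_+_).sortByKey().collect   // Array((A,9), (B,14), (C,9), (D,9), (E,5))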

  5. After starting spark-shell, load the Key-Value data ("A",4),("A",2),("C",3),("A",4),("B",5),("C",3),("A",4) in Scala, remove duplicates with Key as the basis, and view the RDD lineage with the toDebugString method.
    scala> val kv3 = sc.parallelize(List(("A",4),("A",2),("C",3),("A",4),("B",5),("C",3),("A",4)))
    kv3: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[13] at parallelize at <console>:24
    scala> kv3.distinct.collect
    res9: Array[(String, Int)] = Array((A,4), (B,5), (A,2), (C,3))
    scala> kv3.toDebugString
    res10: String = (4) ParallelCollectionRDD[13] at parallelize at <console>:24 []
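
    Note that distinct removes duplicate (Key, Value) pairs, which is why both (A,4) and (A,2) survive, and that calling toDebugString on the base RDD shows only a single stage. If the intent is one entry per Key and a multi-stage lineage, a sketch along these lines would cover it (which value is kept per Key is arbitrary here and is an assumption, not part of the original answer):
    scala> kv3.reduceByKey((a, b) => a).collect   // one pair per Key
    scala> kv3.distinct.toDebugString             // lineage of the distinct RDD, including its shuffle stage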

  6. After starting spark-shell, load the two sets of Key-Value data ("A",1),("B",2),("C",3),("A",4),("B",5) and ("A",1),("B",2),("C",3),("A",4),("B",5) in Scala, and JOIN the two data sets on Key.
    scala> val kv4 = sc.parallelize(List(("A",1),("B",2),("C",3),("A",4),("B",5)))
    kv4: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[17] at parallelize at <console>:24

    scala> val kv5 = sc.parallelize(List(("A",1),("B",2),("C",3),("A",4),("B",5)))
    kv5: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[18] at parallelize at <console>:24

    scala> kv4.join(kv5).collect
    res11: Array[(String, (Int, Int))] = Array((A,(1,1)), (A,(1,4)), (A,(4,1)), (A,(4,4)), (B,(2,2)), (B,(2,5)), (B,(5,2)), (B,(5,5)), (C,(3,3)))

  7. Log in to spark-shell, define i with the value 1 and sum with the value 0, use a while loop to add the numbers from 1 to 100, and finally print sum with Scala's standard output function.
    scala> var i = 1
    i: Int = 1
    scala> var sum = 0
    sum: Int = 0
    scala> while(i<=100){sum+=i;i+=1;}
    scala> print(sum)
    5050

  8. Log in to spark-shell, define i with the value 1 and sum with the value 0, use a for loop to add the numbers from 1 to 100, and finally print sum with Scala's standard output function.
    scala> var i = 1
    i: Int = 1

    scala> var sum = 0
    sum: Int = 0

    scala> for(i<-1 to 100) sum+=i

    scala> print(sum)
    5050

  9. Log in to spark-shell, define the variables i and sum, initialize i to 1 and sum to 0, and with a step of 3 use a while loop to add the numbers from 1 to 2018, finally printing sum with Scala's standard output function.
    scala> var i = 1
    i: Int = 1

    scala> var sum = 0
    sum: Int = 0

    scala> while(i<=2018){sum+=i;i+=3}

    scala> print(sum)
    679057
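
    As a quick cross-check, the same value can be produced with a stepped Range, which sums 1, 4, 7, ..., 2017 directly:
    scala> (1 to 2018 by 3).sum   // Int = 679057, matching the loop above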

10. Every functional language has the map and flatMap functions. map, as its name suggests, takes a function, applies it to every element of a collection, and returns the processed results. The only way flatMap differs from map is that the function passed to it must return a collection (such as a List) for each element, so that the results can then be flattened in the flat step.
(1) Log in to spark-shell, define a list, and use the map function to multiply each element of the list by 2.
(2) Log in to spark-shell, define a list, and use the flatMap function to break the list into individual letters and convert them to upper case.
scala> import scala.math._
import scala.math._
scala> val num = List(1,2,3,4,5,6,7,8,9)
num: List[Int] = List(1, 2, 3, 4, 5, 6, 7, 8, 9)
scala> num.map(x=>x*2)
res6: List[Int] = List(2, 4, 6, 8, 10, 12, 14, 16, 18)
scala> val data = List("hadoop","Pig","Hive")
data: List[String] = List(hadoop, Pig, Hive)
scala> data.flatMap(_.toUpperCase)
res7: List[Char] = List(H, A, D, O, O, P, P, I, G, H, I, V, E)
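
For contrast, map with the same function keeps one result per element instead of flattening; this is plain Scala collection behaviour rather than anything Spark-specific:
scala> data.map(_.toUpperCase)   // List[String] = List(HADOOP, PIG, HIVE): one String per element, nothing flattened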

11. Log in to the master node of the big data cloud host and create a file abc.txt under the root directory with the following content:
hadoop hive
solr redis
kafka hadoop
storm flume
sqoop docker
spark spark
hadoop spark
elasticsearch hbase
hadoop hive
spark hive
hadoop spark
Then log in to spark-shell. First count the number of lines in abc.txt, then count the words in the document, sort the counts in ascending order of the words' first letters, and finally count the number of rows in the result.
[root@master ~]#cat abc.txt
hadoop hive
solr redis
kafka hadoop
storm flume
sqoop docker
spark spark
hadoop spark
elasticsearch hbase
hadoop hive
spark hive
hadoop spark
[root@master ~]# spark-shell
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://10.0.0.100:4040
Spark context available as 'sc' (master = local[*], app id = local-1558625246380).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.1.2.6.1.0-129
      /_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77)
Type in expressions to have them evaluated.
Type :help for more information.
scala> val file = sc.textFile("file:///root/abc.txt")
file: org.apache.spark.rdd.RDD[String] = file:///root/abc.txt MapPartitionsRDD[1] at textFile at <console>:24
scala> file.count()
res0: Long = 11
scala> val a = file.flatMap(line => line.split(" "))
a: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[2] at flatMap at <console>:26
scala> val b = a.map(word => (word,1))
b: org.apache.spark.rdd.RDD[(String, Int)] = MapPartitionsRDD[3] at map at <console>:28
scala> val c = b.reduceByKey(_+_)
c: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[4] at reduceByKey at <console>:30
scala> c.collect
res1: Array[(String, Int)] = Array((hive,3), (docker,1), (solr,1), (kafka,1), (sqoop,1), (spark,5), (hadoop,5), (flume,1), (storm,1), (elasticsearch,1), (redis,1), (hbase,1))
scala> val d = c.sortByKey(true)
d: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[7] at sortByKey at <console>:32
scala> d.collect
res2: Array[(String, Int)] = Array((docker,1), (elasticsearch,1), (flume,1), (hadoop,5), (hbase,1), (hive,3), (kafka,1), (redis,1), (solr,1), (spark,5), (sqoop,1), (storm,1))
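
The question also asks for the number of rows in the final result; the transcript stops at d.collect, but a count along the following lines completes it (the value follows from the twelve pairs shown in res2):
scala> d.count()   // Long = 12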

  12. Log in to spark-shell, define a List, and use Spark's built-in function to remove duplicates from the List.
    [root@master ~]# spark-shell
    Setting default log level to "WARN".
    To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
    Spark context Web UI available at http://10.0.0.100:4040
    Spark context available as 'sc' (master = local[*], app id = local-1558625246380).
    Spark session available as 'spark'.
    Welcome to
          ____              __
         / __/__  ___ _____/ /__
        _\ \/ _ \/ _ `/ __/  '_/
       /___/ .__/\_,_/_/ /_/\_\   version 2.1.1.2.6.1.0-129
          /_/

    Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77)
    Type in expressions to have them evaluated.
    Type :help for more information.
    scala> val data = sc.parallelize(List(1,2,4,5,32,1,5,5,4,6,8,1,3))
    data: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[24] at parallelize at <console>:24
    scala> data.distinct.collect
    res6: Array[Int] = Array(4, 32, 8, 1, 5, 6, 2, 3)
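
    If a stable ordering of the deduplicated values is wanted for the answer box, the collected array can also be sorted locally; a minimal sketch:
    scala> data.distinct.collect.sorted   // Array(1, 2, 3, 4, 5, 6, 8, 32)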

  13. Log in to the spark-shell interactive interface. Given the data below, use Spark to count the number of users newly added on each date, and display the result.
    [root@master ~]# spark-shell
    Setting default log level to "WARN".
    To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
    Spark context Web UI available at http://10.0.0.100:4040
    Spark context available as 'sc' (master = local[*], app id = local-1558657678873).
    Spark session available as 'spark'.
    Welcome to
          ____              __
         / __/__  ___ _____/ /__
        _\ \/ _ \/ _ `/ __/  '_/
       /___/ .__/\_,_/_/ /_/\_\   version 2.1.1.2.6.1.0-129
          /_/

    Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77)
    Type in expressions to have them evaluated.
    Type :help for more information.
    scala> val data = spark.sparkContext.parallelize(Array(("2017-01-01","a"),("2017-01-01","f"),("2017-01-01","g"),("2017-01-02","h"),("2017-01-02","j"),("2017-01-02","k"),("2017-01-02","l"),("2017-01-03","x"),("2017-01-03","y"),("2017-01-03","z")))
    data: org.apache.spark.rdd.RDD[(String, String)] = ParallelCollectionRDD[15] at parallelize at <console>:23
    scala> val date = data.map(kv => (kv._2,kv._1))
    date: org.apache.spark.rdd.RDD[(String, String)] = MapPartitionsRDD[16] at map at <console>:25
    scala> val date1 = date.groupByKey()
    date1: org.apache.spark.rdd.RDD[(String, Iterable[String])] = ShuffledRDD[18] at groupByKey at <console>:27
    scala> val date2 = date1.map(kv => (kv._2.min,1))
    date2: org.apache.spark.rdd.RDD[(String, Int)] = MapPartitionsRDD[20] at map at <console>:29
    scala> date2.countByKey().foreach(println)
    (2017-01-01,3)
    (2017-01-02,4)
    (2017-01-03,3)
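
    The idea above is: flip each (date, user) pair to (user, date), take each user's earliest date as the date that user was added, then count users per date. An equivalent sketch that avoids collecting every user's dates into one group keeps only the minimum date per user with reduceByKey (same data RDD as above; ISO date strings compare correctly as text):
    scala> val firstSeen = data.map(kv => (kv._2, kv._1)).reduceByKey((a, b) => if (a < b) a else b)
    scala> firstSeen.map(kv => (kv._2, 1)).reduceByKey(_ + _).collect   // counts of new users per date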

  14. Log in to the spark-shell interactive interface. Define a function that compares the two variables passed in and returns the larger one.
    [root@master ~]# spark-shell
    Setting default log level to "WARN".
    To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
    Spark context Web UI available at http://10.0.0.100:4040
    Spark context available as 'sc' (master = local[*], app id = local-1558657678873).
    Spark session available as 'spark'.
    Welcome to
          ____              __
         / __/__  ___ _____/ /__
        _\ \/ _ \/ _ `/ __/  '_/
       /___/ .__/\_,_/_/ /_/\_\   version 2.1.1.2.6.1.0-129
          /_/

    Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77)
    Type in expressions to have them evaluated.
    Type :help for more information.
    scala> def max(a:Int,b:Int) = if (a>b) a else b
    max: (a: Int, b: Int)Int
    scala> var x = 66
    x: Int = 66
    scala> var y = 88
    y: Int = 88
    scala> max(x,y)
    res1: Int = 88
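
    For comparison, the standard library gives the same result without a custom function; a one-line check:
    scala> math.max(x, y)   // Int = 88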


Reposted from blog.csdn.net/mn525520/article/details/93781426