The role and use of Spark broadcast variables

The role of broadcast variables

  • Broadcast variable: a distributed, read-only variable.
  • If Executor-side code needs to access a variable defined on the Driver side, Spark by default sends a copy of that variable to every task on the Executor. If the variable is large, these copies consume a large amount of the Executor node's memory.
  • With a broadcast variable, Spark sends only one copy of the variable to each Executor node, shared by all tasks running on it.

Use of broadcast variables

Requirement

        Combine a List with an RDD to achieve an effect similar to the join operator.

import org.apache.spark.broadcast.Broadcast
import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

object Spark08_Broadcast {

  def main(args: Array[String]): Unit = {

    val conf: SparkConf = new SparkConf().setAppName(this.getClass.getName).setMaster("local[*]")
    val sc = new SparkContext(conf)

    val list1: List[(String, Int)] = List(("a", 1), ("b", 2), ("c", 2))
    val list2: RDD[(String, Int)] = sc.makeRDD(List(("a", 3), ("b", 4), ("c", 5)))

    // Register list1 as a broadcast variable
    val broadList: Broadcast[List[(String, Int)]] = sc.broadcast(list1)

    // Join the two datasets, producing the structure (key, (value1, value2))
    val resRDD: RDD[(String, (Int, Int))] = list2.map {
      case (word, count) =>
        // Temporary variable holding the value for the matching key
        var v3 = 0
        // Read the broadcast variable's value on the Executor side
        val broadValue: List[(String, Int)] = broadList.value
        for (w <- broadValue) {
          if (w._1 == word) {
            v3 = w._2
          }
        }
        (word, (count, v3))
    }
    resRDD.foreach(println)

    sc.stop()
  }
}
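The core of the example is a map-side join: each record of the large RDD looks up its key in the small broadcast list. A minimal sketch of that lookup logic in plain Scala is shown below (Spark is omitted so it runs standalone; the object and method names are mine, not from the original). Converting the small side to a Map gives O(1) lookups instead of the linear scan in the loop above; in the real job you would broadcast the map and read it via `.value` inside the `map`.

```scala
// Sketch of the map-side join logic, without Spark.
object MapSideJoinSketch {
  def joinWithSmallSide(
      big: Seq[(String, Int)],
      small: Seq[(String, Int)]): Seq[(String, (Int, Int))] = {
    // Build a Map from the small side for constant-time lookups
    val lookup: Map[String, Int] = small.toMap
    // getOrElse(word, 0) mirrors the `var v3 = 0` default in the original loop
    big.map { case (word, count) => (word, (count, lookup.getOrElse(word, 0))) }
  }
}
```

Broadcasting a `Map` instead of a `List` is a common refinement: with the list, every record pays a scan proportional to the list's size, which matters once the small side grows beyond a handful of entries.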

Origin blog.csdn.net/FlatTiger/article/details/115134300