How to properly stop a Spark Streaming service [one article is enough]

After consulting some material found online and combining it with my own practice, I have organized it as follows:

Method 1: set a switch in the program to stop the service (recommended)

When the directory /user/root/stop_sparkStreaming is detected on HDFS, the service shuts down gracefully.

package com.sjmz.sparkdemo

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

/**
  * To run this on your local machine, you need to first run a Netcat server
  * nc -lk 9999
  * and then run the example
  * spark-submit --class com.sjmz.sparkdemo.SparkStreamWordCount xxx.jar localhost 9999
  */
object SparkStreamWordCount {
  val shutdownMarker = "/user/root/stop_sparkStreaming"
  // flag to stop the spark streaming service
  var stopFlag: Boolean = false

  def main(args: Array[String]): Unit = {
    if (args.length < 2) {
      System.err.println("Usage: SparkStreamWordCount <hostname> <port>")
      System.exit(1)
    }

    // Create a local StreamingContext with two working threads and a batch interval of 10 seconds.
    // The master requires 2 cores to prevent a starvation scenario.
    val conf = new SparkConf().setMaster("local[2]").setAppName("SparkStream_NetworkWordCount")
    val ssc = new StreamingContext(conf, Seconds(10))

    // Create a DStream that will connect to hostname:port, like localhost:9999
    val lines = ssc.socketTextStream(args(0), args(1).toInt)

    // Split each line into words
    val words = lines.flatMap(_.split(" "))

    // Count each word in each batch
    val pairs = words.map(word => (word, 1))
    val wordCounts = pairs.reduceByKey(_ + _)

    // Print the first ten elements of each RDD generated in this DStream to the console
    wordCounts.print()

    // Start the computation
    ssc.start()

    val checkIntervalMillis = 10000
    var isStopped: Boolean = false
    while (!stopFlag) {
      isStopped = ssc.awaitTerminationOrTimeout(checkIntervalMillis)
      if (isStopped) {
        println("WARNING!!! The spark streaming context is stopped. Exiting application ......")
        // The context was stopped externally; exit the monitoring loop as well
        stopFlag = true
      } else {
        println("spark streaming is still running......")
      }

      toShutDown_SparkStreaming
      if (!isStopped && stopFlag) {
        println("======> to stop ssc right now")
        // First true (stopSparkContext): also stop the underlying SparkContext,
        // whether or not this StreamingContext has been started.
        // Second true (stopGracefully): wait until the processing of all received
        // data has completed, then stop gracefully.
        ssc.stop(stopSparkContext = true, stopGracefully = true)
        println("<====== ssc is stopped !!!")
      }
    }
  }

  def toShutDown_SparkStreaming: Unit = {
    if (!stopFlag) {
      // Check whether the Spark Streaming service should be stopped
      val fs = FileSystem.get(new Configuration())
      // If the shutdownMarker directory exists on HDFS, stop the service
      stopFlag = fs.exists(new Path(shutdownMarker))
    }
  }
}
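
To trigger the graceful shutdown, create the marker directory on HDFS with the standard Hadoop CLI (the path must match shutdownMarker); the job will detect it within one polling interval and stop:

hdfs dfs -mkdir -p /user/root/stop_sparkStreaming

Remember to delete the marker before restarting the job, otherwise it will stop again right away:

hdfs dfs -rm -r /user/root/stop_sparkStreaming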

Method 2: set the following parameter in SparkConf, find the running job on the Spark UI page, and click kill

sparkConf.set("spark.streaming.stopGracefullyOnShutdown", "true")
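
A minimal sketch of how this parameter is wired in (the object name and the socket source below are illustrative, not from the original post): set it on the SparkConf before creating the StreamingContext, and Spark will register a JVM shutdown hook that stops the streaming context gracefully when the driver is terminated.

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object GracefulShutdownDemo {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf()
      .setMaster("local[2]")
      .setAppName("GracefulShutdownDemo")
      // Stop the StreamingContext gracefully on JVM shutdown instead of killing it immediately
      .set("spark.streaming.stopGracefullyOnShutdown", "true")

    val ssc = new StreamingContext(sparkConf, Seconds(10))

    // Any simple streaming job will do; a socket source as in the example above
    val lines = ssc.socketTextStream("localhost", 9999)
    lines.print()

    ssc.start()
    ssc.awaitTermination()
  }
}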


Original article: blog.csdn.net/sjmz30071360/article/details/86038393