Spark-Core Source Code Notes 0: Cluster Startup Scripts and the Role of launcher.Main

This series records my review of the Spark source code. The aim is to untangle how Spark distributes and runs programs, tracing the key source code so that we understand not only what happens but why; peripheral code is only described in prose, without drilling into it, so as not to obscure the main thread.
This post opens the series and analyzes the flow of the scripts that start the cluster.

Spark Configuration Files

This post assumes the official Spark installation layout; for CDH and other distributions, please look up the corresponding configuration file locations yourself.

$SPARK_HOME/conf/
├── log4j.properties.template
├── slaves
├── spark-defaults.conf
└── spark-env.sh

spark-defaults.conf and spark-env.sh hold some default properties, and slaves lists the hostnames of the workers.
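
For reference, the three files typically contain entries along the following lines; the hostnames and values here are hypothetical examples, not defaults shipped with Spark:

# conf/slaves -- one worker hostname per line (hypothetical hosts)
worker01
worker02

# conf/spark-env.sh -- environment variables read by the startup scripts (example values)
export SPARK_MASTER_HOST=master01
export SPARK_WORKER_MEMORY=4g

# conf/spark-defaults.conf -- default SparkConf properties (example value)
spark.master    spark://master01:7077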

Spark Startup Scripts

$SPARK_HOME/sbin/               $SPARK_HOME/bin/
├── start-all.sh                ├── spark-class
├── spark-config.sh             ├── spark-shell
├── start-master.sh             ├── spark-submit
├── spark-daemon.sh
├── start-slaves.sh
└── start-slave.sh

Start with start-all.sh: all it does is load spark-config.sh and then call start-master.sh and start-slaves.sh in turn.

#!/usr/bin/env bash
if [ -z "${SPARK_HOME}" ]; then
  export SPARK_HOME="$(cd "`dirname "$0"`"/..; pwd)"
fi
# Load the Spark configuration
. "${SPARK_HOME}/sbin/spark-config.sh"
# Start Master
"${SPARK_HOME}/sbin"/start-master.sh
# Start Workers
"${SPARK_HOME}/sbin"/start-slaves.sh

Next comes start-master.sh: it simply invokes spark-daemon.sh with a handful of arguments, the key one being CLASS.

#!/usr/bin/env bash   # Some lines omitted; only the main flow is kept
# NOTE: This exact class name is matched downstream by SparkSubmit.
# Any changes need to be reflected there.
CLASS="org.apache.spark.deploy.master.Master"
# Default ports: SPARK_MASTER_PORT=7077, SPARK_MASTER_WEBUI_PORT=8080
# SPARK_MASTER_HOST="`hostname -f`"
"${SPARK_HOME}/sbin"/spark-daemon.sh start $CLASS 1 \
  --host $SPARK_MASTER_HOST --port $SPARK_MASTER_PORT --webui-port $SPARK_MASTER_WEBUI_PORT \
  $ORIGINAL_ARGS
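
With the defaults above and a hypothetical master host, that invocation resolves to roughly the following (a sketch for orientation, not copied from any log):

"${SPARK_HOME}/sbin"/spark-daemon.sh start org.apache.spark.deploy.master.Master 1 \
  --host master01 --port 7077 --webui-port 8080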

Stepping into spark-daemon.sh, we see that all arguments are eventually handed to "${SPARK_HOME}"/bin/spark-class. Internally the script distinguishes between submit, start, stop and other operations, acting as a unified entry point. The comment # Runs a Spark command as a daemon. tells us the command is run as a daemon process.

#!/usr/bin/env bash   # Only the key logic is kept
# Runs a Spark command as a daemon.

run_command() {
  mode="$1"   # mode=class on the start path
  case "$mode" in
    (class)
      execute_command nice -n "$SPARK_NICENESS" "${SPARK_HOME}"/bin/spark-class "$command" "$@"
      ;;
    (submit)
      execute_command nice -n "$SPARK_NICENESS" bash "${SPARK_HOME}"/bin/spark-submit --class "$command" "$@"
      ;;
  esac
}

option=$1   # option is the first argument passed in above, i.e. start
case $option in
  (submit)
    run_command submit "$@"
    ;;
  (start)   # matched here
    run_command class "$@"
    ;;
esac

Next, bin/spark-class. In essence it ends up running java -Xmx128m -cp $PATH org.apache.spark.launcher.Main "$@", which takes us into the Spark source itself; we will get to that later and first finish the startup scripts with "${SPARK_HOME}/sbin"/start-slaves.sh.

#!/usr/bin/env bash 
SPARK_JARS_DIR="${SPARK_HOME}/jars"
LAUNCH_CLASSPATH="$SPARK_JARS_DIR/*"
LAUNCH_CLASSPATH="${SPARK_HOME}/launcher/target/scala-$SPARK_SCALA_VERSION/classes:$LAUNCH_CLASSPATH"
RUNNER="${JAVA_HOME}/bin/java"
# The lines above set up the runtime environment; the line below does the actual launch
"$RUNNER" -Xmx128m -cp "$LAUNCH_CLASSPATH" org.apache.spark.launcher.Main "$@"

Open start-slaves.sh; it simply delegates to "${SPARK_HOME}/sbin/slaves.sh":

#!/usr/bin/env bash   # Only the key code is kept
# Launch the slaves
"${SPARK_HOME}/sbin/slaves.sh" cd "${SPARK_HOME}" \; \
  "${SPARK_HOME}/sbin/start-slave.sh" \
  "spark://$SPARK_MASTER_HOST:$SPARK_MASTER_PORT"

Now let's focus on "${SPARK_HOME}/sbin/slaves.sh" itself:

#!/usr/bin/env bash   # Only the key code is kept
HOSTLIST=`cat "${SPARK_CONF_DIR}/slaves"`  # read the workers configured in slaves
# By default disable strict host key checking
SPARK_SSH_OPTS="-o StrictHostKeyChecking=no"
for slave in `echo "$HOSTLIST"|sed  "s/#.*$//;/^$/d"`; do  # iterate over the entries in slaves
    ssh $SPARK_SSH_OPTS "$slave" $"${@// /\\ }" 2>&1 | sed "s/^/$slave: /" &
done
wait

So the script SSHes into every slave (passwordless login is assumed to be configured) and executes the command that was passed in as its arguments; that is how the master gets each worker node to run its own start-slave.sh.

cd "${SPARK_HOME}" \; "${SPARK_HOME}/sbin/start-slave.sh" "spark://$SPARK_MASTER_HOST:$SPARK_MASTER_PORT"
#!/usr/bin/env bash   # start-slave.sh as run on each worker node
CLASS="org.apache.spark.deploy.worker.Worker"  # same pattern as the Master startup above
MASTER=$1  # MASTER=spark://$SPARK_MASTER_HOST:$SPARK_MASTER_PORT

function start_instance {
  WORKER_NUM=$1
  shift
  # WEBUI_PORT, PORT_FLAG and PORT_NUM are derived by their respective rules;
  # MASTER is the spark://$SPARK_MASTER_HOST:$SPARK_MASTER_PORT passed in
  "${SPARK_HOME}/sbin"/spark-daemon.sh start $CLASS $WORKER_NUM \
     --webui-port "$WEBUI_PORT" $PORT_FLAG $PORT_NUM $MASTER "$@"
}

# SPARK_WORKER_INSTANCES: the number of worker instances to run on this slave. Default is 1.
if [ "$SPARK_WORKER_INSTANCES" = "" ]; then
  start_instance 1 "$@"
else
  for ((i=0; i<$SPARK_WORKER_INSTANCES; i++)); do
    start_instance $(( 1 + $i )) "$@"  # one slave can run multiple workers
  done
fi
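
By analogy with the Master, a single worker instance ends up issuing a call roughly like the sketch below; the worker web UI defaults to 8081, --port is only appended when SPARK_WORKER_PORT is set, and the master URL is hypothetical:

"${SPARK_HOME}/sbin"/spark-daemon.sh start org.apache.spark.deploy.worker.Worker 1 \
  --webui-port 8081 spark://master01:7077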

Once again we end up in spark-daemon.sh with the start option, just as for the Master; only the CLASS passed in differs, and ultimately the same java -Xmx128m -cp $PATH org.apache.spark.launcher.Main "$@" is run.

The org.apache.spark.launcher.Main Class

Note that the source of org.apache.spark.launcher.Main does not live in the launcher package inside Spark-Core but in the standalone Spark launcher module. Below is the class comment of Main, which is short and clear: it is the command-line entry point of the Spark launcher and handles the submit and class modes differently.

package org.apache.spark.launcher;
/**
 * Command line interface for the Spark launcher. Used internally by Spark scripts.
 */
class Main {
  /**
   * Usage: Main [class] [class args]
   * <p>
   * This CLI works in two different modes:
   * <ul>
   *   <li>"spark-submit": if <i>class</i> is "org.apache.spark.deploy.SparkSubmit", the
   *   {@link SparkLauncher} class is used to launch a Spark application.</li>
   *   <li>"spark-class": if another class is provided, an internal Spark class is run.</li>
   * </ul>
   *
   * This class works in tandem with the "bin/spark-class" script on Unix-like systems
   */
  public static void main(String[] argsArray) throws Exception {...}
}

From here on, only the retained main-line source is shown; an occasional variable may appear without its origin being shown, a trade-off made only where it does not hurt readability. For example, className in the main method below has a clear origin in the source but is elided in this walkthrough.

public static void main(String[] argsArray) throws Exception {
  List<String> args = new ArrayList<>(Arrays.asList(argsArray));
  String className = args.remove(0);
  ... // remaining setup (env, printLaunchCommand, etc.) is elided; similar omissions are not called out again
  if (className.equals("org.apache.spark.deploy.SparkSubmit")) {
    // SparkSubmit is revisited in a later post and not expanded here
  } else {
    // the CLASS passed in by the scripts is org.apache.spark.deploy.master.Master
    // or org.apache.spark.deploy.worker.Worker
    AbstractCommandBuilder builder = new SparkClassCommandBuilder(className, args);
    cmd = buildCommand(builder, env, printLaunchCommand);
    // buildCommand in turn calls the builder's own buildCommand method
  }
}

SparkClassCommandBuilder is just a thin wrapper around the class name and its arguments, and it overrides buildCommand; let's look at that method:

  @Override
  public List<String> buildCommand(Map<String, String> env){
    List<String> javaOptsKeys = new ArrayList<>();
    String memKey = null; String extraClassPath = null;
    // Master, Worker, HistoryServer, ExternalShuffleService, MesosClusterDispatcher use
    // SPARK_DAEMON_JAVA_OPTS (and specific opts) + SPARK_DAEMON_MEMORY.
    switch (className) { // many branches here, all just initializing variables; the property names are looked up in the config files for defaults. For now we only care about Master and Worker
      case "org.apache.spark.deploy.master.Master":
        javaOptsKeys.add("SPARK_DAEMON_JAVA_OPTS");
        javaOptsKeys.add("SPARK_MASTER_OPTS");
        extraClassPath = getenv("SPARK_DAEMON_CLASSPATH");
        memKey = "SPARK_DAEMON_MEMORY";
        break;
      case "org.apache.spark.deploy.worker.Worker":
        javaOptsKeys.add("SPARK_DAEMON_JAVA_OPTS");
        javaOptsKeys.add("SPARK_WORKER_OPTS");
        extraClassPath = getenv("SPARK_DAEMON_CLASSPATH");
        memKey = "SPARK_DAEMON_MEMORY";
        break;
      case "org.apache.spark.deploy.history.HistoryServer":break;
      case "org.apache.spark.executor.CoarseGrainedExecutorBackend":break;
      case "org.apache.spark.executor.MesosExecutorBackend":break;
      case "org.apache.spark.deploy.mesos.MesosClusterDispatcher":break;
      case "org.apache.spark.deploy.ExternalShuffleService":
      case "org.apache.spark.deploy.mesos.MesosExternalShuffleService":break;
      default:memKey = "SPARK_DRIVER_MEMORY";break;
    }
    // format the option strings above into JVM arguments
    List<String> cmd = buildJavaCommand(extraClassPath);
    for (String key : javaOptsKeys) {
      String envValue = System.getenv(key);
      addOptionString(cmd, envValue);
    }
    String mem = firstNonEmpty(memKey != null ? System.getenv(memKey) : null, DEFAULT_MEM);
    cmd.add("-Xmx" + mem);
    cmd.add(className);
    cmd.addAll(classArgs);
    return cmd;
  }
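
To make this concrete, for the Master case with SPARK_DAEMON_MEMORY unset (the launcher's DEFAULT_MEM is 1g), the assembled command comes out roughly as the sketch below; the classpath is abbreviated and the host is hypothetical:

java -cp "${SPARK_HOME}/conf/:${SPARK_HOME}/jars/*" -Xmx1g \
  org.apache.spark.deploy.master.Master --host master01 --port 7077 --webui-port 8080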

buildCommand, then, is still just packaging arguments. Back in the main method:

public static void main(String[] argsArray) throws Exception {
	List<String> bashCmd = prepareBashCommand(cmd, env);
	for (String c : bashCmd) {
   	 	System.out.print(c);
   	 	System.out.print('\0');
	}
}
/*Prepare the command for execution from a bash script. The final command will have commands to set up any needed environment variables needed by the child process.*/
 private static List<String> prepareBashCommand(List<String> cmd, Map<String, String> childEnv) {}

Judging from its comment, prepareBashCommand also only dresses the arguments up into a form a bash script can execute.
The really interesting part is System.out.print(c); System.out.print('\0');: every formatted argument is written to stdout, delimited by the NUL character '\0', which is exactly what the calling script expects.
Now let's turn back to the spark-class script:

# Turn off posix mode since it does not allow process substitution
set +o posix

build_command() {
  "$RUNNER" -Xmx128m -cp "$LAUNCH_CLASSPATH" org.apache.spark.launcher.Main "$@"
  printf "%d\0" $?
}

CMD=()
# Read the launcher's output here, split on the NUL delimiter ('\0'), and collect it into CMD
while IFS= read -d '' -r ARG; do
  CMD+=("$ARG")
done < <(build_command "$@")   # build_command is the entry point here

COUNT=${#CMD[@]}
LAST=$((COUNT - 1))
# The last element is the launcher's exit code, used to check for problems
LAUNCHER_EXIT_CODE=${CMD[$LAST]}
CMD=("${CMD[@]:0:$LAST}")
# Finally execute the assembled command
exec "${CMD[@]}"

After this long detour we are back where we started. If the steps above are still fresh, you can picture roughly what CMD looks like at this point:
java -cp extraClassPath -Xmx DEFAULT_MEM className classArgs
The finer details do not matter here; className is exactly the org.apache.spark.deploy.master.Master or org.apache.spark.deploy.worker.Worker that we passed in at the very beginning, so those two classes are where we head next.
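
If you want to see for yourself what launcher.Main emits, a quick experiment (assuming SPARK_HOME is exported and points at a Spark distribution) is to run it by hand and turn the NUL delimiters into newlines for readability:

java -Xmx128m -cp "${SPARK_HOME}/jars/*" org.apache.spark.launcher.Main \
  org.apache.spark.deploy.master.Master --port 7077 --webui-port 8080 | tr '\0' '\n'
# Each line is one element of the command; inside spark-class, build_command additionally
# appends the launcher's exit code with printf "%d\0" $?.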

References:

Apache Spark source code

Reposted from blog.csdn.net/u011372108/article/details/88956743