Introduction: Spark Standalone is a cluster mode built on a master-slave architecture. Like most master-slave clusters, it suffers from a Master single point of failure (SPOF) problem.
- Standalone HA model: at its core, it uses ZooKeeper (ZK) for leader election among the masters.
- Setup process: just a few configuration-file changes on top of the earlier Standalone mode.
- Configure on node1:
vim /export/server/spark/conf/spark-env.sh
Comment out or delete the SPARK_MASTER_HOST setting:
# SPARK_MASTER_HOST=node1
Add the following configuration:
SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=node1:2181,node2:2181,node3:2181 -Dspark.deploy.zookeeper.dir=/spark-ha"
Parameter meanings:
spark.deploy.recoveryMode: the recovery mode (ZOOKEEPER enables ZooKeeper-based HA)
spark.deploy.zookeeper.url: the addresses of the ZooKeeper servers
spark.deploy.zookeeper.dir: the ZooKeeper directory where cluster metadata is stored, including Worker, Driver, and Application information
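Before restarting anything, it can help to confirm the HA setting actually landed in spark-env.sh on every node. A minimal sketch (the helper name check_ha_conf is illustrative, not part of Spark):

```shell
#!/bin/sh
# Illustrative helper: verify that a spark-env.sh file enables
# ZooKeeper-based recovery. Exits 0 (success) if the setting is present.
check_ha_conf() {
  grep -q 'spark.deploy.recoveryMode=ZOOKEEPER' "$1"
}

# Example: run on each node after distributing the file.
# check_ha_conf /export/server/spark/conf/spark-env.sh && echo "HA configured"
```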
- Distribute spark-env.sh to the cluster
cd /export/server/spark/conf
scp -r spark-env.sh root@node2:$PWD
scp -r spark-env.sh root@node3:$PWD
- Start cluster service
Start the ZooKeeper service on each ZooKeeper node (check its status first, then restart if needed):
zkServer.sh status
zkServer.sh stop
zkServer.sh start
Start the Spark cluster on node1:
/export/server/spark/sbin/start-all.sh
On node2, additionally start a standalone (standby) Master:
/export/server/spark/sbin/start-master.sh
Check the Web UI:
http://node1:8080/
http://node2:8080/
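Besides eyeballing the two Web UIs, each master's UI also serves a JSON summary (at /json/ on the same port in most Spark versions) whose "status" field reads ALIVE on the active master and STANDBY on the backup. A hedged sketch of checking it from the command line (the function names are illustrative):

```shell
#!/bin/sh
# Illustrative check of a Spark master's role via its web UI JSON endpoint.

parse_status() {
  # Pull the value of the "status" field out of a JSON body on stdin.
  grep -o '"status" *: *"[A-Z]*"' | sed 's/.*"\([A-Z]*\)"$/\1/'
}

master_status() {
  # $1 = master host, e.g. node1 or node2.
  curl -s "http://$1:8080/json/" | parse_status
}

# Usage once the cluster is up:
#   master_status node1    # prints ALIVE or STANDBY
#   master_status node2
```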
- The setup is complete.
Tests:
- Use SparkShell interactive command line
bin/spark-shell --master spark://node1:7077,node2:7077
- wordcount test:
sc.textFile("hdfs://node1:8020/wordcount/input/words.txt").flatMap(x=>x.split("\\s+")).map(x=>(x,1)).reduceByKey((a,b)=>a+b).collect
- Pi test:
bin/spark-submit \
--master spark://node1:7077,node2:7077 \
--class org.apache.spark.examples.SparkPi \
/export/server/spark/examples/jars/spark-examples_2.11-2.4.5.jar \
10
- Verify HA mode:
Effect: stop or kill the active Master and confirm that the standby Master takes over and running applications continue.
Note: according to the official documentation, it takes roughly 1-2 minutes for a standby Master to be promoted to the ALIVE state.
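A failover drill makes the effect concrete: stop the active Master and watch the standby get promoted. A hedged sketch using the hostnames from this guide (the function names are illustrative; as noted above, promotion may take 1-2 minutes):

```shell
#!/bin/sh
# Illustrative failover drill for the node1/node2 setup above.

fetch_status() {
  # Read the "status" field from a master's /json/ endpoint (port 8080).
  curl -s "http://$1:8080/json/" |
    grep -o '"status" *: *"[A-Z]*"' | sed 's/.*"\([A-Z]*\)"$/\1/'
}

failover_drill() {
  # 1. Stop the active Master (run this on node1; a 'kill -9' of the
  #    Master JVM would simulate a hard crash instead).
  /export/server/spark/sbin/stop-master.sh

  # 2. Poll node2 until it reports ALIVE (give up after ~3 minutes).
  i=0
  while [ "$i" -lt 36 ]; do
    if [ "$(fetch_status node2)" = "ALIVE" ]; then
      echo "node2 is now the active master"
      return 0
    fi
    sleep 5
    i=$((i + 1))
  done
  echo "timed out waiting for failover" >&2
  return 1
}
```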