Principle
Operation
- Start ZooKeeper
- Edit the configuration:
vim /export/server/spark/conf/spark-env.sh
Comment out: #SPARK_MASTER_HOST=node01
Add:
SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=node01:2181,node02:2181,node03:2181 -Dspark.deploy.zookeeper.dir=/spark-ha"
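The line above packs three `-D` options into one string: the recovery mode, the ZooKeeper quorum, and the znode directory Spark uses for HA state. A quick sanity check is to split the string and read each option on its own line (the value below is copied verbatim from the setting above):

```shell
# Split SPARK_DAEMON_JAVA_OPTS into one -D option per line for review.
OPTS='-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=node01:2181,node02:2181,node03:2181 -Dspark.deploy.zookeeper.dir=/spark-ha'
echo "$OPTS" | tr ' ' '\n'
```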
- Distribute the configuration:
cd /export/server/spark/conf
scp -r spark-env.sh root@node02:$PWD
scp -r spark-env.sh root@node03:$PWD
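The two `scp` commands above can be collapsed into one loop; a sketch, assuming the same node names and paths as above (the `echo` makes it a dry run; drop it to actually copy):

```shell
# Dry-run sketch: distribute spark-env.sh to every worker node in one loop.
CONF=/export/server/spark/conf
for node in node02 node03; do
  echo scp -r "$CONF/spark-env.sh" "root@$node:$CONF"   # drop `echo` to copy for real
done
```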
Test
- Start the ZooKeeper service (on every node):
zkServer.sh status
zkServer.sh stop
zkServer.sh start
- On node01, start the Spark cluster:
/export/server/spark/sbin/start-all.sh
- On node02, additionally start just a single standby Master:
/export/server/spark/sbin/start-master.sh
- Check the WebUI:
http://node01:8080/
http://node02:8080/
- Simulate the Master on node01 going down:
jps
kill -9 <Master PID>
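Picking the Master's PID out of `jps` output can be done with `awk`; an illustrative sketch only, using made-up, simulated `jps` output (real PIDs will differ):

```shell
# Simulated jps output; on a real node, use: jps
jps_output='2101 Master
2203 Worker
2300 Jps'
# Take the PID from the line whose second column is "Master".
master_pid=$(printf '%s\n' "$jps_output" | awk '$2 == "Master" {print $1}')
echo "kill -9 $master_pid"   # on a real node, run: kill -9 "$master_pid"
```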
- Check the WebUI again; the standby Master on node02 should take over as the active one:
http://node01:8080/
http://node02:8080/
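Each Master's role (ALIVE vs. STANDBY) can also be read from the WebUI's JSON endpoint at `http://<host>:8080/json`. A sketch using an illustrative response fragment (not captured from a real cluster):

```shell
# Illustrative fragment of the Master WebUI /json response; the "status"
# field is ALIVE on the active Master and STANDBY on the other.
json='{"url":"spark://node02:7077","status":"ALIVE"}'
printf '%s\n' "$json" | grep -o '"status":"[A-Z]*"'
# on a live cluster: curl -s http://node02:8080/json | grep -o '"status":"[A-Z]*"'
```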