Principle

Operating steps
- Start ZooKeeper
- Change setting
vim /export/server/spark/conf/spark-env.sh
Comment out: #SPARK_MASTER_HOST=node01
Add:
SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=node01:2181,node02:2181,node03:2181 -Dspark.deploy.zookeeper.dir=/spark-ha"
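The three -D options above configure ZooKeeper-based HA: spark.deploy.recoveryMode selects ZooKeeper recovery, spark.deploy.zookeeper.url lists the quorum, and spark.deploy.zookeeper.dir is the znode under which Spark keeps election and recovery state. Once the cluster is running, that znode can be inspected (a sketch; the exact child znode names depend on the Spark version):

```shell
# Inspect the HA state Spark stores in ZooKeeper (path taken from the config above)
zkCli.sh -server node01:2181 ls /spark-ha
```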
- Distribute the configuration
cd /export/server/spark/conf
scp -r spark-env.sh root@node02:$PWD
scp -r spark-env.sh root@node03:$PWD
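The two scp lines can be collapsed into a loop; a sketch, assuming the same install path and root ssh access on every node:

```shell
# Copy the edited spark-env.sh to each of the other nodes
for host in node02 node03; do
  scp -r /export/server/spark/conf/spark-env.sh root@"$host":/export/server/spark/conf/
done
```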
Test
- Start the ZooKeeper service on every node
zkServer.sh status
zkServer.sh stop
zkServer.sh start
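ZooKeeper must be up on all three nodes before the Masters start. Checking each node by hand works, but it can also be scripted; a sketch, assuming passwordless ssh and zkServer.sh on the remote PATH:

```shell
# Report ZooKeeper status on every node in one pass
for host in node01 node02 node03; do
  echo "== $host =="
  ssh root@"$host" "zkServer.sh status"
done
```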
- Start the Spark cluster on node01
/export/server/spark/sbin/start-all.sh
- Separately start only a standby Master on node02
/export/server/spark/sbin/start-master.sh
- View the WebUI
http://node01:8080/
http://node02:8080/
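On the two WebUIs, one Master should show Status: ALIVE and the other Status: STANDBY. The Master UI also serves the same information as JSON, which is easier to check from a shell (a sketch; the field layout may vary by Spark version):

```shell
# Pull the status field from each Master's JSON endpoint
curl -s http://node01:8080/json/ | grep -o '"status" *: *"[A-Z]*"'
curl -s http://node02:8080/json/ | grep -o '"status" *: *"[A-Z]*"'
```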
- Simulate node01 going down
jps
kill -9 <Master process id>
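Instead of reading the pid off the jps listing by eye, the lookup can be scripted, since jps prints one `<pid> <class>` pair per line; a sketch:

```shell
# Find the Master pid in jps output and kill it to trigger failover
pid=$(jps | grep -w Master | awk '{print $1}')
kill -9 "$pid"
```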
- View the WebUI again
http://node01:8080/
http://node02:8080/