Building a Standalone-HA Spark environment

Principle

In Standalone-HA mode, several Masters run at the same time and register with ZooKeeper. ZooKeeper elects one of them as the active (ALIVE) Master and keeps the others in STANDBY, while cluster state (workers, running applications, drivers) is persisted under a ZooKeeper znode. When the active Master goes down, a standby is elected and recovers that state, so the cluster keeps working.

(Figure: Standalone-HA architecture diagram.)
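Once the cluster from the steps below is up, you can look at the election data directly in ZooKeeper. A minimal sketch, assuming zkCli.sh is on the PATH and the /spark-ha directory configured below:

# list the znodes the Masters create for leader election and state recovery
zkCli.sh -server node01:2181 ls /spark-ha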

Operation

  1. Start ZooKeeper.
  2. Edit the Spark configuration on node01 (a non-interactive version of this edit is sketched after the snippet):
    vim /export/server/spark/conf/spark-env.sh

Comment out: #SPARK_MASTER_HOST=node01
Add:
SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=node01:2181,node02:2181,node03:2181 -Dspark.deploy.zookeeper.dir=/spark-ha"
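Here spark.deploy.recoveryMode=ZOOKEEPER turns on ZooKeeper-based recovery, spark.deploy.zookeeper.url lists the ZooKeeper ensemble, and spark.deploy.zookeeper.dir is the znode where the Masters store election and recovery data. A minimal sketch of the same edit done non-interactively (it assumes the stock spark-env.sh at this path; review before running):

# comment out the fixed master host, then append the ZooKeeper recovery options
sed -i 's/^SPARK_MASTER_HOST=node01/#SPARK_MASTER_HOST=node01/' /export/server/spark/conf/spark-env.sh
cat >> /export/server/spark/conf/spark-env.sh <<'EOF'
SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=node01:2181,node02:2181,node03:2181 -Dspark.deploy.zookeeper.dir=/spark-ha"
EOF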

  3. Distribute the configuration to node02 and node03 (a loop version is sketched after the commands):
cd /export/server/spark/conf

scp -r spark-env.sh root@node02:$PWD

scp -r spark-env.sh root@node03:$PWD
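The same distribution as a loop, as a sketch (assumes passwordless ssh for root to both hosts):

for host in node02 node03; do
  scp spark-env.sh root@$host:$PWD    # $PWD is still /export/server/spark/conf
done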

Test

  1. Make sure the ZooKeeper service is running on every node (check its status first, restarting it if needed; a loop that checks all three nodes at once is sketched below):
zkServer.sh status

zkServer.sh stop

zkServer.sh start
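A sketch that checks all three nodes in one pass (assumes ssh access and zkServer.sh on each node's PATH); one node should report Mode: leader and the others Mode: follower:

for host in node01 node02 node03; do
  echo "== $host =="
  ssh $host 'zkServer.sh status'
done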
  2. Start the Spark cluster from node01 (a check of what came up is sketched below):
    /export/server/spark/sbin/start-all.sh
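To confirm the daemons started, a sketch that lists the Spark JVMs on each node (assumes ssh access; expect Master on node01 and Worker on the worker nodes):

for host in node01 node02 node03; do
  echo "== $host =="
  ssh $host 'jps | grep -E "Master|Worker"'
done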

  3. Start a second, standby Master on node02 only (a log check is sketched below):
    /export/server/spark/sbin/start-master.sh
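The new Master should come up in standby. As a sketch, tail its log on node02 to confirm (the default $SPARK_HOME/logs location and file name pattern are assumed):

ssh node02 'tail -n 20 /export/server/spark/logs/spark-*Master*.out'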

  4. View the WebUIs (a shell version of this check is sketched below):
    http://node01:8080/ (this Master should show Status: ALIVE)
    http://node02:8080/ (this Master should show Status: STANDBY)
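The same check from the shell, as a sketch (it assumes your Spark version's standalone Master UI serves a /json endpoint):

# each Master reports its own status (ALIVE or STANDBY) in the JSON payload
curl -s http://node01:8080/json | grep '"status"'
curl -s http://node02:8080/json | grep '"status"'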

  5. Simulate node01 going down: list the JVMs with jps, then kill the Master process (a one-line version is sketched below).

jps

kill -9 <Master process id>
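A one-line version of the same thing, run on node01 (destructive; assumes jps reports the process name as Master):

# find the Master JVM's pid and kill it
kill -9 $(jps | awk '$2 == "Master" {print $1}')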

  6. View the WebUIs again (follow-up sketches below):
    http://node01:8080/ (no longer reachable)
    http://node02:8080/ (after failover completes, this Master shows Status: ALIVE; the switch can take a minute or two)
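Two follow-up sketches: the first polls node02 until the failover finishes, the second points an application at both Masters so it can follow a failover (the /json endpoint and the comma-separated master list are standalone-mode features; the paths are taken from this article's layout):

# poll node02 until its Master reports ALIVE
until curl -s http://node02:8080/json | grep -q '"ALIVE"'; do sleep 5; done
echo "node02 Master is now ALIVE"

# list both Masters in the master URL so applications survive a failover
/export/server/spark/bin/spark-shell --master spark://node01:7077,node02:7077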

Source: blog.csdn.net/zh2475855601/article/details/114885419