Configure HA in Spark Standalone Mode

Configuration environment: node1, node2, node3
The core configuration is added to spark-env.sh (on all three nodes):

export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=192.168.245.151:2181,192.168.245.163:2181,192.168.245.165:2181"

No ZooKeeper directory is specified here, so the default, /spark, is used.
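If you want the recovery data stored under a different ZooKeeper path, it can be set explicitly via spark.deploy.zookeeper.dir. A minimal sketch, where the /spark value merely spells out the default:

export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=192.168.245.151:2181,192.168.245.163:2181,192.168.245.165:2181 -Dspark.deploy.zookeeper.dir=/spark"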
After that, run start-all.sh on node1. The web UI shows node1 as the master. The problem appears when running start-master.sh on node2 to bring up a standby master: the startup fails. The log shows the cause is this line in node2's spark-env.sh: export SPARK_MASTER_IP=node1.
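For reference, the start sequence described above looks like this (script paths are the standalone defaults under $SPARK_HOME/sbin; the master web UI listens on port 8080 by default):

# on node1: start the master plus the workers listed in conf/slaves (conf/workers in Spark 3.x)
$SPARK_HOME/sbin/start-all.sh

# on node2: start a standby master; this is the step that fails while SPARK_MASTER_IP=node1 is set
$SPARK_HOME/sbin/start-master.sh

# check each master's state (ALIVE vs. STANDBY) at http://node1:8080 and http://node2:8080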
There are two solutions:
1. Change node1 to node2 in that line. By the same logic, if you want node3 to be a standby master as well, make the corresponding change there.
2. Comment out the line entirely; with no master host specified, the node can become the master itself (see the sketch below).
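A minimal sketch of node2's spark-env.sh under option 2 (under option 1 you would instead keep the line and set it to node2):

# export SPARK_MASTER_IP=node1   # commented out so this node can become master itself
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=192.168.245.151:2181,192.168.245.163:2181,192.168.245.165:2181"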
