Spark 2.x High Availability Configuration Based on ZooKeeper

  

  This builds on the previous article, Spark 2.x installation and configuration: http://liumangafei.iteye.com/blog/2322672

 

  1. Modify spark-env.sh

 

export SCALA_HOME=/usr/scala/scala-2.11.8
export JAVA_HOME=/usr/java/jdk1.8.0_91
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=hadoop2:2181,hadoop3:2181,hadoop4:2181 -Dspark.deploy.zookeeper.dir=/spark"
export SPARK_WORKER_MEMORY=1g
export HADOOP_CONF_DIR=/usr/hadoop/hadoop-2.6.4/etc/hadoop
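The three `-D` options in `SPARK_DAEMON_JAVA_OPTS` are what enable ZooKeeper-based recovery: `spark.deploy.recoveryMode=ZOOKEEPER` turns it on, `spark.deploy.zookeeper.url` lists the ZooKeeper ensemble, and `spark.deploy.zookeeper.dir` is the znode under which the master persists its state. A minimal sketch of assembling that value from variables, so the ensemble list is defined in one place (hostnames as in this article):

```shell
# Sketch: build SPARK_DAEMON_JAVA_OPTS from variables (hosts from this article).
ZK_QUORUM="hadoop2:2181,hadoop3:2181,hadoop4:2181"
ZK_DIR="/spark"   # znode under which the master stores recovery state

SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER"
SPARK_DAEMON_JAVA_OPTS="$SPARK_DAEMON_JAVA_OPTS -Dspark.deploy.zookeeper.url=$ZK_QUORUM"
SPARK_DAEMON_JAVA_OPTS="$SPARK_DAEMON_JAVA_OPTS -Dspark.deploy.zookeeper.dir=$ZK_DIR"
export SPARK_DAEMON_JAVA_OPTS
echo "$SPARK_DAEMON_JAVA_OPTS"
```

The same line must be present in `spark-env.sh` on every machine that can run a master, or the standby will not find the recovery state.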

 

  2. Start all services

 

  

sbin/start-all.sh   # starts the master on this node and the workers listed in conf/slaves

 

  Then start a standby master on another machine:

 

sbin/start-master.sh

 

  3. Test for high availability

 

  Open the web UIs on port 8080 of the two masters and confirm both came up: one should report status ALIVE and the other STANDBY.
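Besides the HTML page, the standalone master's web UI serves the same state as JSON at `http://<master>:8080/json/` (endpoint name per the standalone master UI; verify it on your version), where a `"status"` field reads `ALIVE` or `STANDBY`. A sketch of extracting that field; here the response is a canned sample, and in a live cluster you would replace it with `curl -s http://hadoop1:8080/json/` (hostname is an example, not from this article):

```shell
# Sample of the master's /json/ response (trimmed); in a live cluster use:
#   sample=$(curl -s http://hadoop1:8080/json/)    # hostname is an example
sample='{"url":"spark://hadoop1:7077","workers":[],"aliveworkers":0,"status":"ALIVE"}'

# Pull out the "status" field with sed (avoids assuming jq is installed).
status=$(printf '%s' "$sample" | sed -n 's/.*"status" *: *"\([A-Z]*\)".*/\1/p')
echo "$status"   # → ALIVE
```

Note that for applications to survive a failover, the master URL passed to `spark-submit` should list both masters, e.g. `spark://master1:7077,master2:7077` (placeholder hostnames), so the driver can reconnect to whichever master is ALIVE.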

  Stop the ALIVE master and wait a few dozen seconds (the synchronization delay is annoying, but expected); you will then see the STANDBY master become ALIVE.

 

 

sbin/start-master.sh   # start a master on this node
sbin/stop-master.sh    # stop the master on this node
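The failover test above can be scripted instead of eyeballed: poll the standby master's status until it reports ALIVE, with a timeout. A sketch using a generic polling helper; the commented-out `curl` line is what you would actually poll in a live cluster (hostname is an example), while the demonstration below uses a trivially-true command:

```shell
# wait_until <timeout_seconds> <command...> : retry a command once per second
# until it succeeds or the timeout elapses.
wait_until() {
  t=$1; shift
  while [ "$t" -gt 0 ]; do
    if "$@"; then return 0; fi
    sleep 1
    t=$((t - 1))
  done
  return 1   # timed out: the standby never became ALIVE
}

# In a live cluster the polled command would be something like:
#   wait_until 120 sh -c 'curl -s http://hadoop2:8080/json/ | grep -q "\"status\" *: *\"ALIVE\""'
# Trivial demonstration with a command that always succeeds:
wait_until 5 true && echo "failover complete"
```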
 

 
