Spark 2.x High Availability with ZooKeeper

  

This builds on the previous post, Spark 2.x installation and configuration: http://liumangafei.iteye.com/blog/2322672

1. Edit spark-env.sh

export SCALA_HOME=/usr/scala/scala-2.11.8
export JAVA_HOME=/usr/java/jdk1.8.0_91
# Enable ZooKeeper-based master recovery: the recovery mode, the ZooKeeper
# quorum to coordinate through, and the znode under which Spark keeps its
# recovery state.
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=hadoop2:2181,hadoop3:2181,hadoop4:2181 -Dspark.deploy.zookeeper.dir=/spark"
export SPARK_WORKER_MEMORY=1g
export HADOOP_CONF_DIR=/usr/hadoop/hadoop-2.6.4/etc/hadoop
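Before starting the cluster, it can help to confirm that every node in `spark.deploy.zookeeper.url` is actually reachable. A minimal sketch using `nc` and ZooKeeper's built-in `ruok` health probe (note this is an illustration, not part of the original setup; ZooKeeper 3.5+ only answers `ruok` if it is whitelisted via `4lw.commands.whitelist`):

```shell
# Probe each ZooKeeper node from spark.deploy.zookeeper.url; a healthy
# server answers the "ruok" four-letter word with "imok".
ZK_URL="hadoop2:2181,hadoop3:2181,hadoop4:2181"

check_zk() {                       # usage: check_zk HOST PORT
  local reply
  reply=$(echo ruok | nc -w 2 "$1" "$2" 2>/dev/null || true)
  if [ "$reply" = "imok" ]; then
    echo "$1: ok"
  else
    echo "$1: unreachable"
  fi
}

# Split the comma-separated URL into host:port pairs and check each one.
for hp in $(echo "$ZK_URL" | tr ',' ' '); do
  check_zk "${hp%%:*}" "${hp##*:}"
done
```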

2. Start all services

  

sbin/start-all.sh   # starts the Master on this node and its Workers
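To sanity-check that the daemons actually came up, one option is `jps` (ships with the JDK set in `JAVA_HOME` above); this is a side check, not a step from the original guide:

```shell
# List the Spark JVM daemons: the node that ran start-all.sh should show
# a Master process, and each slave node should show a Worker.
if command -v jps >/dev/null 2>&1; then
  jps | grep -E 'Master|Worker' || echo "no Spark daemons running"
else
  echo "jps not on PATH"
fi
```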

Then start a standby Master on the other machine:

sbin/start-master.sh
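Instead of eyeballing the web UIs, the masters' states can also be read from the master UI's `/json` endpoint (served on the same port as the UI, 8080 by default). A small sketch; the hostnames `hadoop1` and `hadoop2` are assumptions here, substitute your two master nodes:

```shell
# Print a master's recovery state (ALIVE or STANDBY) by extracting the
# "status" field from the master web UI's /json endpoint.
master_state() {                   # usage: master_state HOST
  curl -s --max-time 5 "http://$1:8080/json" 2>/dev/null \
    | sed -n 's/.*"status" *: *"\([A-Z]*\)".*/\1/p'
}

# Example (hadoop1/hadoop2 assumed to be the two masters):
#   for m in hadoop1 hadoop2; do echo "$m: $(master_state "$m")"; done
# Expect one ALIVE and one STANDBY.
```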

3. Test the failover

Check port 8080 on both Masters' web UIs to confirm they came up: one should be ALIVE and the other STANDBY.

Stop the ALIVE Master and wait a few tens of seconds (the recovery sync is frustratingly slow); the STANDBY Master should then switch to ALIVE.

sbin/start-master.sh  # start a Master
sbin/stop-master.sh   # stop a Master
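The failover check above can be automated: stop the active Master, then poll the standby until ZooKeeper's leader election promotes it. A sketch, assuming the standby's hostname is passed in; the timeout covers the tens-of-seconds delay noted above:

```shell
# Poll a master's /json endpoint until it reports ALIVE, or give up
# after a timeout (default 120 s).
wait_for_alive() {                 # usage: wait_for_alive HOST [TIMEOUT_SECS]
  local timeout=${2:-120}
  local deadline=$(( $(date +%s) + timeout ))
  while [ "$(date +%s)" -lt "$deadline" ]; do
    if curl -s --max-time 5 "http://$1:8080/json" 2>/dev/null \
         | grep -q '"status" *: *"ALIVE"'; then
      echo "$1 is now ALIVE"
      return 0
    fi
    sleep 5
  done
  echo "$1 did not become ALIVE within ${timeout}s"
  return 1
}

# Example, after running sbin/stop-master.sh on the active node
# (hadoop2 assumed to be the standby):
#   wait_for_alive hadoop2
```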
 

Reprinted from liumangafei.iteye.com/blog/2323564