Spark HA

Original content. When reposting, please credit the source: http://agilestyle.iteye.com/blog/2294076

 

Prerequisites

The ZooKeeper cluster is already set up.

Scala is installed and configured. The environment variables look like this:

export JAVA_HOME=/home/hadoop/app/jdk1.8.0_77
export HADOOP_HOME=/home/hadoop/app/hadoop-2.6.4
export HIVE_HOME=/home/hadoop/app/apache-hive-1.2.1-bin
export HBASE_HOME=/home/hadoop/app/hbase-1.1.4
export STORM_HOME=/home/hadoop/app/apache-storm-1.0.0
export ZOOKEEPER_HOME=/home/hadoop/app/zookeeper-3.4.8  
export SCALA_HOME=/home/hadoop/app/scala-2.11.8
export SPARK_HOME=/home/hadoop/app/spark-1.6.1-bin-hadoop2.6
export MVN_HOME=/home/hadoop/app/apache-maven-3.3.9
export PATH=$PATH:${JAVA_HOME}/bin:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:${HIVE_HOME}/bin:${HBASE_HOME}/bin:${STORM_HOME}/bin:${ZOOKEEPER_HOME}/bin:${SCALA_HOME}/bin:${SPARK_HOME}/bin:${MVN_HOME}/bin
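To pick these up in the current shell (a sketch, assuming the exports live in ~/.bashrc; adjust if you keep them elsewhere):

source ~/.bashrc
java -version      # should report 1.8.0_77
scala -version     # should report 2.11.8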

Download and extract spark-1.6.1-bin-hadoop2.6.tgz (http://spark.apache.org/downloads.html).

This setup uses Spark's "Standby Masters with ZooKeeper" recovery mode.

 

Installation Steps

Copy spark-env.sh.template in the conf directory:

cp spark-env.sh.template spark-env.sh


 

Edit spark-env.sh:

vi spark-env.sh

Add the following:

export JAVA_HOME=/home/hadoop/app/jdk1.8.0_77
export SCALA_HOME=/home/hadoop/app/scala-2.11.8
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=hadoop-0000:2181,hadoop-0001:2181,hadoop-0002:2181 -Dspark.deploy.zookeeper.dir=/spark"
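These three settings enable ZooKeeper-based recovery: spark.deploy.recoveryMode=ZOOKEEPER makes the masters persist worker, driver, and application state in ZooKeeper and elect a leader through it; spark.deploy.zookeeper.url lists the ZooKeeper ensemble to connect to; spark.deploy.zookeeper.dir is the znode under which the recovery state is stored (/spark is also the default).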


 


Copy slaves.template in the conf directory:

cp slaves.template slaves

 

Edit slaves to list the worker hosts:

vi slaves
hadoop-0000
hadoop-0001
hadoop-0002


 

Save and exit, then scp the Spark directory to the other two nodes (hadoop-0000 and hadoop-0001):

scp -r spark-1.6.1-bin-hadoop2.6/ hadoop-0000:/home/hadoop/app/
scp -r spark-1.6.1-bin-hadoop2.6/ hadoop-0001:/home/hadoop/app/
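Note that the exported variables from the prerequisites must also exist on the other two nodes; if they are kept in ~/.bashrc (an assumption here), something like this works:

scp ~/.bashrc hadoop-0000:~/
scp ~/.bashrc hadoop-0001:~/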

 

Startup

First, start the ZooKeeper cluster (on every node):

zkServer.sh start 

Check the status: there should be one leader and two followers.
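For example (output format per ZooKeeper 3.4.x):

zkServer.sh status
# one node reports:        Mode: leader
# the other two report:    Mode: follower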

 

Then cd into Spark's sbin directory and start the cluster:

./start-all.sh
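start-all.sh launches a Master on the node it is run from and, over SSH, a Worker on every host listed in conf/slaves; since the ALIVE Master below is hadoop-0000, the script was evidently run there, giving one Master plus three Workers.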



Run jps on each of the three nodes (hadoop-0000, hadoop-0001, hadoop-0002) to check which daemons are running. (jps screenshots for the three nodes omitted.)
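As a rough sketch (assuming start-all.sh was run on hadoop-0000, which matches the ALIVE Master below), jps should show:

# hadoop-0000: QuorumPeerMain, Master, Worker
# hadoop-0001: QuorumPeerMain, Worker   (a second Master is added in the next step)
# hadoop-0002: QuorumPeerMain, Worker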

Next, to get actual HA, start a second Master on hadoop-0001.
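A sketch of the command, using the standard script from Spark's sbin directory:

# on hadoop-0001
cd /home/hadoop/app/spark-1.6.1-bin-hadoop2.6/sbin
./start-master.sh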


 

Finally, open the Master's Spark Web UI at http://hadoop-0000:8080; its status shows ALIVE.



 

Open the other Master's Spark Web UI at http://hadoop-0001:8080; its status shows STANDBY.
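To verify that failover actually works (a sketch, not part of the original walkthrough), stop the active Master and watch the standby take over; applications should likewise list both masters so they can reconnect:

# on hadoop-0000: stop the active Master
./stop-master.sh
# after the ZooKeeper session times out (typically one to two minutes),
# the hadoop-0001 UI flips from STANDBY to ALIVE

# point applications at both masters so they survive failover:
spark-shell --master spark://hadoop-0000:7077,hadoop-0001:7077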



 