Spark 2.4.0 High-Availability Installation and Configuration for a Big Data Cluster

1. Installation Preparation

Download: https://archive.apache.org/dist/spark/

Official documentation: http://spark.apache.org/docs/latest/

2. Extract and Install

1. Extract the archive

cd /usr/local/hadoop
tar zxpf spark-2.4.0-bin-hadoop2.7.tgz

2. Create a symlink

ln -s spark-2.4.0-bin-hadoop2.7 spark
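
Keeping the versioned directory and pointing a stable symlink at it makes a later upgrade a one-line repoint. A minimal sketch of the pattern using throwaway names under /tmp (the real install uses /usr/local/hadoop and the extracted spark-2.4.0-bin-hadoop2.7 directory):

```shell
# Demonstrate the versioned-dir + stable-symlink pattern with scratch names.
mkdir -p /tmp/spark-demo/spark-2.4.0-bin-hadoop2.7
cd /tmp/spark-demo
ln -sfn spark-2.4.0-bin-hadoop2.7 spark   # -f lets a later upgrade repoint it
readlink spark                            # prints spark-2.4.0-bin-hadoop2.7
```

Because every other path in this guide references /usr/local/hadoop/spark, upgrading Spark later only means extracting the new tarball and repointing the link.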

3. Edit Configuration Files

slaves

hadoop001
hadoop002
hadoop003
hadoop004
hadoop005
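
The slaves file must list every worker hostname, one per line, and should be identical on all nodes. If your hosts follow the hadoopNNN naming pattern above, the file can be generated rather than typed by hand (a sketch; the /tmp output path is illustrative):

```shell
# Generate a slaves file for hosts hadoop001..hadoop005 (zero-padded);
# printf reuses the format string once per argument.
printf 'hadoop%03d\n' 1 2 3 4 5 > /tmp/slaves
cat /tmp/slaves
```

After generating it, copy the same file to conf/slaves on every node.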

spark-env.sh

export JAVA_HOME=/usr/java/jdk1.8
export SCALA_HOME=/usr/local/hadoop/scala
export MYSQL_HOME=/usr/local/mysql
export CLASSPATH=.:/usr/java/jdk1.8/lib/dt.jar:/usr/java/jdk1.8/lib/tools.jar
export SPARK_HOME=/usr/local/hadoop/spark
export HADOOP_HOME=/usr/local/hadoop/hadoop
export HBASE_HOME=/usr/local/hadoop/hbase
export GEOMESA_HBASE_HOME=/usr/local/hadoop/geomesa-hbase
export ZOO_HOME=/usr/local/hadoop/zookeeper

export SPARK_WORKER_MEMORY=4096m
# export SPARK_MASTER_IP=hadoop001   # leave unset: in HA mode the master is elected via ZooKeeper
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=hadoop001:2181,hadoop002:2181,hadoop003:2181 -Dspark.deploy.zookeeper.dir=/spark"
export HADOOP_CONF_DIR=/usr/local/hadoop/hadoop/etc/hadoop/
export YARN_CONF_DIR=/usr/local/hadoop/hadoop/etc/hadoop/
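
The SPARK_DAEMON_JAVA_OPTS line is what enables high availability: both masters register with the same ZooKeeper quorum, and the standby takes over if the active master dies. A quick sanity check that the quorum string is well-formed, using the hostnames from the configuration above:

```shell
# Count the host:port entries in the ZooKeeper URL; a 3-node quorum prints 3.
ZK_URL="hadoop001:2181,hadoop002:2181,hadoop003:2181"
echo "$ZK_URL" | tr ',' '\n' | grep -Ec '^[A-Za-z0-9]+:[0-9]+$'
```

All masters must use the exact same spark.deploy.zookeeper.url and spark.deploy.zookeeper.dir values, or they will not see each other's state.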

metrics.properties

*.sink.csv.directory=/home/spark/tmp/csv/

spark-defaults.conf

spark.local.dir					 /home/spark/tmp
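
Both spark.local.dir and the CSV metrics sink directory must exist, and be writable by the Spark user, on every node before startup, or the daemons will fail when they try to write there. A sketch using a stand-in base path (the real install uses /home/spark/tmp):

```shell
# Pre-create the scratch and metrics directories; -p makes this idempotent.
BASE=/tmp/spark-scratch          # stand-in for /home/spark/tmp
mkdir -p "$BASE/csv"
ls -d "$BASE/csv"
```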

4. Configure Environment Variables

Edit the /etc/profile file:

vim /etc/profile

Add the following:

export SPARK_HOME=/usr/local/hadoop/spark
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin

Then reload the file so the change takes effect:

source /etc/profile

5. Start Spark

On the primary master, run:

$SPARK_HOME/sbin/start-all.sh

On the standby master, run:

$SPARK_HOME/sbin/start-master.sh

6. Verify the Installation

Open the web UI on both masters; one should report Status: ALIVE and the other Status: STANDBY:

http://hadoop001:8080/

http://hadoop002:8080/


Reposted from blog.csdn.net/qq262593421/article/details/106962783