Spark Cluster Deployment

Source: https://blog.csdn.net/wanbf123/article/details/82385207

1. Download Spark (Scala must be installed first; this guide uses Scala 2.11.7)

wget https://d3kbcqa49mib13.cloudfront.net/spark-2.2.0-bin-hadoop2.7.tgz
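Once the download completes, unpack the tarball and move it to the install path assumed in step 2 (a sketch; adjust if you install elsewhere):

tar -zxf spark-2.2.0-bin-hadoop2.7.tgz
mv spark-2.2.0-bin-hadoop2.7 /usr/local/spark-2.2.0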

2. Configure environment variables

vi /etc/profile
export SPARK_HOME=/usr/local/spark-2.2.0

export PATH=$PATH:$SPARK_HOME/bin
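Then reload the profile so the variables take effect in the current shell:

source /etc/profile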

3. Edit spark-env.sh
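A fresh Spark distribution only ships a template for this file, so copy it first (paths assume the install location from step 2):

cd /usr/local/spark-2.2.0/conf
cp spark-env.sh.template spark-env.sh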

vim spark-env.sh

export JAVA_HOME=/usr/java/jdk1.8.0_141

export SCALA_HOME=/usr/scala-2.11.7

export HADOOP_HOME=/usr/local/hadoop-2.7.2

export HADOOP_CONF_DIR=/usr/local/hadoop-2.7.2/etc/hadoop

export SPARK_MASTER_IP=SparkMaster

export SPARK_WORKER_MEMORY=4g

export SPARK_WORKER_CORES=2

export SPARK_WORKER_INSTANCES=1

Variable reference
- JAVA_HOME: Java installation directory
- SCALA_HOME: Scala installation directory
- HADOOP_HOME: Hadoop installation directory
- HADOOP_CONF_DIR: directory containing the Hadoop cluster's configuration files
- SPARK_MASTER_IP: hostname or IP address of the Spark master node (here the hostname SparkMaster)
- SPARK_WORKER_MEMORY: maximum total memory each worker node may allocate to executors
- SPARK_WORKER_CORES: number of CPU cores each worker node may use
- SPARK_WORKER_INSTANCES: number of worker instances started on each machine
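As a quick sanity check of how these values interact: with the settings above, each machine starts a single worker daemon that can hand at most 2 cores and 4 GB of memory to executors, so the two workers listed in step 4 offer 4 cores and 8 GB cluster-wide.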

4. List the worker nodes (in Spark 2.x the file is conf/slaves)

vi conf/slaves
SparkWorker1
SparkWorker2
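For these names to work, every node must resolve them, and the master must reach each worker over passwordless SSH. A minimal sketch, with hypothetical IP addresses; first add entries to /etc/hosts on every node:

192.168.1.100 SparkMaster
192.168.1.101 SparkWorker1
192.168.1.102 SparkWorker2

Then, on the master:

ssh-keygen -t rsa
ssh-copy-id SparkWorker1
ssh-copy-id SparkWorker2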

5. Start the cluster

Start HDFS first, then Spark. Note that Hadoop and Spark both ship a script named start-all.sh, so invoke Spark's via its sbin path:

start-dfs.sh
/usr/local/spark-2.2.0/sbin/start-all.sh
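If both scripts succeed, running jps on each node should list the expected JVMs: Master (plus Hadoop's NameNode) on SparkMaster, and Worker (plus DataNode) on each SparkWorker.

jps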

 

6. Launch spark-shell; running tasks can then be inspected in the Web UI

spark-shell
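Once the scala> prompt appears, a one-line smoke test (using the SparkContext that the shell provides as sc) generates a job you can watch in the UI; it should return 500500.0. The standalone master's Web UI listens on http://SparkMaster:8080 by default, and the running shell's application UI on http://SparkMaster:4040.

sc.parallelize(1 to 1000).sum()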

 
