Spark 2.4 Standalone Deployment

1 Install Spark

Download Spark with the following command:

wget http://mirrors.hust.edu.cn/apache/spark/spark-2.4.0/spark-2.4.0-bin-hadoop2.7.tgz
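Apache mirrors keep only the most recent releases, so the mirror URL above may stop working over time. Older versions remain on the Apache archive server, which follows the same directory layout:

wget https://archive.apache.org/dist/spark/spark-2.4.0/spark-2.4.0-bin-hadoop2.7.tgz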

Extract the tgz package:

tar zxvf spark-2.4.0-bin-hadoop2.7.tgz

Edit ~/.bashrc, append the exports below, then reload the file so they take effect:

vim ~/.bashrc

export SPARK_HOME=$HOME/spark-2.4.0-bin-hadoop2.7
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin

source ~/.bashrc
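With the PATH updated, the Spark launcher scripts resolve from anywhere. A quick sanity check (this assumes a JDK is already installed, since the scripts need java to run) is to ask for the version, which should print the same 2.4.0 banner shown further below:

spark-submit --version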

Create spark-env.sh from its template and open it:

cd $SPARK_HOME/conf
cp spark-env.sh.template spark-env.sh
vim spark-env.sh

Add the following variables, adjusting the paths and address to your environment:

export JAVA_HOME=/usr/java/jdk1.8.0_191-amd64      # path to your JDK
export SCALA_HOME=/usr/java/scala-2.11.8           # path to your Scala installation
export SPARK_HOME=/root/spark-2.4.0-bin-hadoop2.7  # where you extracted Spark
# Address the master binds to. SPARK_MASTER_IP is the deprecated 1.x name;
# Spark 2.x calls this SPARK_MASTER_HOST (see spark-env.sh.template).
export SPARK_MASTER_HOST=** your ip **
export SPARK_EXECUTOR_MEMORY=1G                    # memory per executor (default: 1G)
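Other commonly used settings are documented in spark-env.sh.template. For example, to cap how much of the machine each worker offers to executors (by default a worker advertises all its cores and all memory minus 1 GB), something like:

export SPARK_WORKER_CORES=2    # CPU cores each worker offers to executors
export SPARK_WORKER_MEMORY=2G  # memory each worker offers to executors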

2 Deploy Standalone Spark

Start the master and workers. start-all.sh launches a master on this host plus one worker per entry in conf/slaves (without that file it defaults to a single worker on localhost); it reaches each worker host over SSH, so passwordless SSH, even to localhost, is usually required:

cd $SPARK_HOME
./sbin/start-all.sh
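Both daemons are plain JVM processes, so jps (shipped with the JDK) gives a quick confirmation that they came up:

jps
# output should include lines like:
# 2345 Master
# 3456 Worker

The master's web UI should also be reachable at http://** your ip **:8080, its default port. If either process is missing, the log files under $SPARK_HOME/logs record the startup error.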

Check that it started successfully by launching the Spark shell:

./bin/spark-shell

Output like the following indicates success:

2018-12-26 11:20:33 WARN  NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://spark-2:4040
Spark context available as 'sc' (master = local[*], app id = local-1545841253565).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.4.0
      /_/
         
Using Scala version 2.11.12 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_191)
Type in expressions to have them evaluated.
Type :help for more information.

scala> 
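Note that the banner above reports master = local[*]: started with no arguments, spark-shell runs in local mode and never contacts the standalone cluster. To exercise the cluster itself, point the shell at the master (7077 is the standalone master's default port):

./bin/spark-shell --master spark://** your ip **:7077

Once connected, a one-line smoke test at the prompt confirms that jobs run; the sum of 1 to 1000 should come back as 500500.0, and the application should appear under "Running Applications" on the master UI at port 8080:

scala> sc.parallelize(1 to 1000).sum()
res0: Double = 500500.0

When you are done, shut the cluster down with the matching script:

cd $SPARK_HOME
./sbin/stop-all.sh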

Reposted from www.cnblogs.com/zhance/p/10182764.html