Installing Spark 2.3.1 and Scala 2.12.6 on CentOS 7

1. Install Scala

1) Download Scala from the official site

> wget https://downloads.lightbend.com/scala/2.12.6/scala-2.12.6.tgz
2) Extract the archive

>tar -zxvf scala-2.12.6.tgz 

3) Update environment variables

>vim /etc/profile

export SCALA_HOME=/root/yao/scala-2.12.6
export PATH=$PATH:$SCALA_HOME/bin
> source /etc/profile

4) Verify the Scala installation

> scala -version

or

> scala

You should see the Scala REPL prompt:

scala>

Scala is installed successfully.
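A quick way to confirm the REPL picked up the new install is to ask it for its own version (a minimal sketch; paste at the `scala>` prompt):

```scala
// Prints the version of the running Scala REPL; if the /etc/profile
// change took effect, this should report 2.12.6.
val v = util.Properties.versionNumberString
println(v)
```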

2. Install Spark

1) Download the package

>wget https://archive.apache.org/dist/spark/spark-2.3.1/spark-2.3.1-bin-hadoop2.6.tgz
2) Extract the archive
>tar -xzvf spark-2.3.1-bin-hadoop2.6.tgz 

3) Update environment variables
>vim /etc/profile

export SPARK_HOME=/root/yao/spark-2.3.1-bin-hadoop2.6
export PATH=$PATH:$SPARK_HOME/bin
> source /etc/profile

4) Edit the configuration files

> cd /root/yao/spark-2.3.1-bin-hadoop2.6/conf

> cp spark-env.sh.template  spark-env.sh

>vim spark-env.sh
export JAVA_HOME=/root/yao/jdk1.8.0_131
export SCALA_HOME=/root/yao/scala-2.12.6
export HADOOP_HOME=/root/yao/hadoop-2.6.0-cdh5.9.3
export HADOOP_CONF_DIR=/root/yao/hadoop-2.6.0-cdh5.9.3/etc/hadoop
export SPARK_MASTER_IP=hadoop1
export SPARK_WORKER_MEMORY=2g
export SPARK_WORKER_CORES=2

(In Spark 2.x, SPARK_MASTER_IP is deprecated in favor of SPARK_MASTER_HOST; the old name still works, but the start scripts print a warning.)


> cp slaves.template  slaves

> vim slaves

Add the worker hostnames:

hadoop1
hadoop2
hadoop3

5) Copy the spark and scala directories to hadoop2 and hadoop3 (e.g. with scp), and update /etc/profile on each node in the same way.

6) Start Spark

> cd /root/yao/spark-2.3.1-bin-hadoop2.6

> ./sbin/start-all.sh

(Use the explicit ./sbin/ path: Spark's start-all.sh shares its name with Hadoop's.)

> ./bin/spark-shell
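Once spark-shell is up, a classic first job is a word count. The sketch below uses plain Scala collections (hypothetical sample data) so it runs anywhere; in spark-shell, the predefined `sc` runs the same shape of pipeline on a distributed RDD:

```scala
// Word count over two sample lines using plain Scala collections.
// In spark-shell, the equivalent RDD pipeline would be:
//   sc.parallelize(lines).flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
val lines = Seq("spark on centos7", "scala and spark")
val counts = lines
  .flatMap(_.split(" "))
  .groupBy(identity)
  .map { case (w, ws) => (w, ws.size) }
println(counts("spark"))  // 2
```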

7) Verify the processes

Run jps to check that everything started.

On hadoop1:

[root@hadoop1 ~]# jps
5681 Master
6178 SecondaryNameNode
6340 ResourceManager
6919 SparkSubmit
7529 Jps
5995 NameNode
5741 Worker
4958 MainGenericRunner
 

[root@hadoop2 ~]# jps
15360 MainGenericRunner
15585 DataNode
16564 Jps
15469 MainGenericRunner
15725 NodeManager
16029 Worker
15918 MainGenericRunner
 

[root@hadoop3 ~]# jps
15329 NodeManager
15594 Worker
15196 DataNode
16125 Jps
 

8) Check in a browser

hadoop1:8080 (Spark Master web UI)

hadoop1:4040 (application UI, available while spark-shell is running)

Reference: https://blog.csdn.net/zhangvalue/article/details/80653313


 


Reposted from blog.csdn.net/qq_33124081/article/details/82219910