CentOS single-node Spark big data environment deployment (Part 2) - Installing Spark

1. Upload the Scala and Spark packages

Download Scala from the official site: https://www.scala-lang.org/download/

Download Spark from the official site: http://spark.apache.org/downloads.html

2. Extract the packages and rename the directories

#scala
tar -zxf scala-2.12.7.tgz
mv scala-2.12.7 scala

#spark
tar -zxf spark-2.3.2-bin-hadoop2.7.tgz 
mv spark-2.3.2-bin-hadoop2.7 spark
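The extract-and-rename pattern above can be rehearsed end-to-end in a scratch directory; the tarball below is a stand-in built on the spot, not the real Scala release:

```shell
set -e
# Work in a throwaway directory so nothing touches the real /home layout.
tmp=$(mktemp -d)
cd "$tmp"

# Fake a versioned release directory and pack it, standing in for
# scala-2.12.7.tgz (the real download from scala-lang.org).
mkdir -p scala-2.12.7/bin
echo 'demo' > scala-2.12.7/bin/scala
tar -czf scala-2.12.7.tgz scala-2.12.7
rm -r scala-2.12.7

# Same commands as the tutorial: -z gunzip, -x extract, -f archive file.
tar -zxf scala-2.12.7.tgz
# Dropping the version suffix keeps paths like /home/scala stable
# across upgrades, so /etc/profile never needs editing again.
mv scala-2.12.7 scala

ls scala/bin
```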

3. Configure the Scala and Spark environment variables

vi /etc/profile
#Shift+G jumps to the last line
#scala
export SCALA_HOME=/home/scala
export PATH=${SCALA_HOME}/bin:$PATH
#spark
export SPARK_HOME=/home/spark
export PATH=${SPARK_HOME}/bin:$PATH

4. Load the environment variables

source /etc/profile
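What `source` does here can be demonstrated with a throwaway profile fragment (the temp file stands in for /etc/profile): sourcing runs the file in the current shell, so the exported variables persist afterwards, unlike running it as a separate process.

```shell
# Write export lines like the ones added to /etc/profile into a temp file.
profile=$(mktemp)
cat > "$profile" <<'EOF'
export SCALA_HOME=/home/scala
export PATH=${SCALA_HOME}/bin:$PATH
EOF

# `.` is the portable spelling of `source`: the file runs in the
# current shell, so SCALA_HOME and the new PATH survive this line.
. "$profile"

echo "SCALA_HOME=$SCALA_HOME"
echo "$PATH" | grep -q "/home/scala/bin" && echo "PATH updated"
```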

5. Enter the spark conf directory and copy the template files

cd /home/spark/conf
cp spark-env.sh.template spark-env.sh
cp slaves.template slaves
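The template-copy step can be simulated in a scratch directory (the one-line template contents below are stand-ins for the real files shipped in spark/conf): copying instead of editing the templates keeps a pristine reference to fall back on.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Stand-ins for the templates shipped in spark/conf.
echo '# Options read when launching Spark daemons' > spark-env.sh.template
echo '# A Spark Worker will be started on each host' > slaves.template

# Same copies as the tutorial; the .template originals stay untouched.
cp spark-env.sh.template spark-env.sh
cp slaves.template slaves

# Append a local setting, as step 6 does by hand in vi.
echo 'localhost' >> slaves
```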

6. Add the configuration

vi slaves
#add localhost or the server IP

vi spark-env.sh

#java
export JAVA_HOME=/home/jdk

#scala
export SCALA_HOME=/home/scala

#Spark master node IP
export SPARK_MASTER_IP=<server IP>

#Spark master node port
export SPARK_MASTER_PORT=7077

7. Turn off the firewall

[root@localhost]# systemctl stop firewalld.service      //stop the firewall service
[root@localhost]# systemctl disable firewalld.service   //keep the firewall service from starting at boot

8. Enter the spark/sbin directory and start Spark

cd /home/spark/sbin
./start-all.sh
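A quick way to confirm the daemons came up is to look for Master and Worker entries in `jps` output. A minimal check sketch, where the `check_daemons` helper and the sample listing are hypothetical (on the real machine you would pass it `"$(jps)"`):

```shell
# Succeed only if both a Master and a Worker line are present.
check_daemons() {
  echo "$1" | grep -q "Master" && echo "$1" | grep -q "Worker"
}

# On the Spark server:  check_daemons "$(jps)"
# Here a sample jps-style listing stands in for a live cluster.
sample="1234 Master
5678 Worker
9012 Jps"

if check_daemons "$sample"; then
  echo "spark daemons present"
fi
```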

9. In a browser, open http://<server IP>:8080; if the Spark Master web UI appears, the installation was successful.


Origin www.cnblogs.com/yuyang81577/p/11413677.html