Ten. Scala and Spark Cluster Setup

Spark cluster setup:
1. Upload scala-2.10.6.tgz to the Master node.
2. Extract scala-2.10.6.tgz.
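A sketch of steps 1–2, assuming the tarball is uploaded to /mnt (the directory SCALA_HOME points to in step 3):
cd /mnt
tar -zxvf scala-2.10.6.tgz    # produces /mnt/scala-2.10.6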
3. Configure the environment variables in /etc/profile:
export SCALA_HOME=/mnt/scala-2.10.6
export PATH=$PATH:$SCALA_HOME/bin
4. Copy scala-2.10.6 and /etc/profile to slave01 and slave02:
scp -r scala-2.10.6 root@slave01:/mnt/
scp /etc/profile root@slave01:/etc/profile
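The matching copies for slave02, which step 4 calls for but the original commands omit:
scp -r scala-2.10.6 root@slave02:/mnt/
scp /etc/profile root@slave02:/etc/profile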
5. Run source /etc/profile on each node, then test by entering scala.
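A quick verification sketch, assuming /etc/profile now contains the variables from step 3:
source /etc/profile
scala -version    # should report version 2.10.6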
6. Configure Spark.
Upload spark-1.6.3-bin-hadoop2.6.tgz and extract it.
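A sketch of the extraction, assuming the tarball was uploaded to /yangfengbing (the directory the next command enters):
tar -zxvf spark-1.6.3-bin-hadoop2.6.tgz -C /yangfengbing/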
Enter /yangfengbing/spark-1.6.3-bin-hadoop2.6/conf:
mv spark-env.sh.template spark-env.sh
Configure spark-env.sh:
Explanation:
JAVA_HOME specifies the Java installation directory;
SCALA_HOME specifies the Scala installation directory;
SPARK_MASTER_IP specifies the IP address of the Spark cluster's Master node;
SPARK_WORKER_MEMORY specifies the maximum memory a Worker node may allocate to Executors;
HADOOP_CONF_DIR specifies the Hadoop cluster configuration directory.

export JAVA_HOME=/mnt/jdk1.7.0_80
export SCALA_HOME=/mnt/scala-2.10.6
export SPARK_MASTER_IP=Master
export SPARK_WORKER_MEMORY=2G
export HADOOP_CONF_DIR=/mnt/hadoop-2.6.5/etc/hadoop
7. In /yangfengbing/spark-1.6.3-bin-hadoop2.6/conf, locate the slaves file:
mv slaves.template slaves
Configure it as:
Master
slave01
slave02
Every host listed in slaves runs a Worker daemon, so the Master node here also acts as a Worker.
8. Configure the Spark environment variables in /etc/profile:
export SPARK_HOME=/mnt/spark-1.6.3-bin-hadoop2.6
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin
9. Copy Spark and /etc/profile to the slave nodes:
scp -r spark-1.6.3-bin-hadoop2.6 root@slave01:/mnt/
scp /etc/profile root@slave01:/etc/profile
source /etc/profile
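As in step 4, the matching copies for slave02:
scp -r spark-1.6.3-bin-hadoop2.6 root@slave02:/mnt/
scp /etc/profile root@slave02:/etc/profile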
10. Start the cluster.
1) Start the Master node:
Run Spark's start-master.sh to start the Master service; the Worker services on the three nodes listed in slaves are started with start-slaves.sh.
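A minimal startup sketch, assuming the environment variables above are loaded (start-all.sh is the Spark distribution's script that runs start-master.sh and start-slaves.sh together; the expected jps output is an assumption based on the slaves file above):
$SPARK_HOME/sbin/start-all.sh    # starts the Master, then a Worker on Master, slave01, slave02
jps                              # on the Master node this should list both Master and Worker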

Open 192.168.200.200:8080 in a browser to view the Master web UI.
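An optional smoke test, assuming the standalone Master URL is spark://Master:7077 (7077 is Spark's default standalone port; the hostname matches SPARK_MASTER_IP above):
spark-shell --master spark://Master:7077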