Standalone mode
One, mode overview
Build a Spark cluster in a Master + Slave configuration; Spark applications run inside this cluster.
Two, installation and use
(1) Enter the conf folder under the spark installation directory
[atguigu@hadoop102 module]$ cd spark/conf/
(2) Rename the configuration file templates
[atguigu@hadoop102 conf]$ mv slaves.template slaves
[atguigu@hadoop102 conf]$ mv spark-env.sh.template spark-env.sh
(3) Modify the slaves file and add the worker nodes:
[atguigu@hadoop102 conf]$ vim slaves
hadoop102
hadoop103
hadoop104
(4) Modify the spark-env.sh file and add the following configuration:
[atguigu@hadoop102 conf]$ vim spark-env.sh
SPARK_MASTER_HOST=hadoop102
SPARK_MASTER_PORT=7077
(5) Distribute the spark directory
[atguigu@hadoop102 module]$ xsync spark/
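xsync here is a custom cluster-sync script used throughout this course. If it is not available, plain rsync achieves the same distribution; a minimal sketch, assuming the same /opt/module layout and atguigu user on every node:
[atguigu@hadoop102 module]$ rsync -av /opt/module/spark/ atguigu@hadoop103:/opt/module/spark/
[atguigu@hadoop102 module]$ rsync -av /opt/module/spark/ atguigu@hadoop104:/opt/module/spark/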
(6) Start the cluster
[atguigu@hadoop102 spark]$ sbin/start-all.sh
[atguigu@hadoop102 spark]$ util.sh
================atguigu@hadoop102================
3330 Jps
3238 Worker
3163 Master
================atguigu@hadoop103================
2966 Jps
2908 Worker
================atguigu@hadoop104================
2978 Worker
3036 Jps
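util.sh above is likewise a custom helper that runs jps on every node; a minimal sketch of such a script, assuming passwordless ssh between the nodes and jps on each node's non-interactive PATH:
#!/bin/bash
# Print the Java processes on each cluster node.
for host in hadoop102 hadoop103 hadoop104
do
    echo "================atguigu@$host================"
    ssh $host jps
done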
View the master web UI at hadoop102:8080
Note: If you encounter a "JAVA_HOME not set" exception, add the following configuration (the JAVA environment variable) to the spark-config.sh file in the sbin directory:
export JAVA_HOME=XXXX
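For example, the line can be appended and redistributed like this (XXXX stands for your actual JDK path):
[atguigu@hadoop102 spark]$ echo 'export JAVA_HOME=XXXX' >> sbin/spark-config.sh
[atguigu@hadoop102 spark]$ xsync sbin/spark-config.sh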
(7) Run the official SparkPi example
[atguigu@hadoop102 spark]$ bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
--master spark://hadoop102:7077 \
--executor-memory 1G \
--total-executor-cores 2 \
./examples/jars/spark-examples_2.11-2.1.1.jar \
100
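On success, the driver log contains the computed result; the example prints a line beginning with "Pi is roughly", which can be picked out of the log noise by piping the same command through grep:
[atguigu@hadoop102 spark]$ bin/spark-submit --class org.apache.spark.examples.SparkPi --master spark://hadoop102:7077 --executor-memory 1G --total-executor-cores 2 ./examples/jars/spark-examples_2.11-2.1.1.jar 100 2>&1 | grep "Pi is roughly"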
(8) Start the spark shell
/opt/module/spark/bin/spark-shell \
--master spark://hadoop102:7077 \
--executor-memory 1g \
--total-executor-cores 2
Parameter explanation: --master spark://hadoop102:7077 specifies the master of the cluster to connect to.
Execute the WordCount program:
scala> sc.textFile("input").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).collect
res0: Array[(String, Int)] = Array((hadoop,6), (oozie,3), (spark,3),
(hive,3), (atguigu,3), (hbase,6))
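Note that sc.textFile("input") reads a relative local path here, so the input directory must exist under the spark directory on every worker node. A minimal sketch for preparing it (the file name and contents are hypothetical; any words will do):
[atguigu@hadoop102 spark]$ mkdir input
[atguigu@hadoop102 spark]$ echo "hadoop spark hive" > input/1.txt
[atguigu@hadoop102 spark]$ xsync input/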
Three, JobHistoryServer configuration
(1) Rename spark-defaults.conf.template
[atguigu@hadoop102 conf]$ mv spark-defaults.conf.template spark-defaults.conf
(2) Modify the spark-defaults.conf file and enable event logging:
[atguigu@hadoop102 conf]$ vi spark-defaults.conf
spark.eventLog.enabled true
spark.eventLog.dir hdfs://hadoop102:9000/directory
Note: the directory on HDFS must exist in advance.
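For example, with HDFS running, the directory can be created as follows:
[atguigu@hadoop102 spark]$ hadoop fs -mkdir /directory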
(3) Modify the spark-env.sh file and add the following configuration:
[atguigu@hadoop102 conf]$ vi spark-env.sh
export SPARK_HISTORY_OPTS="-Dspark.history.ui.port=18080
-Dspark.history.retainedApplications=30
-Dspark.history.fs.logDirectory=hdfs://hadoop102:9000/directory"
Parameter Description:
spark.eventLog.dir: all the information generated while an Application runs is recorded under the path specified by this property.
spark.history.ui.port=18080: the web UI is accessed on port 18080.
spark.history.fs.logDirectory=hdfs://hadoop102:9000/directory: once this property is configured, there is no need to explicitly specify the path again when running start-history-server.sh; the Spark History Server page only shows information under the specified path.
spark.history.retainedApplications=30: the number of Application history records to keep; when this value is exceeded, information about the oldest applications is deleted. This is the number of applications held in memory, not the number shown on the page.
(4) Distribute the configuration files
[atguigu@hadoop102 conf]$ xsync spark-defaults.conf
[atguigu@hadoop102 conf]$ xsync spark-env.sh
(5) Start history service
[atguigu@hadoop102 spark]$ sbin/start-history-server.sh
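If it started correctly, jps on hadoop102 should now also show a HistoryServer process:
[atguigu@hadoop102 spark]$ jps | grep HistoryServer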
(6) Run the job again
[atguigu@hadoop102 spark]$ bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
--master spark://hadoop102:7077 \
--executor-memory 1G \
--total-executor-cores 2 \
./examples/jars/spark-examples_2.11-2.1.1.jar \
100
(7) View the history server at hadoop102:18080
Four, HA configuration
(1) First, ensure that Zookeeper is installed and running normally
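For example, the state of each Zookeeper node can be checked with zkServer.sh (assuming it is on the PATH):
[atguigu@hadoop102 ~]$ zkServer.sh status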
(2) Modify the spark-env.sh file and add the following configuration:
[atguigu@hadoop102 conf]$ vi spark-env.sh
Comment out the following lines:
#SPARK_MASTER_HOST=hadoop102
#SPARK_MASTER_PORT=7077
Add the following:
export SPARK_DAEMON_JAVA_OPTS="
-Dspark.deploy.recoveryMode=ZOOKEEPER
-Dspark.deploy.zookeeper.url=hadoop102,hadoop103,hadoop104
-Dspark.deploy.zookeeper.dir=/spark"
(3) Distribution of configuration files
[atguigu@hadoop102 conf]$ xsync spark-env.sh
(4) Start all nodes on hadoop102
[atguigu@hadoop102 spark]$ sbin/start-all.sh
(5) Start the master node separately on hadoop103
[atguigu@hadoop103 spark]$ sbin/start-master.sh
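At this point the master on hadoop102 is ALIVE and the one on hadoop103 is STANDBY, which can be seen on their respective 8080 web UIs. The recovery data lives under the /spark znode configured above; for example, it can be inspected with the Zookeeper CLI:
[atguigu@hadoop102 spark]$ zkCli.sh -server hadoop102:2181
ls /spark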
(6) Spark HA cluster access
/opt/module/spark/bin/spark-shell \
--master spark://hadoop102:7077,hadoop103:7077 \
--executor-memory 2g \
--total-executor-cores 2
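To verify failover, stop the active master on hadoop102 and watch hadoop103 take over; the shell above stays connected because both masters are listed in --master:
[atguigu@hadoop102 spark]$ sbin/stop-master.sh
After a short recovery period, the hadoop103:8080 web UI should show its master as ALIVE.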