Spark environment setup: building a Standalone (independent) cluster

Principle

In Standalone mode Spark manages its own resources without relying on YARN: a Master process (here node01) schedules applications across the cluster, and Worker processes (here node02) launch the executors that run the tasks.

Operation

  1. Cluster planning
node01:master
node02:worker/slave
  2. Configure slaves/workers
Enter the configuration directory:
cd /export/servers/spark/conf
Rename the configuration file:
mv slaves.template slaves
vim slaves
Add the following content:
node02
  3. Configure master
Enter the configuration directory:
cd /export/servers/spark/conf

Rename the configuration file:
mv spark-env.sh.template spark-env.sh
Edit the configuration file:
vim spark-env.sh
Add the following content:
## Java installation directory
JAVA_HOME=$JAVA_HOME

## Hadoop configuration file directory; needed for reading files on HDFS and for running Spark on a YARN cluster, so set it up in advance
HADOOP_CONF_DIR=/export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop
YARN_CONF_DIR=/export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop

## Host of the Spark Master and the communication port used when submitting tasks
SPARK_MASTER_HOST=node01
SPARK_MASTER_PORT=7077

SPARK_MASTER_WEBUI_PORT=8080

SPARK_WORKER_CORES=1
SPARK_WORKER_MEMORY=1g

On the application side these settings appear as the master URL spark://node01:7077 and the executor resources; see the sketch after the distribution step below.
  4. Distribution
Distribute the configured Spark installation package to the other machines in the cluster:
cd /export/servers/
scp -r spark root@node02:$PWD
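
For reference, the settings above are what a client application uses to connect to this cluster. Below is a minimal sketch (not part of the original steps) of an application-side SparkConf: the object name is hypothetical, and the memory/core values simply mirror SPARK_WORKER_MEMORY and SPARK_WORKER_CORES.

import org.apache.spark.{SparkConf, SparkContext}

object StandaloneConnectSketch {
  def main(args: Array[String]): Unit = {
    // Master URL built from SPARK_MASTER_HOST / SPARK_MASTER_PORT configured above
    val conf = new SparkConf()
      .setAppName("standalone-connect-sketch")
      .setMaster("spark://node01:7077")
      .set("spark.executor.memory", "1g") // mirrors SPARK_WORKER_MEMORY
      .set("spark.cores.max", "1")        // mirrors SPARK_WORKER_CORES
    val sc = new SparkContext(conf)
    // Tiny distributed job to confirm the connection works
    println(sc.parallelize(Seq(1, 2, 3)).count())
    sc.stop()
  }
}

Such an application would be packaged as a jar and submitted with spark-submit --master spark://node01:7077.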

Test

1. Cluster start and stop

Start the Spark cluster on the master node:
/export/servers/spark/sbin/start-all.sh

Stop the Spark cluster on the master node:
/export/servers/spark/sbin/stop-all.sh

Start and stop the Master separately on the master node:
start-master.sh
stop-master.sh

Start and stop the Worker processes separately (Worker refers to the hostnames listed in the slaves configuration file):
start-slaves.sh
stop-slaves.sh

2. Use jps to view the processes

node01:master
node02:worker

3. View the cluster web UI: http://node01:8080/


4. Start spark-shell

/export/servers/spark/bin/spark-shell --master spark://node01:7077
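Once the shell is up, a quick sanity check can be typed into it. This is an illustrative sketch, not part of the original steps, and assumes the shell connected successfully:

// Prints the master URL the shell is attached to, i.e. spark://node01:7077
sc.master
// A tiny distributed job; the tasks run in executors on the worker (node02)
sc.parallelize(1 to 1000).reduce(_ + _)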

5. Submit WordCount task

  • Create the input directory on HDFS if it does not exist
    hdfs dfs -mkdir -p /wordcount/input
  • Upload the test file to HDFS
    hdfs dfs -put /export/data/Spark/words.txt /wordcount/input/words.txt
  • Run the following code in spark-shell
// Read the input file from HDFS
val textFile = sc.textFile("hdfs://node01:8020/wordcount/input/words.txt")
// Split each line into words, map each word to (word, 1), and sum the counts per word
val counts = textFile.flatMap(_.split(" ")).map((_,1)).reduceByKey(_ + _)
// Bring the results back to the driver for inspection
counts.collect
// Write the results back to HDFS
counts.saveAsTextFile("hdfs://node01:8020/wordcount/output47")
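
As an optional follow-up in the same spark-shell session, the counts can be sorted before inspecting them; the variable name top10 below is just an illustration:

// Sort by count in descending order and inspect the 10 most frequent words
val top10 = counts.sortBy(_._2, ascending = false).take(10)
top10.foreach(println)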

6. View the results

http://node01:50070/explorer.html#/wordcount/output47

7. View the Spark application web UI

http://node01:4040/jobs/

Summary:
spark 4040: web UI port of a running Spark application (job monitoring)
spark 8080: web UI port of the Spark standalone cluster (Master)
spark 7077: Spark communication port used when submitting tasks

hadoop 50070: HDFS (NameNode) web UI port
hadoop 8020 / 9000 (older versions): HDFS communication port for file upload and download
