Spark Architecture Overview (Chapter 1)

Background

Spark is a lightning-fast unified analytics engine (computing framework) for processing large-scale data sets. For batch processing, Spark's performance is roughly 10 to 100 times that of Hadoop MapReduce, because Spark uses a more advanced DAG-based task scheduler (directed acyclic graph), which splits a job into several stages and dispatches those stages in batches to the cluster's compute nodes.

A MapReduce computation has two steps, a map phase and a reduce phase. If one pass cannot produce the final result, another MapReduce job has to run, and data is repeatedly read from and written to disk, which lowers efficiency. Spark, by contrast, computes in memory: each job is split into several stages, data is read from disk once, the intermediate stages run entirely in memory, and only the final result is written back to disk.
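
As a hedged illustration of this difference (the SparkContext `sc`, the input path, and the output path below are assumptions made up for the example, not part of the original post), a Spark pipeline in Scala reads from disk once, keeps the intermediate transformations in memory, and only writes the final result back out:

```scala
// Minimal sketch, assuming a running SparkContext `sc` and hypothetical HDFS paths.
val lines  = sc.textFile("hdfs:///demo/input")           // one read from disk
val parsed = lines.map(_.trim).filter(_.nonEmpty)         // stays in memory
val counts = parsed.map(w => (w, 1)).reduceByKey(_ + _)   // the shuffle marks a stage boundary
counts.saveAsTextFile("hdfs:///demo/output")              // one write back to disk
```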

MapReduce vs. Spark

MapReduce, the first-generation big-data processing framework, was originally designed simply to meet the urgent need to compute over massive data sets. It was split out of the Nutch project (a Java search engine) in 2006, and it mainly addressed the problems people faced during the early, rudimentary stage of understanding big data.

As time went on, people began to use the MapReduce framework for more complex, higher-order algorithms, which usually cannot be completed in a single MapReduce pass. Because the MapReduce model always writes its results to disk, every iteration has to load the data from disk back into memory, which adds more and more latency to subsequent iterations.

Spark has grown so quickly because, at the computation layer, it clearly outperforms Hadoop MapReduce's disk-based iteration: Spark can compute on data in memory, and intermediate results can also be cached in memory, which saves time in subsequent iterations and greatly improves computation efficiency over massive data sets.
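
A minimal sketch of this point, assuming a SparkContext `sc` and a hypothetical dataset path (neither is from the original post): the data is parsed once, cached in memory, and then reused by every iteration instead of being reloaded from disk each time:

```scala
// Hypothetical iterative job: cache() keeps the parsed data in memory,
// so each pass of the loop reads from memory rather than from HDFS.
val points = sc.textFile("hdfs:///demo/points")
  .map(_.split(",").map(_.toDouble))
  .cache()

var result = 0.0
for (i <- 1 to 10) {                 // every iteration reuses the cached RDD
  result += points.map(_.sum).mean() / i
}
println(result)
```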

Beyond that, Spark's design philosophy also proposed a "one stack to rule them all" strategy, providing service branches built on top of Spark's batch-processing core, such as interactive queries on Spark, near-real-time stream processing, machine learning, and GraphX for graph processing.
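
As a hedged sketch of the "one stack" idea (the application name, sample data, and column names below are made up for the example), the same SparkSession can drive both RDD-style batch code and Spark SQL:

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical example: one SparkSession serves both the RDD API and Spark SQL.
val spark = SparkSession.builder().appName("one-stack-demo").getOrCreate()
val sc    = spark.sparkContext
import spark.implicits._

// Batch-style RDD computation
val counts = sc.parallelize(Seq("spark", "sql", "spark"))
  .map((_, 1))
  .reduceByKey(_ + _)

// SQL-style query over the same engine
counts.toDF("word", "cnt").createOrReplaceTempView("word_counts")
spark.sql("SELECT word, cnt FROM word_counts WHERE cnt > 1").show()
```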


Computation Flow (Key Points)

First, let's review the drawbacks of MapReduce:

1) Although MapReduce is built on a vector-programming style of thinking, its computation model is too simple: a job is divided only into a Map stage and a Reduce stage, with no consideration for iterative computation scenarios.

2) The intermediate results of Map tasks are written to local disk, causing excessive I/O and poor data read/write efficiency.

3) MapReduce submits the job first and then requests resources during the computation, and its execution model is heavyweight: every unit of parallelism is implemented as a separate JVM process.

Spark's computation flow

Compared with MapReduce, Spark computation has the following advantages:

1) Intelligent DAG task splitting: a complex computation is split into several stages, which suits iterative computation scenarios (see the sketch after this list).
2) Spark provides caching and fault-tolerance strategies: intermediate results can be stored in memory or on disk, which speeds up each stage and improves overall efficiency.
3) Spark requests its compute resources at the very start of the job, and task parallelism is achieved by starting threads inside Executor processes, which is much lighter and faster than MapReduce.
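
A minimal sketch of point 1 above, assuming a SparkContext `sc` (for example inside spark-shell) and a hypothetical input path: `toDebugString` prints the RDD lineage, and the shuffle introduced by `reduceByKey` is where the DAG scheduler cuts the job into two stages:

```scala
// Hypothetical pipeline: reduceByKey introduces a shuffle,
// so the DAG scheduler splits the job into two stages.
val counts = sc.textFile("hdfs:///demo/words")
  .flatMap(_.split(" "))
  .map((_, 1))
  .reduceByKey(_ + _)

println(counts.toDebugString)   // shows the lineage and the shuffle (stage) boundary
```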

Tip: Spark currently provides Cluster Manager implementations for YARN, Standalone, Mesos, Kubernetes, and others. The ones most commonly used in enterprises are the YARN and Standalone modes.

Installing Spark

Spark On Yarn

Hadoop Environment

Set the CentOS process and open-file limits (optional)

[root@CentOS ~]# vi /etc/security/limits.conf
* soft nofile 204800
* hard nofile 204800
* soft nproc 204800
* hard nproc 204800

This tunes Linux performance by raising these maximum values; restart CentOS for the change to take effect.

Configure the hostname (takes effect after reboot)

[root@CentOS ~]# vi /etc/hostname
CentOS
[root@CentOS ~]# reboot

Set the IP mapping

[root@CentOS ~]# vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.52.134 CentOS

Firewall service

# Stop the firewall temporarily
[root@CentOS ~]# systemctl stop firewalld
[root@CentOS ~]# firewall-cmd --state
not running
# Disable start on boot
[root@CentOS ~]# systemctl disable firewalld

Install JDK 1.8+

[root@CentOS ~]# rpm -ivh jdk-8u171-linux-x64.rpm
[root@CentOS ~]# ls -l /usr/java/
total 4
lrwxrwxrwx. 1 root root 16 Mar 26 00:56 default -> /usr/java/latest
drwxr-xr-x. 9 root root 4096 Mar 26 00:56 jdk1.8.0_171-amd64
lrwxrwxrwx. 1 root root 28 Mar 26 00:56 latest -> /usr/java/jdk1.8.0_171-amd64
[root@CentOS ~]# vi .bashrc
JAVA_HOME=/usr/java/latest
PATH=$PATH:$JAVA_HOME/bin
CLASSPATH=.
export JAVA_HOME
export PATH
export CLASSPATH
[root@CentOS ~]# source ~/.bashrc

Configure passwordless SSH

[root@CentOS ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
4b:29:93:1c:7f:06:93:67:fc:c5:ed:27:9b:83:26:c0 root@CentOS
The key's randomart image is:
+--[ RSA 2048]----+
| |
| o . . |
| . + + o .|
| . = * . . . |
| = E o . . o|
| + = . +.|
| . . o + |
| o . |
| |
+-----------------+
[root@CentOS ~]# ssh-copy-id CentOS
The authenticity of host 'centos (192.168.40.128)' can't be established.
RSA key fingerprint is 3f:86:41:46:f2:05:33:31:5d:b6:11:45:9c:64:12:8e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'centos,192.168.40.128' (RSA) to the list of known hosts.
root@centos's password:
Now try logging into the machine, with "ssh 'CentOS'", and check in:
 .ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.
[root@CentOS ~]# ssh root@CentOS
Last login: Tue Mar 26 01:03:52 2019 from 192.168.40.1
[root@CentOS ~]# exit
logout
Connection to CentOS closed.

Configure HDFS | YARN

Extract hadoop-2.9.2.tar.gz into the system's /usr directory, then edit the [core|hdfs|yarn|mapred]-site.xml configuration files.

[root@CentOS ~]# vi /usr/hadoop-2.9.2/etc/hadoop/core-site.xml
<!-- NameNode access entry point -->
<property>
 <name>fs.defaultFS</name>
 <value>hdfs://CentOS:9000</value>
</property>
<!-- HDFS working base directory -->
<property>
 <name>hadoop.tmp.dir</name>
 <value>/usr/hadoop-2.9.2/hadoop-${user.name}</value>
</property>
[root@CentOS ~]# vi /usr/hadoop-2.9.2/etc/hadoop/hdfs-site.xml
<!-- Block replication factor -->
<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
<!-- Host of the Secondary NameNode -->
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>CentOS:50090</value>
</property>
<!-- Maximum number of concurrent file operations on a DataNode -->
<property>
    <name>dfs.datanode.max.xcievers</name>
    <value>4096</value>
</property>
<!-- DataNode handler (parallel processing) count -->
<property>
    <name>dfs.datanode.handler.count</name>
    <value>6</value>
</property>
[root@CentOS ~]# vi /usr/hadoop-2.9.2/etc/hadoop/yarn-site.xml
<!-- Core shuffle implementation of the MapReduce framework -->
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<!-- Host of the ResourceManager -->
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>CentOS</value>
</property>
<!-- Disable the physical-memory check -->
<property>
    <name>yarn.nodemanager.pmem-check-enabled</name>
    <value>false</value>
</property>
<!-- Disable the virtual-memory check -->
<property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>
[root@CentOS ~]# vi /usr/hadoop-2.9.2/etc/hadoop/mapred-site.xml
<!-- Resource manager implementation used by the MapReduce framework -->
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>

Configure Hadoop environment variables

[root@CentOS ~]# vi .bashrc
JAVA_HOME=/usr/java/latest
HADOOP_HOME=/usr/hadoop-2.9.2
PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
CLASSPATH=.
export JAVA_HOME
export CLASSPATH
export PATH
export HADOOP_HOME
[root@CentOS ~]# source .bashrc

Start the Hadoop services

[root@CentOS ~]# hdfs namenode -format # create the fsimage file needed for initialization
[root@CentOS ~]# start-dfs.sh
[root@CentOS ~]# start-yarn.sh
[root@CentOS ~]# jps
122690 NodeManager
122374 SecondaryNameNode
122201 DataNode
122539 ResourceManager
122058 NameNode
123036 Jps

Open http://CentOS:8088 (YARN ResourceManager UI) and http://CentOS:50070/ (HDFS NameNode UI).

Spark Environment (on YARN)

Download spark-2.4.5-bin-without-hadoop.tgz, extract it into the /usr directory, rename the Spark directory to spark-2.4.5, and then edit the spark-env.sh and spark-defaults.conf files.

Extract and install Spark

[root@CentOS ~]# tar -zxf spark-2.4.5-bin-without-hadoop.tgz -C /usr/
[root@CentOS ~]# mv /usr/spark-2.4.5-bin-without-hadoop/ /usr/spark-2.4.5
[root@CentOS ~]# tree -L 1 /usr/spark-2.4.5/
/usr/spark-2.4.5/
├── bin          # Spark user-facing scripts (spark-submit, spark-shell, ...)
├── conf         # Spark configuration directory
├── data
├── examples     # official examples shipped with Spark
├── jars
├── kubernetes
├── LICENSE
├── licenses
├── NOTICE
├── python
├── R
├── README.md
├── RELEASE
├── sbin         # scripts for starting/stopping Spark daemons
└── yarn

Configure the Spark service

[root@CentOS ~]# cd /usr/spark-2.4.5/
[root@CentOS spark-2.4.5]# mv conf/spark-env.sh.template conf/spark-env.sh
[root@CentOS spark-2.4.5]# vi conf/spark-env.sh
# Options read in YARN client/cluster mode
# - SPARK_CONF_DIR, Alternate conf dir. (Default: ${SPARK_HOME}/conf)
# - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
# - YARN_CONF_DIR, to point Spark towards YARN configuration files when you use YARN
# - SPARK_EXECUTOR_CORES, Number of cores for the executors (Default: 1).
# - SPARK_EXECUTOR_MEMORY, Memory per Executor (e.g. 1000M, 2G) (Default: 1G)
# - SPARK_DRIVER_MEMORY, Memory for Driver (e.g. 1000M, 2G) (Default: 1G)
HADOOP_CONF_DIR=/usr/hadoop-2.9.2/etc/hadoop
YARN_CONF_DIR=/usr/hadoop-2.9.2/etc/hadoop
SPARK_EXECUTOR_CORES=2
SPARK_EXECUTOR_MEMORY=1G
SPARK_DRIVER_MEMORY=1G
LD_LIBRARY_PATH=/usr/hadoop-2.9.2/lib/native
export HADOOP_CONF_DIR
export YARN_CONF_DIR
export SPARK_EXECUTOR_CORES
export SPARK_DRIVER_MEMORY
export SPARK_EXECUTOR_MEMORY
export LD_LIBRARY_PATH
export SPARK_DIST_CLASSPATH=$(hadoop classpath):$SPARK_DIST_CLASSPATH
export SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=hdfs:///spark-logs"
[root@CentOS spark-2.4.5]# mv conf/spark-defaults.conf.template conf/spark-defaults.conf
[root@CentOS spark-2.4.5]# vi conf/spark-defaults.conf
spark.eventLog.enabled=true
spark.eventLog.dir=hdfs:///spark-logs

You first need to create the spark-logs directory on HDFS; the Spark history server uses it to store historical job data.

[root@CentOS ~]# hdfs dfs -mkdir /spark-logs

Start the Spark history server

[root@CentOS spark-2.4.5]# ./sbin/start-history-server.sh
[root@CentOS spark-2.4.5]# jps
124528 HistoryServer
122690 NodeManager
122374 SecondaryNameNode
122201 DataNode
122539 ResourceManager
122058 NameNode
124574 Jps

Open http://<host-ip>:18080 to access the Spark History Server.

Test the environment

./bin/spark-submit  --master yarn  --deploy-mode client --class org.apache.spark.examples.SparkPi  --num-executors 2  --executor-cores 3  /usr/spark-2.4.5/examples/jars/spark-examples_2.11-2.4.5.jar

Expected output:

19/04/21 03:30:39 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 6609 ms on CentOS (executor 1) (1/2)
19/04/21 03:30:39 INFO scheduler.TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 6403 ms on CentOS (executor 1) (2/2)
19/04/21 03:30:39 INFO cluster.YarnScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool
19/04/21 03:30:39 INFO scheduler.DAGScheduler: ResultStage 0 (reduce at SparkPi.scala:38) finished in 29.116 s
19/04/21 03:30:40 INFO scheduler.DAGScheduler: Job 0 finished: reduce at SparkPi.scala:38, took 30.317103 s
`Pi is roughly 3.141915709578548`
19/04/21 03:30:40 INFO server.AbstractConnector: Stopped Spark@41035930{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
19/04/21 03:30:40 INFO ui.SparkUI: Stopped Spark web UI at http://CentOS:4040
19/04/21 03:30:40 INFO cluster.YarnClientSchedulerBackend: Interrupting monitor thread
19/04/21 03:30:40 INFO cluster.YarnClientSchedulerBackend: Shutting down all executors

Parameter          Description
--master           The resource manager to connect to: yarn
--deploy-mode      Deployment mode; either client or cluster, which decides whether the Driver runs remotely
--class            The main class to run
--num-executors    Number of Executor processes required for the computation
--executor-cores   Maximum number of CPU cores each Executor may use

Spark Shell

./bin/spark-shell --master yarn --deploy-mode client --executor-cores 4 --num-executors 3

Spark Standalone

This mode uses Spark's own built-in cluster manager instead of YARN.

The configuration is largely the same as above; only spark-env.sh needs to change.

vim conf/spark-env.sh
# Options for the daemons used in the standalone deploy mode
# - SPARK_MASTER_HOST, to bind the master to a different IP address or hostname
# - SPARK_MASTER_PORT / SPARK_MASTER_WEBUI_PORT, to use non-default ports for the master
# - SPARK_MASTER_OPTS, to set config properties only for the master (e.g. "-Dx=y")
# - SPARK_WORKER_CORES, to set the number of cores to use on this machine
# - SPARK_WORKER_MEMORY, to set how much total memory workers have to give executors(e.g. 1000m, 2g)
# - SPARK_WORKER_PORT / SPARK_WORKER_WEBUI_PORT, to use non-default ports for the worker
# - SPARK_WORKER_DIR, to set the working directory of worker processes
# - SPARK_WORKER_OPTS, to set config properties only for the worker (e.g. "-Dx=y")
# - SPARK_DAEMON_MEMORY, to allocate to the master, worker and history server themselves (default: 1g).
# - SPARK_HISTORY_OPTS, to set config properties only for the history server (e.g. "-Dx=y")
# - SPARK_SHUFFLE_OPTS, to set config properties only for the external shuffle service(e.g. "-Dx=y")
# - SPARK_DAEMON_JAVA_OPTS, to set config properties for all daemons (e.g. "-Dx=y")
# - SPARK_DAEMON_CLASSPATH, to set the classpath for all daemons
# - SPARK_PUBLIC_DNS, to set the public dns name of the master or workers
SPARK_MASTER_HOST=centos
SPARK_MASTER_PORT=7077
SPARK_WORKER_CORES=4
SPARK_WORKER_INSTANCES=2
SPARK_WORKER_MEMORY=2g
export SPARK_MASTER_HOST
export SPARK_MASTER_PORT
export SPARK_WORKER_CORES
export SPARK_WORKER_MEMORY
export SPARK_WORKER_INSTANCES
export LD_LIBRARY_PATH=/usr/hadoop-2.9.2/lib/native
export SPARK_DIST_CLASSPATH=$(hadoop classpath)
export SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=hdfs:///spark-logs"

Delete the previous log directory and recreate it:

hdfs dfs -rm -r -f /spark-logs
hdfs dfs -mkdir /spark-logs

Start the history server

[root@CentOS spark-2.4.5]# ./sbin/start-history-server.sh
[root@CentOS spark-2.4.5]# jps
124528 HistoryServer
122690 NodeManager
122374 SecondaryNameNode
122201 DataNode
122539 ResourceManager
122058 NameNode
124574 Jps

Open http://<host-ip>:18080 to access the Spark History Server.

Start Spark's own cluster services (master and workers)

[root@CentOS spark-2.4.5]# ./sbin/start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /usr/spark-2.4.5/logs/spark-root-org.apache.spark.deploy.master.Master-1-CentOS.out
localhost: starting org.apache.spark.deploy.worker.Worker, logging to /usr/spark-2.4.5/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-CentOS.out
localhost: starting org.apache.spark.deploy.worker.Worker, logging to /usr/spark-2.4.5/logs/spark-root-org.apache.spark.deploy.worker.Worker-2-CentOS.out
[root@CentOS spark-2.4.5]# jps
7908 Worker
7525 HistoryServer
8165 Jps
122374 SecondaryNameNode
7751 Master
122201 DataNode
122058 NameNode
7854 Worker

Note: ./sbin/start-all.sh requires the environment variables to be configured in .bashrc (vim .bashrc); otherwise you will get a "hadoop: command not found" error as well as a "JAVA_HOME not set" error.

Configuring only the system-wide variables does not fix the problem above; you must configure the .bashrc file, and remember to run source .bashrc afterwards.

You can then open http://CentOS:8080 (the Spark master web UI).

Test the standalone environment (the command goes on a single line)

./bin/spark-submit  --master spark://centos:7077  --deploy-mode client --class org.apache.spark.examples.SparkPi  --total-executor-cores 6  /usr/spark-2.4.5/examples/jars/spark-examples_2.11-2.4.5.jar

The test succeeded if the value of Pi appears in the output.

`Pi is roughly 3.141915709578548`

Parameter description

Parameter                Description
--master                 The resource manager to connect to: spark://CentOS:7077
--deploy-mode            Deployment mode; either client or cluster, which decides whether the Driver runs remotely
--class                  The main class to run
--total-executor-cores   Total number of cores (threads) used for the computation

Spark Shell

[root@CentOS spark-2.4.5]# ./bin/spark-shell --master spark://centos:7077 --total-executor-cores 6
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use
setLogLevel(newLevel).
Spark context Web UI available at http://CentOS:4040
Spark context available as 'sc' (master = spark://CentOS:7077, app id = app20200207140419-0003).
Spark session available as 'spark'.
Welcome to
 ____ __
 / __/__ ___ _____/ /__
 _\ \/ _ \/ _ `/ __/ '_/
 /___/ .__/\_,_/_/ /_/\_\ version 2.4.5
 /_/

Using Scala version 2.11.12 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_231)
Type in expressions to have them evaluated.
Type :help for more information.
scala> sc.textFile("hdfs:///demo/words").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).sortBy(_._2,true).saveAsTextFile("hdfs:///demo/results")

Complete a WordCount

Create a file, write any number of words into it, and upload the file to HDFS.

[root@centos hadoop-2.9.2]# vi t_word
[root@centos hadoop-2.9.2]# hdfs dfs -mkdir -p /demp/words
[root@centos hadoop-2.9.2]# hdfs dfs -put t_word /demp/words
scala> sc.textFile("hdfs:///demp/words").flatMap(_.split(" ")).map(t=>(t,1)).reduceByKey(_+_).sortBy(_._2,false).collect()

Result

res8: Array[(String, Int)] = Array((day,2), (good,2), (up,1), (a,1), (on,1), (demo,1), (study,1), (this,1), (is,1), (come,1), (baby,1))
.saveAsTextFile("hdfs:///demo/results")     # append this instead of collect() to write the output to the HDFS file system
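
For reference, the same WordCount can also be packaged as a standalone application and submitted with spark-submit instead of being typed into spark-shell. This is only a sketch; the object name, output path, and build setup are assumptions, not part of the original post:

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical standalone version of the WordCount above.
// The master URL is supplied on the command line via spark-submit --master ...
object WordCount {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("WordCount").getOrCreate()
    val sc = spark.sparkContext

    sc.textFile("hdfs:///demp/words")           // same input directory as above
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
      .sortBy(_._2, ascending = false)
      .saveAsTextFile("hdfs:///demp/results")   // assumed output directory

    spark.stop()
  }
}
```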


Reprinted from blog.csdn.net/origin_cx/article/details/104365791