Building a Spark-on-YARN Environment

Principle

(Figure: Spark-on-YARN architecture; original image not available)
Note:

  • In real-world development, big data jobs are managed by a unified resource-management and task-scheduling tool, and YARN is the most widely used one.
  • It is mature and stable, and supports multiple scheduling strategies: FIFO, Capacity, and Fair (see the sketch after this list for how a scheduler is selected).
  • YARN can schedule and manage MR, Hive, Spark, and Flink workloads.
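For reference, the scheduling strategy is chosen in yarn-site.xml via the yarn.resourcemanager.scheduler.class property. A minimal sketch, using Hadoop's bundled scheduler classes (CapacityScheduler is the default in Apache Hadoop releases):

<property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <!-- Or ...scheduler.capacity.CapacityScheduler / ...scheduler.fifo.FifoScheduler -->
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>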

Installation

  1. Shut down the previous Spark Standalone cluster
    /export/servers/spark/sbin/stop-all.sh

  2. Configure the YARN history server and disable the memory resource checks
    vim /export/servers/hadoop/etc/hadoop/yarn-site.xml

<configuration>
    <!-- Location of the YARN master node (ResourceManager) -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>node01</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <!-- YARN cluster memory allocation settings -->
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>20480</value>
    </property>
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>2048</value>
    </property>
    <property>
        <name>yarn.nodemanager.vmem-pmem-ratio</name>
        <value>2.1</value>
    </property>
    <!-- Enable log aggregation -->
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <!-- Retention time (seconds) for aggregated logs on HDFS -->
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>604800</value>
    </property>
    <!-- YARN history/log server URL -->
    <property>
        <name>yarn.log.server.url</name>
        <value>http://node01:19888/jobhistory/logs</value>
    </property>
    <!-- Disable YARN memory checks -->
    <property>
        <name>yarn.nodemanager.pmem-check-enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
    </property>
</configuration>
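A quick sanity check on these numbers: each NodeManager offers 20480 MB (20 GB) to YARN and the smallest container is 2048 MB, so a node can host at most 20480 / 2048 = 10 minimum-size containers. The 2.1 ratio would allow each container 2.1x its physical allocation in virtual memory, although both memory checks are disabled at the end of this file anyway.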

Note: if these properties were not configured before, yarn-site.xml now needs to be distributed to the other nodes and YARN restarted:

cd /export/servers/hadoop/etc/hadoop
scp -r yarn-site.xml root@node02:$PWD
scp -r yarn-site.xml root@node03:$PWD
Stop YARN:
/export/servers/hadoop/sbin/stop-yarn.sh
Start YARN:
/export/servers/hadoop/sbin/start-yarn.sh
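After the restart, you can confirm that all NodeManagers have re-registered:

yarn node -list

The ResourceManager web UI is also available, by default at http://node01:8088.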
  3. Configure the integration of Spark's history server with YARN
  • Modify spark-defaults.conf
Enter the configuration directory:
cd /export/servers/spark/conf

Rename the template configuration file:
mv spark-defaults.conf.template spark-defaults.conf

vim spark-defaults.conf
Add the following:
spark.eventLog.enabled                  true
spark.eventLog.dir                      hdfs://node01:8020/sparklog/
spark.eventLog.compress                 true
spark.yarn.historyServer.address        node01:18080
  • Modify spark-env.sh
Edit the configuration file:
vim /export/servers/spark/conf/spark-env.sh

Add the following:
## Location where Spark history event logs are stored
SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=hdfs://node01:8020/sparklog/ -Dspark.history.fs.cleaner.enabled=true"

Note: the sparklog directory must be created manually:
hadoop fs -mkdir -p /sparklog
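Once jobs run with event logging enabled, their logs accumulate under this directory and can be listed with:

hadoop fs -ls /sparklog/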
  • Modify the log level
Enter the directory:
cd /export/servers/spark/conf

Rename the log properties template file:
mv log4j.properties.template log4j.properties

Change the log level:
vim log4j.properties

Modify the content as follows (the original screenshot is not available):
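A typical change here, and an assumption in this sketch, is lowering the root logger from INFO to WARN on the first line of Spark's bundled log4j.properties.template:

# Assumed change: reduce console noise by switching the root logger from INFO to WARN
log4j.rootCategory=WARN, console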

  • Distribute (optional): if you only submit Spark jobs to YARN from node01, there is no need to distribute these files
cd /export/servers/spark/conf
scp -r spark-env.sh root@node02:$PWD
scp -r spark-env.sh root@node03:$PWD
scp -r spark-defaults.conf root@node02:$PWD
scp -r spark-defaults.conf root@node03:$PWD
scp -r log4j.properties root@node02:$PWD
scp -r log4j.properties root@node03:$PWD
  4. Configure the Spark jar dependencies (so Spark does not re-upload its jars to HDFS on every submission)
  • Create a directory for storing spark-related jar packages on HDFS
    hadoop fs -mkdir -p /spark/jars/

  • Upload all jar packages from $SPARK_HOME/jars to HDFS (a verification command follows at the end of this step)
    hadoop fs -put /export/servers/spark/jars/* /spark/jars/

  • Modify spark-defaults.conf on node01

vim /export/servers/spark/conf/spark-defaults.conf
Add the following:
spark.yarn.jars  hdfs://node01:8020/spark/jars/*

Distribute/sync (optional):
cd /export/servers/spark/conf
scp -r spark-defaults.conf root@node02:$PWD
scp -r spark-defaults.conf root@node03:$PWD
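Before moving on, you can confirm that the upload from the earlier step landed on HDFS:

hadoop fs -ls /spark/jars/ | head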
  5. Start the services
  • Start the HDFS and YARN services; on node01, execute:
    start-dfs.sh
    start-yarn.sh
    or, in one step:
    start-all.sh

  • Start the MRHistoryServer service; on node01, execute:
    mr-jobhistory-daemon.sh start historyserver

  • Start the Spark HistoryServer service; on node01, execute:
    /export/servers/spark/sbin/start-history-server.sh

  • MRHistoryServer service WEB UI page:
    http://node01:19888

  • Spark HistoryServer service WEB UI page:
    http://node01:18080/
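With everything up, a quick end-to-end check is to verify the daemons with jps and then submit the bundled SparkPi example to YARN. The spark-submit flags below are standard; the examples jar path uses a glob because the exact version suffix depends on your installation (an assumption):

jps    # should list NameNode/ResourceManager (node01) plus the two history server processes

/export/servers/spark/bin/spark-submit \
--master yarn \
--deploy-mode client \
--class org.apache.spark.examples.SparkPi \
/export/servers/spark/examples/jars/spark-examples_*.jar \
10

The application should appear in the YARN UI (http://node01:8088) while it runs, and in the Spark HistoryServer (http://node01:18080) after it finishes.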

Origin: blog.csdn.net/zh2475855601/article/details/114919934