Configuring a YARN Cluster

Basic YARN configuration:

A single-ResourceManager setup uses fewer resources and is quick to bring up.

core-site.xml:

<configuration>
    <!-- Access URI of the HDFS NameNode. -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop01:9000</value>
    </property>
    <!-- Hadoop temporary directory; metadata is stored here. The directory must be created beforehand. -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/root/hpdata</value>
    </property>
</configuration>
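As a quick sanity check before deploying, a `*-site.xml` file can be parsed with Python's standard library to confirm it is well-formed and that the expected properties are present. This is just an illustrative sketch; the XML string mirrors the config above, and `load_props` is a hypothetical helper, not part of Hadoop:

```python
import xml.etree.ElementTree as ET

CORE_SITE = """
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop01:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/root/hpdata</value>
    </property>
</configuration>
"""

def load_props(xml_text):
    """Parse a Hadoop *-site.xml string into a {name: value} dict."""
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): p.findtext("value")
            for p in root.iter("property")}

props = load_props(CORE_SITE)
print(props["fs.defaultFS"])   # hdfs://hadoop01:9000
```

The same helper works for any of the config files in this article, since they all share the `<configuration>/<property>/<name>/<value>` layout.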

hdfs-site.xml:

<configuration>
    <!-- HTTP address (hostname and port) of the SecondaryNameNode. -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop04:50090</value>
    </property>
    <!-- HTTPS address of the SecondaryNameNode. -->
    <property>
        <name>dfs.namenode.secondary.https-address</name>
        <value>hadoop04:50091</value>
    </property>
</configuration>

yarn-site.xml:
<configuration>
    <!-- Hostname of the ResourceManager, the cluster-wide manager. -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop01</value>
    </property>
    <!-- Auxiliary service NodeManagers run so MapReduce jobs can shuffle data. -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

mapred-site.xml:
<configuration>
    <!-- Run MapReduce on YARN. -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

Start ZooKeeper with zkServer.sh start (run it on all three machines).

Then start the cluster with start-all.sh.

If an individual daemon fails to come up, start it by hand: hadoop-daemon.sh start datanode for a DataNode, or yarn-daemon.sh start nodemanager for a NodeManager.


Highly available (HA) YARN cluster:

core-site.xml:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://bx</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>CentOS8,CentOS9,CentOS10</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop</value>
    </property>
</configuration>
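Note that ha.zookeeper.quorum above lists bare hostnames; ZooKeeper's standard client port is 2181, so the fully qualified form would be CentOS8:2181,CentOS9:2181,CentOS10:2181. A small hypothetical helper (not part of Hadoop) can normalize such a list, appending the default port wherever one is missing:

```python
DEFAULT_ZK_PORT = 2181  # ZooKeeper's standard client port

def normalize_quorum(quorum, default_port=DEFAULT_ZK_PORT):
    """Append the default client port to any host missing an explicit one."""
    hosts = []
    for entry in quorum.split(","):
        entry = entry.strip()
        if ":" not in entry:
            entry = f"{entry}:{default_port}"
        hosts.append(entry)
    return ",".join(hosts)

print(normalize_quorum("CentOS8,CentOS9,CentOS10"))
# CentOS8:2181,CentOS9:2181,CentOS10:2181
```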

hdfs-site.xml:

<configuration>
    <property>
        <name>dfs.nameservices</name>
        <value>bx</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.bx</name>
        <value>nn1,nn2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.bx.nn1</name>
        <value>CentOS8:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.bx.nn2</name>
        <value>CentOS9:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.bx.nn1</name>
        <value>CentOS8:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.bx.nn2</name>
        <value>CentOS9:50070</value>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://CentOS8:8485;CentOS10:8485;CentOS9:8485/abc</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.bx</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_rsa</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/opt/journalnode/</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
</configuration>
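A common mistake with the HA hdfs-site.xml is listing a NameNode id in dfs.ha.namenodes.&lt;nameservice&gt; without the matching per-id address entries. The sketch below (check_ha_namenodes is a hypothetical helper, using the property names from this article) verifies that every listed id has both an rpc-address and an http-address:

```python
def check_ha_namenodes(props, nameservice):
    """Return the list of missing per-NameNode address keys, if any."""
    ids = props[f"dfs.ha.namenodes.{nameservice}"].split(",")
    missing = []
    for nn in ids:
        for kind in ("rpc-address", "http-address"):
            key = f"dfs.namenode.{kind}.{nameservice}.{nn}"
            if key not in props:
                missing.append(key)
    return missing

# Values taken from the hdfs-site.xml above.
props = {
    "dfs.ha.namenodes.bx": "nn1,nn2",
    "dfs.namenode.rpc-address.bx.nn1": "CentOS8:8020",
    "dfs.namenode.rpc-address.bx.nn2": "CentOS9:8020",
    "dfs.namenode.http-address.bx.nn1": "CentOS8:50070",
    "dfs.namenode.http-address.bx.nn2": "CentOS9:50070",
}
print(check_ha_namenodes(props, "bx"))  # [] -> nothing missing
```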

yarn-site.xml:

<configuration>
    <!-- Site specific YARN configuration properties -->
    <!-- Auxiliary service NodeManagers run so MapReduce jobs can shuffle data. -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <!-- Enable ResourceManager HA. -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>cluster1</value>
    </property>
    <!-- Logical ids of the two ResourceManagers and their hosts. -->
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>CentOS8</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>CentOS9</value>
    </property>
    <!-- ZooKeeper ensemble used for RM leader election and state. -->
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>CentOS8,CentOS9,CentOS10</value>
    </property>
</configuration>
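The RM HA properties above fit together: each id in yarn.resourcemanager.ha.rm-ids must have a corresponding yarn.resourcemanager.hostname.&lt;id&gt; entry. A quick illustrative sketch (rm_hosts is a hypothetical helper) derives the id-to-host map from those values:

```python
def rm_hosts(props):
    """Map each ResourceManager id to its configured hostname."""
    ids = props["yarn.resourcemanager.ha.rm-ids"].split(",")
    return {rm: props[f"yarn.resourcemanager.hostname.{rm}"] for rm in ids}

# Values taken from the yarn-site.xml above.
props = {
    "yarn.resourcemanager.ha.rm-ids": "rm1,rm2",
    "yarn.resourcemanager.hostname.rm1": "CentOS8",
    "yarn.resourcemanager.hostname.rm2": "CentOS9",
}
print(rm_hosts(props))  # {'rm1': 'CentOS8', 'rm2': 'CentOS9'}
```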

mapred-site.xml:

<configuration>
    <!-- Run MapReduce on YARN. -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

First start ZooKeeper on all nodes:
zkServer.sh start
Then, on a NameNode host, start the cluster:
start-all.sh
(Note: on Hadoop 2.x, start-yarn.sh only starts the ResourceManager on the local machine, so the standby ResourceManager may need to be started manually with yarn-daemon.sh start resourcemanager.)

Reposted from blog.csdn.net/zhangfengbx/article/details/78575636