CentOS 7 CDH 5.13 Installation Notes, Part 4: Configuring Hadoop

This article first presents a basic HA hadoop-hdfs configuration that gets the Hadoop cluster running correctly. Later articles will cover tuning parameters, and the calculations behind them, based on the servers' physical resources.

Reference: http://blog.csdn.net/zuochanxiaoheshang/article/details/8014397

12 Hadoop Configuration (perform on every node)

Create the configuration directory conf.hacl on each machine:

mkdir /etc/hadoop/conf.hacl/

It contains the following configuration files:

1) configuration.xsl
2) core-site.xml
3) hadoop-env.sh
4) slaves
5) hdfs-site.xml
6) mapred-site.xml
7) yarn-site.xml
8) yarn-env.sh
9) capacity-scheduler.xml

Then run the following on each machine:

alternatives --install /etc/hadoop/conf hadoop-conf /etc/hadoop/conf.hacl 50
alternatives --set hadoop-conf /etc/hadoop/conf.hacl
alternatives --auto hadoop-conf
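
To confirm that the active configuration now points at conf.hacl, you can optionally display the alternative and the resolved path:

alternatives --display hadoop-conf
readlink -f /etc/hadoop/conf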

Create the data directory:

mkdir -p /data/hacl/dfs/tmp
chown -R hdfs:hdfs /data/hacl
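
The hdfs-site.xml below also uses /data/hacl/dfs/nn, /data/hacl/dfs/dn and /data/hacl/dfs/jn on the local filesystem. Creating them all up front with the right owner is a convenience sketch; the paths are taken from the configuration files in this section:

mkdir -p /data/hacl/dfs/{nn,dn,jn,tmp}
chown -R hdfs:hdfs /data/hacl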

12.1 configuration.xsl

No changes are needed in this file.

<?xml version="1.0"?>
<!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements.  See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the "License"); you may not use this file except in compliance with
   the License.  You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
-->
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
<xsl:output method="html"/>
<xsl:template match="configuration">
<html>
<body>
<table border="1">
<tr>
 <td>name</td>
 <td>value</td>
 <td>description</td>
</tr>
<xsl:for-each select="property">
<tr>
  <td><a name="{name}"><xsl:value-of select="name"/></a></td>
  <td><xsl:value-of select="value"/></td>
  <td><xsl:value-of select="description"/></td>
</tr>
</xsl:for-each>
</table>
</body>
</html>
</xsl:template>
</xsl:stylesheet>

12.2 core-site.xml

This file must be configured carefully.

<?xml version="1.0" ?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hacl</value>
    </property>

    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
        <description>Size in bytes of the read/write buffer used in SequenceFiles.</description>
    </property>

    <property>
        <name>hadoop.tmp.dir</name>
        <value>/data/hacl/dfs/tmp</value>
        <description>chown -R hdfs:hdfs hadoop_tmp_dir</description>
    </property>

    <property>
        <name>hadoop.proxyuser.hduser.hosts</name>
        <value>*</value>
    </property>

    <property>
        <name>hadoop.proxyuser.hduser.groups</name>
        <value>*</value>
    </property>

    <!-- Configuring automatic failover -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>cent7-n1.pepstack.com:2181,cent7-n2.pepstack.com:2181,cent7-n3.pepstack.com:2181</value>
        <description>This lists the host-port pairs running the ZooKeeper service.</description>
    </property>

    <!-- TODO: Securing access to ZooKeeper -->

    <!-- TCP -->
    <property>
        <name>ipc.server.tcpnodelay</name>
        <value>true</value>
    </property>

    <property>
        <name>ipc.client.tcpnodelay</name>
        <value>true</value>
        <description>Turn on or off Nagle's algorithm for the TCP socket connection on
          the client. Setting to true disables the algorithm and may decrease latency
          with a cost of more or smaller packets.
        </description>
    </property>

</configuration>
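
Once this file is distributed, a quick sanity check is to read the values back through the Hadoop client (these commands only parse the local configuration and work before any daemon is started):

hdfs getconf -confKey fs.defaultFS
hdfs getconf -confKey ha.zookeeper.quorum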

12.3 hadoop-env.sh

This file must be configured carefully.

# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.
export JAVA_HOME=/usr/local/java/jdk1.8.0_152
export HADOOP_PREFIX=/usr/lib/hadoop
export HADOOP_CONF_DIR=/etc/hadoop/conf


# The maximum amount of heap to use, in MB. Default is 1000.
#export HADOOP_HEAPSIZE=
#export HADOOP_NAMENODE_INIT_HEAPSIZE=""
HADOOP_NAMENODE_OPTS="-Xmx1000m"
HADOOP_DATANODE_OPTS=" -Xmx1000m"

HADOOP_SECONDARYNAMENODE_OPTS="-Xmx1000m"
HADOOP_BALANCER_OPTS="-Xmx1000m"
HADOOP_JOBTRACKER_OPTS="-Xmx1000m"


# Extra Java runtime options.  Empty by default.
#export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true ${HADOOP_OPTS}"

# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-XX:+UseParallelGC -Dsecurity.audit.logger=INFO -Dhdfs.audit.logger=INFO ${HADOOP_NAMENODE_OPTS}"
#HADOOP_JOBTRACKER_OPTS="-Dsecurity.audit.logger=INFO,DRFAS -Dmapred.audit.logger=INFO,MRAUDIT -Dmapred.jobsummary.logger=INFO,JSA ${HADOOP_JOBTRACKER_OPTS}"
#HADOOP_TASKTRACKER_OPTS="-Dsecurity.audit.logger=ERROR,console -Dmapred.audit.logger=ERROR,console ${HADOOP_TASKTRACKER_OPTS}"
#HADOOP_DATANODE_OPTS="-Dsecurity.audit.logger=ERROR,DRFAS ${HADOOP_DATANODE_OPTS}"

#export HADOOP_SECONDARYNAMENODE_OPTS="-Dsecurity.audit.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT ${HADOOP_SECONDARYNAMENODE_OPTS}"

# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
#export HADOOP_CLIENT_OPTS="-Xmx128m ${HADOOP_CLIENT_OPTS}"
#HADOOP_JAVA_PLATFORM_OPTS="-XX:-UsePerfData ${HADOOP_JAVA_PLATFORM_OPTS}"

# On secure datanodes, user to run the datanode as after dropping privileges
export HADOOP_SECURE_DN_USER=hdfs

# Where log files are stored.  $HADOOP_HOME/logs by default.
#export HADOOP_LOG_DIR=/var/log/hadoop-logs

# Where log files are stored in the secure data environment.
export HADOOP_SECURE_DN_LOG_DIR=$HADOOP_LOG_DIR

# The directory where pid files are stored. /tmp by default.
#export HADOOP_PID_DIR=/var/lib/hadoop/pid
export HADOOP_SECURE_DN_PID_DIR=$HADOOP_PID_DIR

# A string representing this instance of hadoop. $USER by default.
#export HADOOP_IDENT_STRING=$USER

12.4 slaves

The contents are as follows:

# Typically you choose one machine in the cluster to act as the NameNode
#   and one machine as to act as the ResourceManager, exclusively.
# The rest of the machines act as both a DataNode and NodeManager and
#   are referred to as slaves.
# List all slave hostnames or IP addresses in your conf/slaves file, one per line.
cent7-n1.pepstack.com
cent7-n2.pepstack.com
cent7-n3.pepstack.com
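
The cluster start scripts read this file and connect to each host over SSH, so it is worth confirming that every entry resolves and is reachable. A minimal sketch, assuming passwordless SSH is already set up for the user running it:

for h in $(grep -v '^#' /etc/hadoop/conf/slaves); do
    ssh -o ConnectTimeout=5 "$h" hostname || echo "unreachable: $h"
done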

12.5 hdfs-site.xml

<?xml version="1.0" ?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
All Parameters:
    http://www.cnblogs.com/hujunfei/p/3518924.html

Apache Hadoop doc:
    https://hadoop.apache.org/docs/r2.7.1/
    https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithNFS.html

Quorum Journal Manager HA:
    http://archive.cloudera.com/cdh5/cdh/5/hadoop/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html
    http://www.cnblogs.com/julyme/p/5196797.html

HDFS Tuning:
    http://www.cnblogs.com/zhq1007/p/5922282.html
-->
<configuration>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>true</value>
        <description>true is default</description>
    </property>

    <property>
        <name>dfs.permissions.superusergroup</name>
        <value>hdfs</value>
        <description></description>
    </property>

    <property>
        <name>dfs.support.append</name>
        <value>true</value>
        <description>Does HDFS allow appends to files?</description>
    </property>

    <!-- Quorum Journal Manager HA -->
    <property>
        <name>dfs.nameservices</name>
        <value>hacl</value>
        <description>unique identifiers for each NameNode in the nameservice.</description>
    </property>

    <property>
        <name>dfs.ha.namenodes.hacl</name>
        <value>n1,n2</value>
        <description>Configure with a list of comma-separated NameNode IDs.</description>
    </property>

    <property>
        <name>dfs.namenode.rpc-address.hacl.n1</name>
        <value>cent7-n1.pepstack.com:8020</value>
        <description>the fully-qualified RPC address for each NameNode to listen on.</description>
    </property>

    <property>
        <name>dfs.namenode.rpc-address.hacl.n2</name>
        <value>cent7-n2.pepstack.com:8020</value>
        <description>the fully-qualified RPC address for each NameNode to listen on.</description>
    </property>

    <property>
        <name>dfs.namenode.http-address.hacl.n1</name>
        <value>0.0.0.0:50070</value>
        <description>the fully-qualified HTTP address for each NameNode to listen on.</description>
    </property>

    <property>
        <name>dfs.namenode.http-address.hacl.n2</name>
        <value>0.0.0.0:50070</value>
        <description>the fully-qualified HTTP address for each NameNode to listen on.</description>
    </property>

    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://cent7-n1.pepstack.com:8485;cent7-n2.pepstack.com:8485;cent7-n3.pepstack.com:8485/hacl</value>
        <description>the URI which identifies the group of JNs where the NameNodes will write or read edits.</description>
    </property>

    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/data/hacl/dfs/jn</value>
        <description>the path where the JournalNode daemon will store its local state.</description>
    </property>

    <property>
        <name>dfs.client.failover.proxy.provider.hacl</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
        <description>the Java class that HDFS clients use to contact the Active NameNode.</description>
    </property>

    <!-- Automatic failover adds two new components to an HDFS deployment:
        - a ZooKeeper quorum;
        - the ZKFailoverController process (abbreviated as ZKFC).
        Configuring automatic failover:
    -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence(hdfs:22)</value>
        <description>a list of scripts or Java classes which will be used to fence the Active NameNode during a failover.</description>
    </property>

    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/var/lib/hadoop-hdfs/.ssh/id_rsa</value>
        <description>The sshfence option SSHes to the target node and uses fuser to kill the process
          listening on the service's TCP port. In order for this fencing option to work, it must be
          able to SSH to the target node without providing a passphrase. Thus, one must also configure the
          dfs.ha.fencing.ssh.private-key-files option, which is a comma-separated list of SSH private key files.
            private_key_files = '/var/lib/hadoop-hdfs/.ssh/id_rsa'
        </description>
    </property>

    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>

    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>

    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>

    <!-- Configurations for NameNode: -->
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/data/hacl/dfs/nn</value>
        <description>Path on the local filesystem where the NameNode stores the namespace and transactions logs persistently.</description>
    </property>

    <property>
        <name>dfs.blocksize</name>
        <value>268435456</value>
        <description>HDFS blocksize of 256MB for large file-systems.</description>
    </property>

    <property>
        <name>dfs.replication</name>
        <value>3</value>
        <description>3 copies of 1 data</description>
    </property>

    <property>
        <name>dfs.namenode.handler.count</name>
        <value>100</value>
        <description>More NameNode server threads to handle RPCs from large number of DataNodes. 30 default</description>
    </property>

    <property>
        <name>dfs.namenode.logging.level</name>
        <value>info</value>
        <description>The logging level for dfs namenode. Other values are "dir" (trace namespace mutations), "block" (trace block under/over replications and block creations/deletions), or "all".</description>
    </property>

    <property>
        <name>dfs.namenode.decommission.interval</name>
        <value>30</value>
        <description>Namenode periodicity in seconds to check if decommission is complete.</description>
    </property>

    <property>
        <name>dfs.namenode.decommission.nodes.per.interval</name>
        <value>5</value>
        <description>The number of nodes namenode checks if decommission is complete in each dfs.namenode.decommission.interval.</description>
    </property>

    <property>
        <name>dfs.namenode.replication.interval</name>
        <value>3</value>
        <description>The periodicity in seconds with which the namenode computes replication work for datanodes.</description>
    </property>

    <property>
        <name>dfs.namenode.accesstime.precision</name>
        <value>3600000</value>
        <description>The access time for HDFS file is precise upto this value. The default value is 1 hour. Setting a value of 0 disables access times for HDFS.</description>
    </property>

    <!-- Configurations for DataNode: -->
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/data/hacl/dfs/dn</value>
        <description>Comma separated list of paths on the local filesystem of a DataNode where it should store its blocks.</description>
    </property>

    <property>
        <name>dfs.datanode.handler.count</name>
        <value>40</value>
        <description>The number of server threads for the datanode. 3 default</description>
    </property>

    <property>
        <name>dfs.datanode.du.reserved</name>
        <value>34359738368</value>
        <description>Reserved space in bytes per volume. Always leave this much space free for non dfs use.</description>
    </property>

    <property>
        <!-- http://www.cnblogs.com/serendipity/archive/2011/08/23/2151031.html -->
        <name>dfs.datanode.failed.volumes.tolerated</name>
        <value>0</value>
        <description>The number of volumes that are allowed to fail before a datanode stops offering service.
          By default (0) any volume failure will cause a datanode to shutdown.
          !! CAUTION: This number must be LESS THAN the number of volumes configured in dfs.datanode.data.dir !!
        </description>
    </property>

    <property>
        <name>dfs.datanode.max.transfer.threads</name>
        <value>8192</value>
        <description>dfs.datanode.max.xcievers</description>
    </property>

    <property>
        <name>dfs.datanode.max.xcievers</name>
        <value>8192</value>
        <description>max file handles opened by datanode. 256 default</description>
    </property>

    <property>
        <name>dfs.datanode.hdfs-blocks-metadata.enabled</name>
        <value>true</value>
        <description></description>
    </property>

    <!-- Improve Performance for Local Reads -->
    <property>
        <name>dfs.client.read.shortcircuit</name>
        <value>true</value>
        <description></description>
    </property>

    <property>
        <name>dfs.domain.socket.path</name>
        <value>/var/run/hadoop-hdfs/dn._PORT</value>
        <description></description>
    </property>

    <property>
        <name>dfs.client.read.shortcircuit.buffer.size</name>
        <value>8192</value>
    </property>

    <property>
        <name>dfs.client.read.shortcircuit.streams.cache.size</name>
        <value>8192</value>
        <description></description>
    </property>

    <property>
        <name>dfs.client.read.shortcircuit.streams.cache.size.expiry.ms</name>
        <value>1000</value>
        <description></description>
    </property>

    <property>
        <name>dfs.client.file-block-storage-locations.timeout.millis</name>
        <value>30000</value>
        <description></description>
    </property>

    <property>
        <name>hadoop.proxyuser.hdfs.groups</name>
        <value>*</value>
        <description>Set this to '*' to allow the gateway user to proxy any group.</description>
    </property>

    <property>
        <name>hadoop.proxyuser.hdfs.hosts</name>
        <value>*</value>
        <description>Set this to '*' to allow requests from any hosts to be proxied.</description>
    </property>

</configuration>
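
After distributing this file, the nameservice and NameNode list can be sanity-checked from any node (again, only the local configuration is read):

hdfs getconf -namenodes
hdfs getconf -confKey dfs.ha.namenodes.hacl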

12.6 mapred-site.xml

<?xml version="1.0" ?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Quorum Journal Manager HA:
  http://archive.cloudera.com/cdh5/cdh/5/hadoop/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html
  https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/ClusterSetup.html#Installation

  WebUI:
    NameNode                       http://nn_host:port/     Default HTTP port is 50070.
        http://cent7-n1.pepstack.com:50070
        http://cent7-n2.pepstack.com:50070

    ResourceManager                http://rm_host:port/     Default HTTP port is 8088.
        http://cent7-n1.pepstack.com:8088
        (http://cent7-n2.pepstack.com:8088)
        
    MapReduce JobHistory Server    http://jhs_host:port/    Default HTTP port is 19888.
        http://cent7-n1.pepstack.com:19888
-->
<configuration>
    <!-- Configurations for MapReduce Applications -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
        <description>Execution framework set to Hadoop YARN.</description>
    </property>

    <property>
        <name>mapreduce.map.memory.mb</name>
        <value>1536</value>
        <description>Larger resource limit for maps.</description>
    </property>

    <property>
        <name>mapreduce.map.java.opts</name>
        <value>-Xmx1024M</value>
        <description>Larger heap-size for child jvms of maps.</description>
    </property>

    <property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>3072</value>
        <description>Larger resource limit for reduces.</description>
    </property>

    <property>
        <name>mapreduce.reduce.java.opts</name>
        <value>-Xmx2560M</value>
        <description>Larger heap-size for child jvms of reduces.</description>
    </property>

    <property>
        <name>mapreduce.task.io.sort.mb</name>
        <value>1024</value>
        <description>Higher memory-limit while sorting data for efficiency.</description>
    </property>

    <property>
        <name>mapreduce.task.io.sort.factor</name>
        <value>128</value>
        <description>More streams merged at once while sorting files.</description>
    </property>

    <property>
        <name>mapreduce.reduce.shuffle.parallelcopies</name>
        <value>64</value>
        <description>Higher number of parallel copies run by reduces to fetch outputs from very large number of maps.</description>
    </property>

    <!-- Configurations for MapReduce JobHistory Server: -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>cent7-n1.pepstack.com:10020</value>
        <description>MapReduce JobHistory Server host:port. Default port is 10020.</description>
    </property>

    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>0.0.0.0:19888</value>
        <description>MapReduce JobHistory Server Web UI host:port. Default port is 19888.</description>
    </property>

    <property>
        <name>mapreduce.jobhistory.intermediate-done-dir</name>
        <value>/mr-history/tmp</value>
        <description>HDFS Directory where history files are written by MapReduce jobs.</description>
    </property>

    <property>
        <name>mapreduce.jobhistory.done-dir</name>
        <value>/mr-history/done</value>
        <description>HDFS Directory where history files are managed by the MR JobHistory Server.</description>
    </property>

    <!-- see hive-site.xml::hive.mapred.reduce.tasks.speculative.execution -->
    <property>
        <name>mapreduce.map.speculative</name>
        <value>false</value>
    </property>

    <property>
        <name>mapreduce.reduce.speculative</name>
        <value>false</value>
        <description>Whether speculative execution for reducers should be turned on. require restart hadoop-yarn-resourcemanager.</description>
    </property>

    <!-- for test:
      http://blog.csdn.net/chndata/article/details/46003399
      http://www.cnblogs.com/ryanyang/articles/2955619.html
      http://blog.csdn.net/knowledgeaaa/article/details/12502373
      http://blog.csdn.net/bruce_wang_janet/article/details/7281031
    -->
    <property>
        <name>mapred.job.tracker</name>
        <value>cent7-n1.pepstack.com:9001</value>
    </property>

    <property>
        <name>mapred.job.tracker.http.address</name>
        <value>0.0.0.0:50030</value>
    </property>

    <property>
        <name>mapred.job.tracker.handler.count</name>
        <value>20</value>
    </property>

    <property>
        <name>mapred.system.dir</name>
        <value>${hadoop.tmp.dir}/mapred/system</value>
        <description>?</description>
    </property>

    <property>
        <name>mapred.tasktracker.map.tasks.maximum</name>
        <value>10</value>
        <description>CPUs</description>
    </property>

    <property>
        <name>mapred.tasktracker.reduce.tasks.maximum</name>
        <value>5</value>
        <description>0.5 x CPUs</description>
    </property>

    <property>
        <name>mapred.map.tasks</name>
        <value>8</value>
        <description>0.75 x CPUs</description>
    </property>

    <property>
        <name>mapred.reduce.tasks</name>
        <value>9</value>
        <description>0.95 x reduce.tasks.maximum</description>
    </property>

    <property>
        <name>mapred.child.java.opts</name>
        <value>-Xmx600m</value>
        <description>default is -Xmx200m. Roughly MEM / CPUs per child JVM.</description>
    </property>

    <property>
        <name>io.sort.mb</name>
        <value>500</value>
        <description>default 100. must be less than mapred.child.java.opts</description>
    </property>

    <property>
        <name>io.sort.factor</name>
        <value>100</value>
        <description>10 default</description>
    </property>

    <property>
        <name>mapred.reduce.parallel.copies</name>
        <value>20</value>
        <description>10 default</description>
    </property>

    <property>
        <name>mapred.map.child.java.opts</name>
        <value>-Xmx512M</value>
        <description>Larger heap-size for child jvms of maps.</description>
    </property>

    <property>
        <name>mapred.reduce.child.java.opts</name>
        <value>-Xmx512M</value>
        <description>Larger heap-size for child jvms of reduces.</description>
    </property>
</configuration>
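
Once the whole cluster is started (sections 13 and 14), a small example job exercises these MapReduce settings end to end. A hedged example; the examples jar path below is the usual CDH package location and may differ on your installation:

su - hdfs -c "hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 4 100"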

12.7 yarn-site.xml

<?xml version="1.0" ?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Quorum Journal Manager HA:
  http://archive.cloudera.com/cdh5/cdh/5/hadoop/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html
  https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/ClusterSetup.html#Installation
-->
<configuration>

    <!-- Configurations for ResourceManager and NodeManager -->
    <property>
        <name>yarn.acl.enable</name>
        <value>false</value>
    </property>

    <property>
        <name>yarn.admin.acl</name>
        <value>false</value>
    </property>

    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>

    <!-- Configurations for ResourceManager:
        http://dongxicheng.org/mapreduce-nextgen/hadoop-yarn-configurations-resourcemanager-nodemanager
        http://debugo.com/yarn-rm-ha/
    -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>

    <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
    </property>

    <property>
        <name>yarn.resourcemanager.store.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
    </property>

    <property>
        <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
        <value>true</value>
        <description>Enable automatic failover; By default, it is enabled only when HA is enabled.</description>
    </property>

    <!--
    <property>
        <name>yarn.resourcemanager.ha.automatic-failover.zk-base-path</name>
        <value>/yarn-leader-election</value>
        <description>Optional setting. The default value is yarn-leader-election</description>
    </property>

    <property>
        <name>yarn.client.failover-proxy-provider</name>
        <value>org.apache.hadoop.yarn.client.RMFailoverProxyProvider</value>
    </property>
    -->

    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>hacl</value>
    </property>

    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>n1,n2</value>
    </property>

    <property>
        <name>yarn.resourcemanager.hostname.n1</name>
        <value>cent7-n1.pepstack.com</value>
    </property>

    <property>
        <name>yarn.resourcemanager.hostname.n2</name>
        <value>cent7-n2.pepstack.com</value>
    </property>

    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>cent7-n1.pepstack.com:2181,cent7-n2.pepstack.com:2181,cent7-n3.pepstack.com:2181</value>
    </property>

    <property>
        <name>yarn.resourcemanager.address.n1</name>
        <value>cent7-n1.pepstack.com:8032</value>
    </property>

    <property>
        <name>yarn.resourcemanager.address.n2</name>
        <value>cent7-n2.pepstack.com:8032</value>
    </property>

    <property>
        <name>yarn.resourcemanager.scheduler.address.n1</name>
        <value>cent7-n1.pepstack.com:8030</value>
    </property>

    <property>
        <name>yarn.resourcemanager.scheduler.address.n2</name>
        <value>cent7-n2.pepstack.com:8030</value>
    </property>

    <property>
        <name>yarn.resourcemanager.resource-tracker.address.n1</name>
        <value>cent7-n1.pepstack.com:8031</value>
    </property>

    <property>
        <name>yarn.resourcemanager.resource-tracker.address.n2</name>
        <value>cent7-n2.pepstack.com:8031</value>
    </property>

    <property>
        <name>yarn.resourcemanager.admin.address.n1</name>
        <value>cent7-n1.pepstack.com:8033</value>
    </property>

    <property>
        <name>yarn.resourcemanager.admin.address.n2</name>
        <value>cent7-n2.pepstack.com:8033</value>
    </property>

    <property>
        <name>yarn.resourcemanager.webapp.address.n1</name>
        <value>cent7-n1.pepstack.com:8088</value>
    </property>

    <property>
        <name>yarn.resourcemanager.webapp.address.n2</name>
        <value>cent7-n2.pepstack.com:8088</value>
    </property>

    <property>
        <name>yarn.resourcemanager.scheduler.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
        <description>ResourceManager Scheduler class.
            CapacityScheduler (recommended),
            FairScheduler (also recommended),
            or FifoScheduler</description>
    </property>

    <property>
        <name>yarn.resourcemanager.resource-tracker.client.thread-count</name>
        <value>50</value>
        <description></description>
    </property>

    <property>
        <name>yarn.resourcemanager.scheduler.client.thread-count</name>
        <value>50</value>
        <description></description>
    </property>

    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>1024</value>
        <description>Minimum limit of memory to allocate to each container request at the Resource Manager. In MBs</description>
    </property>

    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>16384</value>
        <description>Maximum limit of memory to allocate to each container request at the Resource Manager. In MBs</description>
    </property>

    <property>
        <name>yarn.scheduler.minimum-allocation-vcores</name>
        <value>1</value>
    </property>

    <property>
        <name>yarn.scheduler.maximum-allocation-vcores</name>
        <value>32</value>
    </property>

    <property>
        <name>yarn.resourcemanager.nodemanagers.heartbeat-interval-ms</name>
        <value>1000</value>
    </property>

    <!-- refresh can work:
    <property>
        <name>yarn.resourcemanager.nodes.include-path</name>
        <value></value>
        <description>List of permitted NodeManagers.
            If necessary, use these files to control the list of allowable NodeManagers.</description>
    </property>

    <property>
        <name>yarn.resourcemanager.nodes.exclude-path</name>
        <value></value>
        <description>List of permitted/excluded NodeManagers.
            If necessary, use these files to control the list of allowable NodeManagers.</description>
    </property>
    -->

    <!-- Configurations for NodeManager -->
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>8192</value>
        <description>Resource i.e. available physical memory, in MB, for given NodeManager.
            Defines total available resources on the NodeManager to be made available to running containers
        </description>
    </property>

    <property>
        <name>yarn.nodemanager.vmem-pmem-ratio</name>
        <value>2.1</value>
        <description>
           Maximum ratio by which virtual memory usage of tasks may exceed physical memory.
           The virtual memory usage of each task may exceed its physical memory limit by this ratio.
           The total amount of virtual memory used by tasks on the NodeManager may exceed its physical
           memory usage by this ratio.
        </description>
    </property>

    <property>
        <name>yarn.nodemanager.resource.cpu-vcores</name>
        <value>8</value>
    </property>

    <property>
        <name>yarn.nodemanager.local-dirs</name>
        <value>file:///var/lib/hadoop-yarn/cache/${user.name}/nm-local-dir</value>
        <description>Comma-separated list of paths on the local filesystem where intermediate data is written.
            Multiple paths help spread disk IO.
        </description>
    </property>

    <property>
        <name>yarn.nodemanager.log-dirs</name>
        <value>file:///var/log/hadoop-yarn/containers</value>
        <description>Comma-separated list of paths on the local filesystem where logs are written.
        Multiple paths help spread disk IO.</description>
    </property>

    <property>
        <name>yarn.nodemanager.remote-app-log-dir</name>
        <value>/var/log/hadoop-yarn/apps</value>
        <description>HDFS directory where the application logs are moved on application completion.
        Need to set appropriate permissions. Only applicable if log-aggregation is enabled.</description>
    </property>

    <property>
        <name>yarn.nodemanager.log.retain-seconds</name>
        <value>10800</value>
        <description>Default time (in seconds) to retain log files on the NodeManager Only applicable if log-aggregation is disabled.</description>
    </property>

    <property>
        <name>yarn.nodemanager.remote-app-log-dir-suffix</name>
        <value>logs</value>
        <description>Suffix appended to the remote log dir. Logs will be aggregated to
           remote-app-log-dir Only applicable if log-aggregation is enabled.</description>
    </property>

    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
        <description></description>
    </property>

    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
        <description></description>
    </property>


    <!-- Configurations for History Server (Needs to be moved elsewhere) -->
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>-1</value>
        <description>How long to keep aggregation logs before deleting them. -1 disables.
            Be careful, set this too small and you will spam the name node.</description>
    </property>

    <property>
        <name>yarn.log-aggregation.retain-check-interval-seconds</name>
        <value>-1</value>
        <description>Time between checks for aggregated log retention.
            If set to 0 or a negative value then the value is computed as one-tenth of the aggregated log retention time.
            Be careful, set this too small and you will spam the name node.</description>
    </property>

    <property>
        <name>yarn.application.classpath</name>
        <value>
            $HADOOP_CONF_DIR,
            $HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,
            $HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,
            $HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,
            $HADOOP_YARN_HOME/*,$HADOOP_YARN_HOME/lib/*
        </value>
        <description>Classpath for typical applications.</description>
    </property>
</configuration>
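
With ResourceManager HA enabled above, the state of each ResourceManager can be checked once they are running; n1 and n2 are the IDs from yarn.resourcemanager.ha.rm-ids:

yarn rmadmin -getServiceState n1
yarn rmadmin -getServiceState n2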

12.8 yarn-env.sh

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# User for YARN daemons
export HADOOP_YARN_USER=${HADOOP_YARN_USER:-yarn}

# resolve links - $0 may be a softlink
export YARN_CONF_DIR="${YARN_CONF_DIR:-$HADOOP_YARN_HOME/conf}"

# some Java parameters
export JAVA_HOME=/usr/local/java/jdk1.8.0_152

if [ "$JAVA_HOME" != "" ]; then
    #echo "run java in $JAVA_HOME"
    JAVA_HOME=$JAVA_HOME
fi

if [ "$JAVA_HOME" = "" ]; then
    echo "Error: JAVA_HOME is not set."
    exit 1
fi

JAVA=$JAVA_HOME/bin/java
JAVA_HEAP_MAX=-Xmx1024m

# For setting YARN specific HEAP sizes please use this
# Parameter and set appropriately
# YARN_HEAPSIZE=1000

# check envvars which might override default args
if [ "$YARN_HEAPSIZE" != "" ]; then
    JAVA_HEAP_MAX="-Xmx""$YARN_HEAPSIZE""m"
fi

# Resource Manager specific parameters

# Specify the max Heapsize for the ResourceManager using a numerical value
# in the scale of MB. For example, to specify an jvm option of -Xmx1000m, set
# the value to 1000.
# This value will be overridden by an Xmx setting specified in either YARN_OPTS
# and/or YARN_RESOURCEMANAGER_OPTS.
# If not specified, the default value will be picked from either YARN_HEAPMAX
# or JAVA_HEAP_MAX with YARN_HEAPMAX as the preferred option of the two.
#export YARN_RESOURCEMANAGER_HEAPSIZE=1000

# Specify the max Heapsize for the timeline server using a numerical value
# in the scale of MB. For example, to specify an jvm option of -Xmx1000m, set
# the value to 1000.
# This value will be overridden by an Xmx setting specified in either YARN_OPTS
# and/or YARN_TIMELINESERVER_OPTS.
# If not specified, the default value will be picked from either YARN_HEAPMAX
# or JAVA_HEAP_MAX with YARN_HEAPMAX as the preferred option of the two.
#export YARN_TIMELINESERVER_HEAPSIZE=1000

# Specify the JVM options to be used when starting the ResourceManager.
# These options will be appended to the options specified as YARN_OPTS
# and therefore may override any similar flags set in YARN_OPTS
#export YARN_RESOURCEMANAGER_OPTS=

# Node Manager specific parameters

# Specify the max Heapsize for the NodeManager using a numerical value
# in the scale of MB. For example, to specify an jvm option of -Xmx1000m, set
# the value to 1000.
# This value will be overridden by an Xmx setting specified in either YARN_OPTS
# and/or YARN_NODEMANAGER_OPTS.
# If not specified, the default value will be picked from either YARN_HEAPMAX
# or JAVA_HEAP_MAX with YARN_HEAPMAX as the preferred option of the two.
#export YARN_NODEMANAGER_HEAPSIZE=1000

# Specify the JVM options to be used when starting the NodeManager.
# These options will be appended to the options specified as YARN_OPTS
# and therefore may override any similar flags set in YARN_OPTS
#export YARN_NODEMANAGER_OPTS=

# so that filenames w/ spaces are handled correctly in loops below
IFS=


# default log directory & file
if [ "$YARN_LOG_DIR" = "" ]; then
    YARN_LOG_DIR="$HADOOP_YARN_HOME/logs"
fi
if [ "$YARN_LOGFILE" = "" ]; then
    YARN_LOGFILE='yarn.log'
fi

# default policy file for service-level authorization
if [ "$YARN_POLICYFILE" = "" ]; then
    YARN_POLICYFILE="hadoop-policy.xml"
fi

# restore ordinary behaviour
unset IFS


YARN_OPTS="$YARN_OPTS -Dhadoop.log.dir=$YARN_LOG_DIR"
YARN_OPTS="$YARN_OPTS -Dyarn.log.dir=$YARN_LOG_DIR"
YARN_OPTS="$YARN_OPTS -Dhadoop.log.file=$YARN_LOGFILE"
YARN_OPTS="$YARN_OPTS -Dyarn.log.file=$YARN_LOGFILE"
YARN_OPTS="$YARN_OPTS -Dyarn.home.dir=$YARN_COMMON_HOME"
YARN_OPTS="$YARN_OPTS -Dyarn.id.str=$YARN_IDENT_STRING"
YARN_OPTS="$YARN_OPTS -Dhadoop.root.logger=${YARN_ROOT_LOGGER:-INFO,console}"
YARN_OPTS="$YARN_OPTS -Dyarn.root.logger=${YARN_ROOT_LOGGER:-INFO,console}"
if [ "x$JAVA_LIBRARY_PATH" != "x" ]; then
    YARN_OPTS="$YARN_OPTS -Djava.library.path=$JAVA_LIBRARY_PATH"
fi
YARN_OPTS="$YARN_OPTS -Dyarn.policy.file=$YARN_POLICYFILE"

12.9 capacity-scheduler.xml

This file must exist (the default configuration is kept unchanged).

<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>

  <property>
    <name>yarn.scheduler.capacity.maximum-applications</name>
    <value>10000</value>
    <description>
      Maximum number of applications that can be pending and running.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
    <value>0.1</value>
    <description>
      Maximum percent of resources in the cluster which can be used to run 
      application masters i.e. controls number of concurrent running
      applications.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.resource-calculator</name>
    <value>org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator</value>
    <description>
      The ResourceCalculator implementation to be used to compare 
      Resources in the scheduler.
      The default i.e. DefaultResourceCalculator only uses Memory while
      DominantResourceCalculator uses dominant-resource to compare 
      multi-dimensional resources such as Memory, CPU etc.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>default</value>
    <description>
      The queues at the this level (root is the root queue).
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.default.capacity</name>
    <value>100</value>
    <description>Default queue target capacity.</description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.default.user-limit-factor</name>
    <value>1</value>
    <description>
      Default queue user limit a percentage from 0.0 to 1.0.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
    <value>100</value>
    <description>
      The maximum capacity of the default queue. 
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.default.state</name>
    <value>RUNNING</value>
    <description>
      The state of the default queue. State can be one of RUNNING or STOPPED.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.default.acl_submit_applications</name>
    <value>*</value>
    <description>
      The ACL of who can submit jobs to the default queue.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.default.acl_administer_queue</name>
    <value>*</value>
    <description>
      The ACL of who can administer jobs on the default queue.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.node-locality-delay</name>
    <value>40</value>
    <description>
      Number of missed scheduling opportunities after which the CapacityScheduler 
      attempts to schedule rack-local containers. 
      Typically this should be set to number of nodes in the cluster, By default is setting 
      approximately number of nodes in one rack which is 40.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.queue-mappings</name>
    <value></value>
    <description>
      A list of mappings that will be used to assign jobs to queues
      The syntax for this list is [u|g]:[name]:[queue_name][,next mapping]*
      Typically this list will be used to map users to queues,
      for example, u:%user:%user maps all users to queues with the same name
      as the user.
    </description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.queue-mappings-override.enable</name>
    <value>false</value>
    <description>
      If a queue mapping is present, will it override the value specified
      by the user? This can be used by administrators to place jobs in queues
      that are different than the one specified by the user.
      The default is false.
    </description>
  </property>

</configuration>
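
If queue settings in this file are changed later, the running ResourceManager can reload them without a restart (this applies to queue properties; some scheduler-wide settings may still require an RM restart):

yarn rmadmin -refreshQueues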

13 ZKFC Configuration (hadoop-hdfs-zkfc)

a. On each NameNode machine (n1, n2), run:

# su - hdfs -c "mkdir -p /var/lib/hadoop-hdfs/.ssh"
# su - hdfs -c "chmod 755 /var/lib/hadoop-hdfs/.ssh"
# su - hdfs -c "ssh-keygen -t rsa -P '' -f /var/lib/hadoop-hdfs/.ssh/id_rsa"
# su - hdfs -c "echo 'StrictHostKeyChecking no' > /var/lib/hadoop-hdfs/.ssh/config"
# su - hdfs -c "cat /var/lib/hadoop-hdfs/.ssh/id_rsa.pub > /var/lib/hadoop-hdfs/.ssh/authorized_keys"
# su - hdfs -c "chmod 644 /var/lib/hadoop-hdfs/.ssh/authorized_keys"
# su - hdfs -c "chmod 644 /var/lib/hadoop-hdfs/.ssh/config"
# su - hdfs -c "cat /var/lib/hadoop-hdfs/.ssh/id_rsa.pub"


b. Then run the following on either NameNode:

# su - hdfs -c "hdfs zkfc -formatZK -force"

This creates the ZKFC znode in ZooKeeper.
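
To verify, you can list the HA root in ZooKeeper; a sketch using the zookeeper-client from the CDH zookeeper package (the child znode is named after the hacl nameservice defined in hdfs-site.xml):

# zookeeper-client -server cent7-n1.pepstack.com:2181 ls /hadoop-ha

The output should include [hacl].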

c. Format the NameNode:

Before formatting the cluster, first make sure the zookeeper-server ensemble is already running.

Then start the journalnode on every node, stop zkfc and datanode on every node, and stop all namenodes. Then format using the following procedure:

# service hadoop-hdfs-journalnode start

On either NameNode (e.g. n1):

# su - hdfs -c "hdfs namenode -format -clusterid hacl -force -nonInteractive"
# service hadoop-hdfs-namenode start

hadoop-hdfs-namenode on n1 must be running before the next step can proceed. Then log in to the other NameNode (n2) and run:

# su - hdfs -c "hdfs namenode -bootstrapStandby -force -nonInteractive"
# service hadoop-hdfs-namenode start

d. On all DataNode machines (n1, n2, n3), start the datanode:

# service hadoop-hdfs-datanode start

e. On all ZKFC nodes (n1, n2), start zkfc:

# service hadoop-hdfs-zkfc start
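
Once zkfc is running on both NameNode machines, one NameNode should be elected active and the other standby. This can be verified as the hdfs user:

# su - hdfs -c "hdfs haadmin -getServiceState n1"
# su - hdfs -c "hdfs haadmin -getServiceState n2"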

f. Create the mr-history directories:

After all JournalNodes, NameNodes and DataNodes have started correctly (see mapred-site.xml):

# su - hdfs -c "hdfs dfs -mkdir -p /mr-history/tmp /mr-history/done"
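
The MapReduce JobHistory Server and running jobs need to write into these directories; a hedged sketch of typical ownership and permissions (the mapred:hadoop owner and the modes below are assumptions, adjust them to your deployment):

# su - hdfs -c "hdfs dfs -chown -R mapred:hadoop /mr-history"
# su - hdfs -c "hdfs dfs -chmod 1777 /mr-history/tmp"
# su - hdfs -c "hdfs dfs -chmod 750 /mr-history/done"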

14 Hadoop + YARN + MapReduce Installation and Configuration Complete

Start the services on all nodes.
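
A couple of quick checks after everything is up: the HDFS report should list all three DataNodes, and YARN should list all NodeManagers:

# su - hdfs -c "hdfs dfsadmin -report"
# yarn node -list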

Reposted from blog.csdn.net/cheungmine/article/details/78805534