[Big Data] Distributed Cluster Deployment

1. Cluster planning and role placement

Node        NN         SNN                 DN         RM               NM
hadoop01    NameNode   -                   DataNode   -                NodeManager
hadoop02    -          SecondaryNameNode   DataNode   ResourceManager  NodeManager
hadoop03    -          -                   DataNode   -                NodeManager
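
All three hostnames must resolve on every node; a minimal /etc/hosts sketch (the IP addresses below are placeholders, substitute your own network):

192.168.1.101   hadoop01
192.168.1.102   hadoop02
192.168.1.103   hadoop03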

2. Following the single-machine deployment, copy the installation directory to the same path on every node, then create a soft link with ln -s (a sketch follows).
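
A sketch of step 2, reusing the install path from step 6; the link name "hadoop" is an assumption:

# copy the single-machine install to each node at the same path
scp -r /home/hadoop/Soft/hadoop-2.7.6 hadoop@hadoop02:/home/hadoop/Soft/
scp -r /home/hadoop/Soft/hadoop-2.7.6 hadoop@hadoop03:/home/hadoop/Soft/
# on each node, create a soft link so scripts can use a version-free path
ln -s /home/hadoop/Soft/hadoop-2.7.6 /home/hadoop/Soft/hadoop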

 

3. Edit the configuration files and the *.sh startup scripts according to the cluster plan above.

 

slaves: lists the hostnames of the worker nodes.
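
Per the plan in step 1, every node runs a DataNode and NodeManager, so etc/hadoop/slaves contains all three hostnames:

hadoop01
hadoop02
hadoop03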

*.sh: set JAVA_HOME (hadoop-env.sh and the other env scripts).
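
For example, in etc/hadoop/hadoop-env.sh; the JDK path below is a placeholder, use the path on your machines:

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64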

yarn-site.xml 

<configuration>
        <!-- Site specific YARN configuration properties -->
        <!-- How the NodeManager serves intermediate data: MapReduce shuffle -->
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
        <!-- Hostname of the YARN master (ResourceManager) -->
        <property>
                <name>yarn.resourcemanager.hostname</name>
                <value>hadoop02</value>
        </property>
</configuration>

hdfs-site.xml 

<configuration>
        <!-- Number of copies HDFS keeps of each block -->
        <property>
                <name>dfs.replication</name>
                <value>3</value>
        </property>
        <property>
                <name>dfs.permissions</name>
                <value>false</value>
                <description>
                        If "true", enable permission checking in HDFS.
                        If "false", permission checking is turned off,
                        but all other behavior is unchanged.
                        Switching from one parameter value to the other does not change the mode,
                        owner or group of files or directories.
                </description>
        </property>
        <!-- HTTP address (host:port) of the SecondaryNameNode -->
        <property>
                <name>dfs.namenode.secondary.http-address</name>
                <value>hadoop02:50090</value>
        </property>
</configuration>

mapred-site.xml (in Hadoop 2.7.x, copy it from mapred-site.xml.template if it does not exist yet)

<configuration>
        <!-- Run MapReduce jobs on YARN -->
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>
</configuration>

core-site.xml

<configuration>
        <!-- URI of the HDFS master (NameNode) -->
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://hadoop01:9000</value>
        </property>
        <!-- Base directory for files Hadoop generates at runtime -->
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/hadoop/tmp</value>
        </property>
</configuration>

4. Because this cluster is an upgrade of the single-machine setup, delete the old hadoop.tmp.dir contents on every node and re-grant permissions as root with chmod -R 777 /hadoop (a sketch follows).
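
A sketch of step 4, run as root on every node; /hadoop/tmp matches hadoop.tmp.dir in core-site.xml:

rm -rf /hadoop/tmp
mkdir -p /hadoop/tmp
chmod -R 777 /hadoop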

5. Re-format HDFS (on hadoop01): hdfs namenode -format

6. Distribute the configuration: scp -r /home/hadoop/Soft/hadoop-2.7.6/etc/hadoop hadoop@hadoop03:/home/hadoop/Soft/hadoop-2.7.6/etc/ (repeat for every other node; see the sketch below).
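
The configuration must be identical on all nodes; a sketch that also covers hadoop02, with the same paths as the command above:

scp -r /home/hadoop/Soft/hadoop-2.7.6/etc/hadoop hadoop@hadoop02:/home/hadoop/Soft/hadoop-2.7.6/etc/
scp -r /home/hadoop/Soft/hadoop-2.7.6/etc/hadoop hadoop@hadoop03:/home/hadoop/Soft/hadoop-2.7.6/etc/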

7. On hadoop01, start HDFS: start-dfs.sh

8. On hadoop02, start YARN: start-yarn.sh (run it on the ResourceManager host configured in yarn-site.xml)

9. Check the running processes on each node with jps.
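
If everything started correctly, jps on each node should show roughly the following (PIDs omitted; derived from the plan in step 1):

hadoop01: NameNode, DataNode, NodeManager
hadoop02: SecondaryNameNode, DataNode, ResourceManager, NodeManager
hadoop03: DataNode, NodeManager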

