Fully Distributed Setup

Copy /opt/hadoop-3.1.1 to the corresponding directory on node2, node3, and node4 with scp:

scp -r hadoop-3.1.1/ node2:`pwd`
scp -r hadoop-3.1.1/ node3:`pwd`
scp -r hadoop-3.1.1/ node4:`pwd`

Copy the JDK rpm under /root/ to the corresponding directory on node2, node3, and node4 with scp:

scp jdk-8u172-linux-x64.rpm node2:`pwd`
scp jdk-8u172-linux-x64.rpm node3:`pwd`
scp jdk-8u172-linux-x64.rpm node4:`pwd`

Install the JDK on node2, node3, and node4 and configure the profile file:

rpm -ivh jdk-8u172-linux-x64.rpm

Configure the Java environment variables in /etc/profile.
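
On node1 this was already done in the earlier setup; for reference, the relevant lines usually look like the sketch below. The /usr/java/default path is only an assumption about where the Oracle JDK rpm installs (it normally creates that symlink), so check /usr/java/ after installation and adjust if needed.

# appended to /etc/profile (install path assumed; verify under /usr/java/)
export JAVA_HOME=/usr/java/default
export PATH=$PATH:$JAVA_HOME/bin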

Copy /etc/profile from node1 to node2, node3, and node4, then run . /etc/profile on each of them.
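
For example, from node1:

scp /etc/profile node2:/etc/profile
scp /etc/profile node3:/etc/profile
scp /etc/profile node4:/etc/profile
# then, on node2, node3, and node4, reload it in the current shell:
. /etc/profile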

On node1, edit workers in /opt/hadoop-3.1.1/etc/hadoop/:

node2
node3
node4

On node1, edit hdfs-site.xml in /opt/hadoop-3.1.1/etc/hadoop/:

<property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>node2:9868</value>
</property>
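
For reference, the full file with its configuration wrapper looks like this sketch; dfs.replication is not changed here, so the HDFS default of 3 replicas applies unless it was set in the earlier setup.

<configuration>
    <!-- run the SecondaryNameNode on node2; 9868 is the default HTTP port in Hadoop 3.x -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>node2:9868</value>
    </property>
    <!-- dfs.replication is left at the default (3) or at whatever was configured earlier -->
</configuration>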

On node1, edit core-site.xml in /opt/hadoop-3.1.1/etc/hadoop/:

<property>
        <name>hadoop.tmp.dir</name>
        <value>/var/bjsxt/hadoop/full</value>
</property>
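
Similarly, a minimal complete core-site.xml for this cluster might look like the sketch below. The fs.defaultFS value is an assumption carried over from the earlier single-node setup (NameNode on node1, Hadoop 3.x default RPC port 9820); adjust it to match what you actually configured there.

<configuration>
    <!-- assumed from the earlier setup: NameNode RPC address on node1 -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://node1:9820</value>
    </property>
    <!-- local storage directory used by HDFS for this fully distributed cluster -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/var/bjsxt/hadoop/full</value>
    </property>
</configuration>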

Share these four files, core-site.xml, hdfs-site.xml, workers, and hadoop-env.sh, across all four servers so that every node has identical copies.
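
For example, push them from node1 (assuming Hadoop is installed at the same path on every node):

cd /opt/hadoop-3.1.1/etc/hadoop
scp core-site.xml hdfs-site.xml workers hadoop-env.sh node2:`pwd`
scp core-site.xml hdfs-site.xml workers hadoop-env.sh node3:`pwd`
scp core-site.xml hdfs-site.xml workers hadoop-env.sh node4:`pwd`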

Format the NameNode.
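
Formatting initializes the NameNode metadata directory (under the hadoop.tmp.dir set above) and is done once, on node1 only:

hdfs namenode -format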

Start the cluster.
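
Start HDFS from node1 (use the full path /opt/hadoop-3.1.1/sbin/start-dfs.sh if the sbin directory is not on PATH) and check each node with jps; the daemon layout shown below follows the configuration above.

start-dfs.sh
jps
# expected processes:
# node1: NameNode
# node2: SecondaryNameNode, DataNode
# node3: DataNode
# node4: DataNode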

Reprinted from blog.csdn.net/qq_18532033/article/details/87743279