Building Hadoop in fully distributed mode

IP / hostname correspondence table

192.168.1.43     rjsoft-0001
192.168.1.99     rjsoft-0002
192.168.1.113    rjsoft-0003
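
For these hostnames to resolve, the same mappings normally go into /etc/hosts on every node (a minimal sketch; skip this if DNS already resolves the names):

[csg@rjsoft-0001 ~]$ sudo vi /etc/hosts
192.168.1.43    rjsoft-0001
192.168.1.99    rjsoft-0002
192.168.1.113   rjsoft-0003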

Configuration Table

        rjsoft-0001             rjsoft-0002                      rjsoft-0003
HDFS    NameNode, DataNode      DataNode                         SecondaryNameNode, DataNode
YARN    NodeManager             ResourceManager, NodeManager     NodeManager

1. Configure passwordless SSH login between the nodes
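
A minimal sketch of the passwordless setup, assuming the csg user exists on all three hosts; run this on each node so every machine can reach the others without a password:

[csg@rjsoft-0001 ~]$ ssh-keygen -t rsa        # accept the defaults at every prompt
[csg@rjsoft-0001 ~]$ ssh-copy-id rjsoft-0001
[csg@rjsoft-0001 ~]$ ssh-copy-id rjsoft-0002
[csg@rjsoft-0001 ~]$ ssh-copy-id rjsoft-0003

Repeat the same commands on rjsoft-0002 and rjsoft-0003.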

2. Modify the configuration files

(1) Core configuration

Configure core-site.xml

[csg@rjsoft-0001 hadoop]$ vi core-site.xml

Add the following to the configuration file:

<!-- Specify the HDFS NameNode address -->
<property>
        <name>fs.defaultFS</name>
        <value>hdfs://rjsoft-0001:9000</value>
</property>

<!-- Specify the storage directory for files generated while Hadoop is running -->
<property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/module/hadoop-2.7.2/data/tmp</value>
</property>

(2) HDFS configuration

  Configure hadoop-env.sh

[csg@rjsoft-0001 hadoop]$ vi hadoop-env.sh
export JAVA_HOME=/opt/module/jdk1.8.0_144

  Configure hdfs-site.xml

[csg@rjsoft-0001 hadoop]$ vi hdfs-site.xml
<property>
        <name>dfs.replication</name>
        <value>3</value>
</property>

<!-- Specify the host for the Hadoop secondary name node -->
<property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>rjsoft-0003:50090</value>
</property>

(3) YARN configuration

  Configure yarn-env.sh

[csg@rjsoft-0001 hadoop]$ vi yarn-env.sh
export JAVA_HOME=/opt/module/jdk1.8.0_144

  Configure yarn-site.xml

[csg@rjsoft-0001 hadoop]$ vi yarn-site.xml

  Add the following to the file:

<!-- How the Reducer obtains data -->
<property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
</property>

<!-- Specify the address of the YARN ResourceManager -->
<property>
        <name>yarn.resourcemanager.hostname</name>
        <value>rjsoft-0002</value>
</property>

(4) MapReduce configuration

  Configure mapred-env.sh

[csg@rjsoft-0001 hadoop]$ vi mapred-env.sh
export JAVA_HOME=/opt/module/jdk1.8.0_144

  Configure mapred-site.xml

[csg@rjsoft-0001 hadoop]$ cp mapred-site.xml.template mapred-site.xml

[csg@rjsoft-0001 hadoop]$ vi mapred-site.xml

Add the following to the file:

<!-- Specify that MapReduce runs on YARN -->
<property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
</property>
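
Before distributing the files, a quick sanity check that Hadoop actually picks up the edited values can be done with hdfs getconf (run from the Hadoop install directory; the printed values simply echo the configuration above):

[csg@rjsoft-0001 hadoop-2.7.2]$ bin/hdfs getconf -confKey fs.defaultFS
hdfs://rjsoft-0001:9000
[csg@rjsoft-0001 hadoop-2.7.2]$ bin/hdfs getconf -confKey dfs.replication
3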

3. Distribute the configured Hadoop directory to the rest of the cluster

[csg@rjsoft-0001 hadoop]$ xsync /opt/module/hadoop-2.7.2/
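
xsync is a custom distribution script rather than part of Hadoop. A minimal sketch of an equivalent rsync loop, assuming the hostnames above and the same install path on every node:

#!/bin/bash
# push the Hadoop directory to the other cluster nodes
for host in rjsoft-0002 rjsoft-0003; do
    rsync -av /opt/module/hadoop-2.7.2/ ${host}:/opt/module/hadoop-2.7.2/
done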

4. Check that the files were distributed

[csg@rjsoft-0002 hadoop]$ cat /opt/module/hadoop-2.7.2/etc/hadoop/core-site.xml
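
To compare the file on all three nodes in one pass, a simple checksum loop works (this assumes the passwordless SSH from step 1); identical checksums mean the distribution succeeded:

[csg@rjsoft-0001 hadoop]$ for host in rjsoft-0001 rjsoft-0002 rjsoft-0003; do ssh $host md5sum /opt/module/hadoop-2.7.2/etc/hadoop/core-site.xml; done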

Configure slaves

/opt/module/hadoop-2.7.2/etc/hadoop/slaves
[csg@rjsoft-0001 hadoop]$ vi slaves

Add the following to the file (this must be done on all three machines):

rjsoft-0001
rjsoft-0002
rjsoft-0003

Note: do not add trailing spaces at the end of any line, and do not leave empty lines in the file.
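
Stray whitespace is easy to miss by eye; cat -A makes it visible. Each line should end with a bare $ and nothing else:

[csg@rjsoft-0001 hadoop]$ cat -A slaves
rjsoft-0001$
rjsoft-0002$
rjsoft-0003$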

Starting the cluster

1. Single-node start

Single-node Hadoop start and stop.
Reference URL: https://www.cnblogs.com/xym4869/p/8821312.html

Enter the Hadoop directory:

Format the NameNode: bin/hdfs namenode -format (formatting is only needed before the first start)

Start the NameNode: sbin/hadoop-daemon.sh start namenode

Start the DataNode: sbin/hadoop-daemon.sh start datanode

Check whether the processes started: jps

Stop the NameNode: sbin/hadoop-daemon.sh stop namenode

Stop the DataNode: sbin/hadoop-daemon.sh stop datanode

Access the NameNode web UI: http://<NameNode IP>:50070 (Hadoop 2.x; Hadoop 3.x uses 9870)
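
After starting the NameNode and DataNode, jps output looks roughly like the following (the process IDs are illustrative only):

[csg@rjsoft-0001 hadoop-2.7.2]$ jps
13586 NameNode
13668 DataNode
13786 Jps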

YARN

Starting YARN and running MapReduce on a single node:
1) First make sure the NameNode and DataNode are started
2) Start the ResourceManager: sbin/yarn-daemon.sh start resourcemanager
3) Start the NodeManager: sbin/yarn-daemon.sh start nodemanager
4) Stop the ResourceManager: sbin/yarn-daemon.sh stop resourcemanager
5) Stop the NodeManager: sbin/yarn-daemon.sh stop nodemanager
6) View the YARN page in a browser: http://<IP address>:8088/cluster

2. Cluster start

Enter the sbin directory under the Hadoop directory.

(1) Start/stop HDFS as a whole:

              start-dfs.sh / stop-dfs.sh

(2) Start/stop YARN as a whole:

              start-yarn.sh / stop-yarn.sh
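
Because the ResourceManager is planned on rjsoft-0002, start-yarn.sh is run there, while start-dfs.sh is run on the NameNode host rjsoft-0001. A sketch of the full sequence, followed by the bundled pi example as an end-to-end test:

# on rjsoft-0001 (NameNode)
[csg@rjsoft-0001 hadoop-2.7.2]$ sbin/start-dfs.sh

# on rjsoft-0002 (ResourceManager)
[csg@rjsoft-0002 hadoop-2.7.2]$ sbin/start-yarn.sh

# run a test MapReduce job from any node
[csg@rjsoft-0001 hadoop-2.7.2]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar pi 10 100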

 

 

 
