hadoop-2.9.2 HA Cluster Setup

First, make sure every machine has the JDK installed. The machine roles are laid out below.

Application \ hostname | zk1 | zk2 | zk3 | namenode1 | namenode2 | datanode1 | datanode2 | datanode3
-----------------------|-----|-----|-----|-----------|-----------|-----------|-----------|----------
zookeeper              |  Y  |  Y  |  Y  |           |           |           |           |
namenode               |     |     |     |     Y     |     Y     |           |           |
datanode               |     |     |     |           |           |     Y     |     Y     |     Y
journalnode            |     |     |     |           |           |     Y     |     Y     |     Y
zkfc                   |     |     |     |     Y     |     Y     |           |           |
resourcemanager        |     |     |     |     Y     |     Y     |           |           |
nodemanager            |     |     |     |           |           |     Y     |     Y     |     Y

 

1. Three ZooKeeper machines. These were already set up in an earlier post on this blog; I simply cloned a few machines and changed their IPs, which does not affect anything.

Start all three ZooKeeper instances.

Then check the status on each node; if every node reports the expected state (one leader, two followers), ZooKeeper is ready.
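A minimal sketch (run on each of zk1, zk2, zk3, from the ZooKeeper bin directory):

./zkServer.sh start
./zkServer.sh status   # expect "leader" on one node and "follower" on the other two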

2. Prepare the two NameNode machines, namenode1 and namenode2.

My Hadoop installation lives at /hadoop-2.9.2.

Enter the Hadoop configuration directory: cd /hadoop-2.9.2/etc/hadoop

 

1). vi hdfs-site.xml

<property>
<name>dfs.nameservices</name>
<value>laolong</value>
</property>
<property>
<name>dfs.ha.namenodes.laolong</name>
<value>nn1,nn2</value>
</property>
<property>
<name>dfs.namenode.rpc-address.laolong.nn1</name>
<value>namenode1:8020</value> <!-- RPC addresses of the two NameNodes -->
</property>
<property>
<name>dfs.namenode.rpc-address.laolong.nn2</name>
<value>namenode2:8020</value> 
</property>
<property>
<name>dfs.namenode.http-address.laolong.nn1</name>
<value>namenode1:50070</value> 
</property>
<property>
<name>dfs.namenode.http-address.laolong.nn2</name>
<value>namenode2:50070</value> 
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://datanode1:8485;datanode2:8485;datanode3:8485/abc</value>
</property>

<property>
<name>dfs.client.failover.proxy.provider.laolong</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/root/.ssh/id_rsa</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/opt/journalnode</value>
</property>
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
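One thing worth calling out: the sshfence method only works if each NameNode can SSH to the other without a password, using the private key configured above. A minimal sketch, assuming the cluster runs as root (which the key path /root/.ssh/id_rsa suggests):

ssh-keygen -t rsa            # on namenode1, accept the defaults if no key exists yet
ssh-copy-id root@namenode2   # lets namenode1 fence namenode2
# repeat in the other direction on namenode2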

2). vi core-site.xml

<property>
<name>fs.defaultFS</name>
<value>hdfs://laolong</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/opt/hadoop-2.9</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>zk1:2181,zk2:2181,zk3:2181</value>
</property>

3). vi slaves
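Based on the role table above, the slaves file simply lists the three DataNode hosts:

datanode1
datanode2
datanode3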

 

4). vi yarn-site.xml

<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>lyhadoop</value>
</property>
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>namenode1</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>namenode2</value>
</property>
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>zk1:2181,zk2:2181,zk3:2181</value>
</property>

5). vi hadoop-env.sh
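The usual edit here is to hard-code JAVA_HOME so the daemons can find the JDK no matter how they are launched (the path below is a placeholder; substitute your own JDK location):

export JAVA_HOME=/path/to/your/jdk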

 

6). Start the three JournalNodes (on datanode1, datanode2, and datanode3): ./hadoop-daemon.sh start journalnode
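If passwordless SSH from a NameNode to the DataNodes is already in place, a small loop saves logging in to each host; a sketch, assuming Hadoop sits at /hadoop-2.9.2 on every node:

for h in datanode1 datanode2 datanode3; do
  ssh $h "/hadoop-2.9.2/sbin/hadoop-daemon.sh start journalnode"
done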

7). On one of the NameNodes, format HDFS: hdfs namenode -format

8). Copy the formatted metadata to the other NameNode; scp works for this.
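Since hadoop.tmp.dir is /opt/hadoop-2.9, the copy looks roughly like this, run on the NameNode that was just formatted:

scp -r /opt/hadoop-2.9 namenode2:/opt/

(Running hdfs namenode -bootstrapStandby on the other NameNode achieves the same thing.)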

9). On one of the NameNodes, initialize the failover state in ZooKeeper: hdfs zkfc -formatZK
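This creates a znode for the nameservice in ZooKeeper, which can be checked from the ZooKeeper CLI on any zk host:

./zkCli.sh
ls /hadoop-ha   # should list the nameservice: [laolong]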

10). Start the cluster: start-dfs.sh (stop it again with stop-dfs.sh).
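With automatic failover configured, start-dfs.sh brings up the NameNodes, DataNodes, JournalNodes, and ZKFCs together. A quick sanity check against the role table:

jps   # namenode1/2: NameNode, DFSZKFailoverController; datanode1-3: DataNode, JournalNode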

After starting, open the NameNode web UI in a browser: http://namenode1:50070.

Then visit the other NameNode at http://namenode2:50070; one should report active and the other standby.

 

Now a test: if the active NameNode is killed, will it switch over automatically?
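One way to run the test, on whichever host holds the active NameNode (the pid is a placeholder taken from the jps output):

jps                      # find the NameNode process id
kill -9 <namenode-pid>   # simulate a crash of the active NameNode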

After the kill, namenode1 can no longer be reached.

 

What about namenode2? It has taken over as the active NameNode.

This demonstrates automatic NameNode failover.
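The switch can also be confirmed from the command line, using the IDs defined in dfs.ha.namenodes.laolong:

hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2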

11). start-yarn.sh
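One caveat: in Hadoop 2.x, start-yarn.sh starts a ResourceManager only on the host where it is run (plus the NodeManagers on the slaves), so the standby ResourceManager usually has to be started by hand:

# on namenode2, if start-yarn.sh was run on namenode1
./yarn-daemon.sh start resourcemanager

yarn rmadmin -getServiceState rm1 (or rm2) then reports which ResourceManager is active.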

After starting, jps should show ResourceManager on the NameNode hosts and NodeManager on the DataNode hosts.

This completes the HA cluster setup. I recorded this mainly for my own learning; there is plenty I don't understand yet, so corrections and pointers are welcome.

 


Origin www.cnblogs.com/longyao/p/11430280.html