This post builds on the previous one, Hadoop Installation and Configuration: http://liumangafei.iteye.com/blog/2303359
Modify core-site.xml:
<configuration>
    <!-- Default filesystem: the logical HA nameservice name, not a single host:port -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://mycluster</value>
    </property>
    <!-- Base directory for Hadoop's temporary files -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/hadoop/tmp/hadoop-2.6.4</value>
    </property>
    <!-- Read/write buffer size in bytes (128 KB) -->
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
</configuration>
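With fs.defaultFS pointing at the logical nameservice, clients no longer address a specific NameNode host; the failover proxy (configured in hdfs-site.xml below) resolves hdfs://mycluster to whichever NameNode is active. A quick check, assuming the cluster is already up:

    # both forms resolve to the currently active NameNode
    bin/hdfs dfs -ls hdfs://mycluster/
    bin/hdfs dfs -ls /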
Modify hdfs-site.xml:
<configuration>
    <!-- Local storage for NameNode metadata and DataNode blocks -->
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///usr/hadoop/tmp/hadoop-2.6.4/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///usr/hadoop/tmp/hadoop-2.6.4/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <!-- Logical name of the HA nameservice and its two NameNodes -->
    <property>
        <name>dfs.nameservices</name>
        <value>mycluster</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.mycluster</name>
        <value>nn1,nn2</value>
    </property>
    <!-- RPC and HTTP addresses for each NameNode -->
    <property>
        <name>dfs.namenode.rpc-address.mycluster.nn1</name>
        <value>hadoop1:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.mycluster.nn2</name>
        <value>hadoop2:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.nn1</name>
        <value>hadoop1:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.nn2</name>
        <value>hadoop2:50070</value>
    </property>
    <!-- JournalNode quorum that stores the shared edit log -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hadoop2:8485;hadoop3:8485;hadoop4:8485;hadoop5:8485;hadoop6:8485/mycluster</value>
    </property>
    <!-- Proxy class HDFS clients use to locate the active NameNode -->
    <property>
        <name>dfs.client.failover.proxy.provider.mycluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!-- Fence the old active NameNode over SSH during failover -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_rsa</value>
    </property>
    <!-- Local directory where each JournalNode keeps its edits -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/usr/hadoop/journalnode</value>
    </property>
    <!-- Automatic failover via ZKFC and this ZooKeeper ensemble -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>hadoop2:2181,hadoop3:2181,hadoop4:2181</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <!-- Disable HDFS permission checks (convenient for testing, not for production) -->
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
</configuration>
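Note that sshfence only works if each NameNode can SSH to the other without a password using the key configured above. A quick sanity check, assuming the root user implied by /root/.ssh/id_rsa:

    # run on hadoop1; should print ok without prompting for a password
    ssh -i /root/.ssh/id_rsa root@hadoop2 "echo ok"
    # then verify the reverse direction from hadoop2
    ssh -i /root/.ssh/id_rsa root@hadoop1 "echo ok"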
Create the journalnode directory referenced by dfs.journalnode.edits.dir on every JournalNode host, as shown below.
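A minimal sketch, assuming the /usr/hadoop/journalnode path from hdfs-site.xml and the five JournalNode hosts listed in dfs.namenode.shared.edits.dir:

    # run once from any node; creates the edits directory on each JournalNode host
    for host in hadoop2 hadoop3 hadoop4 hadoop5 hadoop6; do
        ssh root@$host "mkdir -p /usr/hadoop/journalnode"
    done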
Setup steps:
1. Clear out Hadoop's old state: delete the contents of the logs, name, data, and journalnode directories.
2. Run: bin/hdfs zkfc -formatZK // create the HA znode in ZooKeeper; the ZooKeeper ensemble must already be running
3. On every JournalNode host, run: sbin/hadoop-daemon.sh start journalnode // start all JournalNode daemons
4. On one NameNode (hadoop1), run: bin/hdfs namenode -format // format this NameNode; the JournalNodes must already be up
5. Copy the contents of dfs/name to the other NameNode // sync the formatted metadata to the standby; see the sketch after this list
6. Run: sbin/start-all.sh // start Hadoop (deprecated in 2.x in favor of sbin/start-dfs.sh plus sbin/start-yarn.sh, but still works)
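Putting steps 4 through 6 together, a hedged end-to-end sketch, assuming hadoop1 is the NameNode being formatted, hadoop2 is the standby, and the paths from the configs above:

    # on hadoop1: format the active NameNode (JournalNodes already running)
    bin/hdfs namenode -format

    # step 5: copy the metadata to the standby (hadoop2 assumed);
    # running bin/hdfs namenode -bootstrapStandby on hadoop2 is the built-in equivalent
    scp -r /usr/hadoop/tmp/hadoop-2.6.4/dfs/name root@hadoop2:/usr/hadoop/tmp/hadoop-2.6.4/dfs/

    # step 6: bring the cluster up, then check which NameNode is active
    sbin/start-all.sh
    bin/hdfs haadmin -getServiceState nn1   # prints "active" or "standby"
    bin/hdfs haadmin -getServiceState nn2

If one NameNode reports active and the other standby, automatic failover is wired up; killing the active NameNode process should promote the standby within a few seconds.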