Setting Up NameNode Federation

I. NameNode Federation

  • Receives client requests

  • Caches the metadata in memory (about 1000 MB)

  • Problems that Federation addresses:
    (1) Spreading the load across multiple NameNodes
    (2) Caching more metadata in total
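
A back-of-the-envelope for point (2). The 150-bytes-per-object figure is a common Hadoop rule of thumb, not from the original post; it only shows why a single NameNode's heap caps the namespace size:

```python
# Rough capacity estimate: each file/block object held in the NameNode
# heap costs on the order of 150 bytes (rule-of-thumb figure).
heap_bytes = 1000 * 1024 * 1024   # the ~1000 MB metadata cache above
bytes_per_object = 150

objects = heap_bytes // bytes_per_object
print(objects)  # ≈ 7 million metadata objects per NameNode
```

With two federated NameNodes, each namespace gets its own heap, so the total metadata capacity roughly doubles.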

    Setting up the NameNode Federation

    1. Plan the cluster
    NameNode: bigdata112, bigdata113
    DataNode: bigdata114, bigdata115

    2. Configure on bigdata112
    core-site.xml

				<property>
					<name>hadoop.tmp.dir</name>
					<value>/root/training/hadoop-2.7.3/tmp</value>
				</property>

hdfs-site.xml

				<property>
					<name>dfs.nameservices</name>
					<value>ns1,ns2</value>
				</property>
				
				<property>
					<name>dfs.namenode.rpc-address.ns1</name>
					<value>192.168.157.112:9000</value>
				</property>	

				<property>
					<name>dfs.namenode.http-address.ns1</name>
					<value>192.168.157.112:50070</value>
				</property>					

				<property>
					<name>dfs.namenode.secondary.http-address.ns1</name>
					<value>192.168.157.112:50090</value>
				</property>	

				<property>
					<name>dfs.namenode.rpc-address.ns2</name>
					<value>192.168.157.113:9000</value>
				</property>	

				<property>
					<name>dfs.namenode.http-address.ns2</name>
					<value>192.168.157.113:50070</value>
				</property>					

				<property>
					<name>dfs.namenode.secondary.http-address.ns2</name>
					<value>192.168.157.113:50090</value>
				</property>		

				<property>
					<name>dfs.replication</name>
					<value>2</value>
				</property>					
			
				<property>
					<name>dfs.permissions.enabled</name>
					<value>false</value>
				</property>	

mapred-site.xml

				<property>
					<name>mapreduce.framework.name</name>
					<value>yarn</value>
				</property>

yarn-site.xml

				<property>
					<name>yarn.resourcemanager.hostname</name>
					<value>192.168.157.112</value>
				</property>		

				<property>
					<name>yarn.nodemanager.aux-services</name>
					<value>mapreduce_shuffle</value>
				</property>	

slaves

			bigdata114
			bigdata115

Configure the routing rules (ViewFS)
Edit core-site.xml and add the following directly.
Note: xdl1 is the name of the federation.

				<property>
					<name>fs.viewfs.mounttable.xdl1.homedir</name>
					<value>/home</value>
				</property>

				<property>
					<name>fs.viewfs.mounttable.xdl1.link./movie</name>
					<value>hdfs://192.168.157.112:9000/movie</value>
				</property>

				<property>
					<name>fs.viewfs.mounttable.xdl1.link./mp3</name>
					<value>hdfs://192.168.157.113:9000/mp3</value>
				</property>

				<property>
					<name>fs.default.name</name>
					<value>viewfs://xdl1</value>
				</property>		

Note: if there are many routing rules, core-site.xml becomes hard to maintain.
You can keep the rules in a separate XML file, e.g. mountTable.xml, and include it from core-site.xml.
http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/ViewFs.html
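
As the note says, the mount table can live in its own file and be pulled in with XInclude, as described in the ViewFs guide linked above (the file name and split shown here are illustrative):

```xml
<!-- core-site.xml: keep only the include and cluster-wide settings here -->
<configuration xmlns:xi="http://www.w3.org/2001/XInclude">
	<xi:include href="mountTable.xml" />
</configuration>

<!-- mountTable.xml: all fs.viewfs.mounttable.xdl1.* routing rules move here -->
<configuration>
	<property>
		<name>fs.viewfs.mounttable.xdl1.link./movie</name>
		<value>hdfs://192.168.157.112:9000/movie</value>
	</property>
	<property>
		<name>fs.viewfs.mounttable.xdl1.link./mp3</name>
		<value>hdfs://192.168.157.113:9000/mp3</value>
	</property>
</configuration>
```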

3. Copy the configuration to the other nodes
scp -r hadoop-2.7.3/ root@bigdata113:/root/training
scp -r hadoop-2.7.3/ root@bigdata114:/root/training
scp -r hadoop-2.7.3/ root@bigdata115:/root/training

4. Format each NameNode (bigdata112 and bigdata113) separately, passing the same cluster ID so both join one federation
hdfs namenode -format -clusterId xdl1
5. Start the cluster (e.g. start-all.sh, or start-dfs.sh followed by start-yarn.sh)

6. Following the routing rules, create the corresponding directory on each NameNode
hadoop fs -mkdir hdfs://192.168.157.112:9000/movie
hadoop fs -mkdir hdfs://192.168.157.113:9000/mp3

7. Working with HDFS
[root@bigdata112 training]# hdfs dfs -ls /
Found 2 items
-r-xr-xr-x - root root 0 2018-10-05 01:11 /movie
-r-xr-xr-x - root root 0 2018-10-05 01:11 /mp3

Note: what you see here is the ViewFS view (the mount points), not a single NameNode's namespace.
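
Conceptually, ViewFS serves this listing by longest-prefix matching the client path against the mount table, then forwarding the remainder of the path to the backing NameNode. A minimal Python sketch of that resolution (illustrative only, not Hadoop code):

```python
# Mirrors the fs.viewfs.mounttable.xdl1.link.* entries configured above.
MOUNT_TABLE = {
    "/movie": "hdfs://192.168.157.112:9000/movie",
    "/mp3":   "hdfs://192.168.157.113:9000/mp3",
}

def resolve(path: str) -> str:
    """Map a viewfs:// client path to the backing NameNode URI."""
    # Longest mount-point prefix wins, so nested mounts would behave correctly.
    for mount in sorted(MOUNT_TABLE, key=len, reverse=True):
        if path == mount or path.startswith(mount + "/"):
            return MOUNT_TABLE[mount] + path[len(mount):]
    raise FileNotFoundError(f"no mount point covers {path}")

print(resolve("/movie/a.mp4"))   # routed to ns1 on bigdata112
print(resolve("/mp3/song.mp3"))  # routed to ns2 on bigdata113
```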

Reposted from blog.csdn.net/Pengxiaozhen1111/article/details/88095914