Deploying the Hadoop HDFS Distributed File System

Purpose of this lab: set up Hadoop's HDFS, grow and shrink HDFS capacity dynamically by adding and removing DataNode nodes, and carry out basic management of the HDFS file system.
Lab environment:
![](http://i2.51cto.com/images/blog/201805/22/06bebf6b0fd101e0343e5d3045cec8dc.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
Virtual machine environment:
![](http://i2.51cto.com/images/blog/201805/22/1c5a8616dc90bb134c9db5a61024b083.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
Lab steps:
1. Prepare the environment
1) Configure name resolution and hostnames on the master and slave1 to slave3 (a consolidated sketch follows the screenshots)
![](http://i2.51cto.com/images/blog/201805/22/55c007c29d3126714e0fdbd736091d6b.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
![](http://i2.51cto.com/images/blog/201805/22/bce7091e71c1f03c860628cb1160cb2d.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
![](http://i2.51cto.com/images/blog/201805/22/be0b94fecb8affe321bc8e09eac6df5f.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
![](http://i2.51cto.com/images/blog/201805/22/9cac555dd953a9c333e30b8401d8b76b.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
![](http://i2.51cto.com/images/blog/201805/22/b1a92b56fa2034ba0b407999f93fc9d1.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
On the host 192.168.0.11:
![](http://i2.51cto.com/images/blog/201805/22/72f442752e97833a6be24b65468f6b67.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
![](http://i2.51cto.com/images/blog/201805/22/7824adbf8865eb1027826dc2e216e34e.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
![](http://i2.51cto.com/images/blog/201805/22/e0ed72e71df3dd73ca1b6d12e00f8a3f.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
On the host 192.168.0.12:
![](http://i2.51cto.com/images/blog/201805/22/adb157b07428868100c6d8ecf309e2bc.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
![](http://i2.51cto.com/images/blog/201805/22/84167c2b2ededb320d83d6068699ad4f.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
![](http://i2.51cto.com/images/blog/201805/22/0a53a1fd12edb96b35a70d926de0782b.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
On the host 192.168.0.13:
![](http://i2.51cto.com/images/blog/201805/22/4a7bf9190141e5eaaa3ffa79d16333e1.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
![](http://i2.51cto.com/images/blog/201805/22/cb1eb41cac69ac3659e3a81f9b4e70ef.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
![](http://i2.51cto.com/images/blog/201805/22/00d0124e4561da144ca6a0a51310a78b.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
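The commands themselves are in the screenshots above; as a minimal consolidated sketch, the name resolution setup on each node might look like this. The IP-to-hostname mapping (master 192.168.0.10, slave1 192.168.0.11, slave2 192.168.0.12, slave3 192.168.0.13) is inferred from the host labels above and from the NameNode address used later, so treat it as an assumption.

```bash
# Set each node's own hostname, e.g. on the master (hostnames assumed):
hostnamectl set-hostname master

# Append the same resolution entries to /etc/hosts on all four nodes:
cat >> /etc/hosts <<'EOF'
192.168.0.10 master
192.168.0.11 slave1
192.168.0.12 slave2
192.168.0.13 slave3
EOF
```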
2) Install the JDK
![](http://i2.51cto.com/images/blog/201805/22/48ef8cda0f0f0a0c8e69a4e3c3c1c803.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
![](http://i2.51cto.com/images/blog/201805/22/80d8d33f1c4ae37f5b443550629f9d90.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
![](http://i2.51cto.com/images/blog/201805/22/85517effe47a65331556b8e4a7aea950.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
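As a hedged sketch of this step, installing a JDK and pointing JAVA_HOME at it could look like the following; the tarball name and the /usr/local/jdk1.8.0 install path are assumptions, so adjust them to whatever the screenshots actually use.

```bash
# Assumed JDK tarball and install path; match them to your screenshots.
tar xzf jdk-8u171-linux-x64.tar.gz -C /usr/local/
mv /usr/local/jdk1.8.0_171 /usr/local/jdk1.8.0

# Make JAVA_HOME available system-wide.
cat >> /etc/profile <<'EOF'
export JAVA_HOME=/usr/local/jdk1.8.0
export PATH=$PATH:$JAVA_HOME/bin
EOF
source /etc/profile
java -version   # verify the JDK is on the PATH
```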
3) Add a hadoop runtime user
![](http://i2.51cto.com/images/blog/201805/22/1ed5ea26442cb548e1113a678fa95d8e.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
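A minimal sketch of this step (the user name hadoop is what the rest of the lab relies on; the password prompt is interactive):

```bash
# Create the runtime user on the master and on every slave, then set its password.
useradd hadoop
passwd hadoop
```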
Likewise, repeat steps 2) and 3) on slave1 through slave3; the screenshots are omitted here.
2. Configure SSH key pairs
The master must be able to log in to each slave without a password, so that it can start the corresponding services.
To let the master connect to the slaves without a password, first switch to the hadoop user, then run ssh-keygen and press Enter at every prompt to generate a key pair with the default settings. Use ssh-copy-id to copy the public key to the three slave hosts; during the copy you are asked for each slave's hadoop user password. This is what allows the master to start the slaves' services remotely.
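A sketch of exactly those steps, run interactively on the master; the slave hostnames follow the naming used in this lab.

```bash
su - hadoop                   # switch to the hadoop user on the master
ssh-keygen -t rsa             # press Enter at every prompt to accept the defaults
# Copy the public key to each slave; you are asked for that slave's hadoop password once.
ssh-copy-id hadoop@slave1
ssh-copy-id hadoop@slave2
ssh-copy-id hadoop@slave3
ssh slave1 hostname           # should now log in without a password
```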
![](http://i2.51cto.com/images/blog/201805/22/1cf013ec48b6b663da9b36e93e62979f.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
![](http://i2.51cto.com/images/blog/201805/22/88bd6b5973738bbd319792bcc874750c.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
![](http://i2.51cto.com/images/blog/201805/22/a083b04afebfb5cbe11522e44d15a49e.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
![](http://i2.51cto.com/images/blog/201805/22/05c34d3ac12df66d43bcfe08fd46fe3a.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
3. Install Hadoop and configure environment variables on the master and the slaves
1) Install Hadoop
![](http://i2.51cto.com/images/blog/201805/22/0de3c50c3cdc8d2adeb79752a445b645.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
![](http://i2.51cto.com/images/blog/201805/22/071ce09a879ef0147b9df0dbf678eaf5.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
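A hedged sketch of unpacking Hadoop; the 2.7.6 version and the /usr/local/hadoop install path are assumptions made for illustration, so use whatever the screenshots show.

```bash
# Assumed tarball version and target directory.
tar xzf hadoop-2.7.6.tar.gz -C /usr/local/
mv /usr/local/hadoop-2.7.6 /usr/local/hadoop
chown -R hadoop:hadoop /usr/local/hadoop
```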
2) Configure the environment variables
![](http://i2.51cto.com/images/blog/201805/22/7d061529d8052cac984c4ebf7b5e7ece.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
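Assuming the install path above, the environment variables might be set roughly like this on the master and on every slave:

```bash
# Append to /etc/profile (or to the hadoop user's ~/.bash_profile).
cat >> /etc/profile <<'EOF'
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
EOF
source /etc/profile
hadoop version   # verify the hadoop command is found
```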
3) Configure Hadoop
![](http://i2.51cto.com/images/blog/201805/22/ebc5cff33afebd7d0b0f674561d74236.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
![](http://i2.51cto.com/images/blog/201805/22/95184e05ec56f051fec7f48d6adbc6f0.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
![](http://i2.51cto.com/images/blog/201805/22/1f487aba701f73c44d4fd8671725c533.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
![](http://i2.51cto.com/images/blog/201805/22/3cdd0d87711a2e393bc87096fd19d335.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
![](http://i2.51cto.com/images/blog/201805/22/586b46797914dfdd30fa1c5f5bce1cca.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
![](http://i2.51cto.com/images/blog/201805/22/f02b4dc9dd9ab51d873c74ca498141da.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
![](http://i2.51cto.com/images/blog/201805/22/4ecc54bf87937300dfd7335cbcdb5d94.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
![](http://i2.51cto.com/images/blog/201805/22/94df8c600133b3636b09321d1fb0a311.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
![](http://i2.51cto.com/images/blog/201805/22/29c34b21e813967435dd934200d422ad.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
![](http://i2.51cto.com/images/blog/201805/22/7049158b191ebdd2599a5cff6fcea61e.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
![](http://i2.51cto.com/images/blog/201805/22/df06550f61b1e3db09e04f986ebd520c.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
![](http://i2.51cto.com/images/blog/201805/22/98e6734949f7c5eb03f9da6834fd5ed2.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
![](http://i2.51cto.com/images/blog/201805/22/5f93fb7d7ecc9b752b97d37b0e75fb27.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
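The screenshots walk through the configuration files under $HADOOP_HOME/etc/hadoop; the six files referenced in the next paragraph are commonly hadoop-env.sh, core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml and slaves. As a hedged sketch of the HDFS-related pieces only, the values below (NameNode address, data directories, replication factor) are illustrative assumptions rather than a copy of the screenshots.

```bash
cd /usr/local/hadoop/etc/hadoop

# hadoop-env.sh: point Hadoop at the JDK installed earlier (path assumed).
sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/local/jdk1.8.0|' hadoop-env.sh

# core-site.xml: NameNode RPC address and temporary directory.
cat > core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/tmp</value>
  </property>
</configuration>
EOF

# hdfs-site.xml: metadata/data directories and the replication factor.
cat > hdfs-site.xml <<'EOF'
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/usr/local/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/usr/local/hadoop/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
EOF

# slaves: one DataNode hostname per line.
cat > slaves <<'EOF'
slave1
slave2
slave3
EOF
```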
Repeat steps 1) and 2) on every slave host (they also need Hadoop installed and the environment variables set). Once those two steps are done, copy the six configuration files edited above from the master to each slave over the SSH channel.
The screenshots for repeating steps 1) and 2) on each slave are omitted here; the commands are exactly the same as above.
![](http://i2.51cto.com/images/blog/201805/22/9fa93cf1c4ebdfb76157bc654f7dc61e.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
![](http://i2.51cto.com/images/blog/201805/22/f5c57f1739629d4ec4b8bafebc25331f.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
![](http://i2.51cto.com/images/blog/201805/22/a3946c1b21e3d0f677ab989671215cf3.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
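A sketch of that copy step, run on the master as the hadoop user (paths assumed as above):

```bash
# Push the edited configuration directory to each slave over SSH.
for node in slave1 slave2 slave3; do
  scp /usr/local/hadoop/etc/hadoop/* hadoop@$node:/usr/local/hadoop/etc/hadoop/
done
```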
4. Initialize HDFS on the master
1) Format the HDFS file system
![](http://i2.51cto.com/images/blog/201805/22/2e10c381f2d9403651f190b6f756b5f9.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
![](http://i2.51cto.com/images/blog/201805/22/71025526cfa69c59cd5a5b1c94ad7636.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
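Formatting is done once, on the master, as the hadoop user; a minimal sketch:

```bash
# Initializes the NameNode metadata directory; run only once.
hdfs namenode -format
```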
2) Check the newly generated directories
![](http://i2.51cto.com/images/blog/201805/22/7ce142e67ba6b714017f54f275b0be91.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
3) Start the Hadoop cluster
The scripts that start and stop Hadoop live under $HADOOP_HOME/sbin and begin with start-* or stop-*. To start only the HDFS distributed file system, use start-dfs.sh; you can also start the whole Hadoop cluster with the command shown below.
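A sketch of the start commands, run on the master as the hadoop user:

```bash
# Start only HDFS (NameNode, SecondaryNameNode, and the DataNodes listed in slaves):
start-dfs.sh
# Or bring up the whole cluster (HDFS plus YARN) in one go:
start-all.sh
# On each node, jps shows which daemons are running.
jps
```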
4) Verify access
View the NameNode (the master) from a browser at http://192.168.0.10:50070 to see overview statistics, HDFS storage information, and so on.
Before verifying, stop the firewall on the master and on all of the slaves.
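On CentOS 7 (assumed here), stopping the firewall on every node and then checking the web UI might look like this; older systems would use the iptables service instead.

```bash
# Run on the master and on every slave (firewalld assumed).
systemctl stop firewalld
systemctl disable firewalld
# Then browse to the NameNode web UI from any machine that can reach the master:
#   http://192.168.0.10:50070
```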
5) Basic Hadoop commands
The "hadoop fs" command mirrors ordinary file-management commands and supports many operations: viewing files, changing permissions, gathering statistics, getting help, creating and deleting, and uploading and downloading files. For more usage, run "hadoop fs -help" or "hadoop fs -usage".
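A few representative invocations as a sketch; the /input path and the local file names are just examples.

```bash
hadoop fs -mkdir /input              # create a directory in HDFS
hadoop fs -put /etc/hosts /input/    # upload a local file
hadoop fs -ls /input                 # list the directory
hadoop fs -cat /input/hosts          # view the file's contents
hadoop fs -get /input/hosts /tmp/    # download it back to the local file system
hadoop fs -rm -r /input              # remove the directory recursively
hadoop fs -help                      # full command reference
```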
5. Add nodes to the HDFS cluster
If the steps above are clear, adding a node is easy. If you still have questions, please leave a comment.
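For completeness, a hedged outline of adding a new DataNode, say slave4 at an assumed 192.168.0.14: prepare it exactly like the other slaves (hosts entry, JDK, hadoop user, SSH key, Hadoop install and configuration), then:

```bash
# On the master: record the new node so cluster-wide scripts know about it.
echo slave4 >> /usr/local/hadoop/etc/hadoop/slaves
# On the new node: start the DataNode daemon; it registers with the NameNode automatically.
hadoop-daemon.sh start datanode
# Back on the master: the new DataNode should appear in the report; optionally rebalance blocks.
hdfs dfsadmin -report
start-balancer.sh
```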


Reposted from blog.51cto.com/13563504/2119299