Hadoop Fully Distributed Cluster Setup Guide

I. Versions

CentOS 6.9, 64-bit

java version "1.8.0_45" 

zookeeper-3.4.6.tar.gz

hadoop-2.6.0-cdh5.7.0

II. Environment Preparation

If the cluster is hosted on Alibaba Cloud, this step can be skipped, because Alibaba Cloud private IPs are fixed.

If the cluster runs on virtual machines, you need to set a static IP address on each VM (all 3 nodes).

1. Set the IP address (each node gets its own address; .130/.131/.132 are used below, matching the hosts file in step 5)

[root@hadoop001 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0

HWADDR=00:0C:29:60:E8:D2

TYPE=Ethernet

UUID=055d1cdb-65d4-406e-b797-f00342d412f7

ONBOOT=yes

NM_CONTROLLED=no

BOOTPROTO="static"

IPADDR=192.168.137.130

NETMASK=255.255.255.0

GATEWAY=192.168.137.2

DNS1=8.8.8.8

 

Run: service network restart

Verify: ifconfig

 

2. Disable the firewall (all 3 nodes)

Run: service iptables stop    Verify: service iptables status

3. Disable the firewall's autostart (all 3 nodes)

Run: chkconfig iptables off    Verify: chkconfig --list | grep iptables
4. Set the hostname (all 3 nodes)

Run:
(1) hostname hadoop001
(2) vi /etc/sysconfig/network, setting:
NETWORKING=yes
HOSTNAME=hadoop001
(use hadoop002 and hadoop003 on the other two nodes)
 

 

5. Bind IP to hostname (all 3 nodes)

[root@hadoop001 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
 
192.168.137.130 hadoop001 

192.168.137.131 hadoop002 

192.168.137.132 hadoop003

 
 
Verify: ping hadoop001
 

 

6. Set up passwordless SSH among the 3 hosts

1. Run on all 3 machines:
[root@hadoop001 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
74:78:50:05:7e:c8:bb:2a:f1:45:c4:0a:9c:38:90:dc root@hadoop001
The key's randomart image is:
+--[ RSA 2048]----+
| ..+ o ..ooo.    |
|  o E +  =o.     |
|     . .oo* .    |
|       ..o.o     |
|        S..      |
|      .   ..     |
|       o ..      |
|      . ..       |
|       ..        |
+-----------------+
[root@hadoop001 ~]# cat /root/.ssh/id_rsa.pub>> /root/.ssh/authorized_keys

2. Copy the id_rsa.pub files from hadoop002 and hadoop003 to hadoop001:
[root@hadoop002 ~]# cd .ssh
[root@hadoop002 .ssh]# ll
total 12
-rw-r--r--. 1 root root  396 Sep  2 21:37 authorized_keys
-rw-------. 1 root root 1675 Sep  2 21:37 id_rsa
-rw-r--r--. 1 root root  396 Sep  2 21:37 id_rsa.pub
[root@hadoop002 .ssh]# scp id_rsa.pub 192.168.137.130:/root/.ssh/id_rsa.pub2
[email protected]'s password: 
id_rsa.pub                                                                                                      100%  396     0.4KB/s   00:00    
[root@hadoop002 .ssh]# 

[root@hadoop003 ~]# cd .ssh
[root@hadoop003 .ssh]# ll
total 12
-rw-r--r--. 1 root root  396 Sep  2 21:37 authorized_keys
-rw-------. 1 root root 1675 Sep  2 21:37 id_rsa
-rw-r--r--. 1 root root  396 Sep  2 21:37 id_rsa.pub
[root@hadoop003 .ssh]# scp id_rsa.pub 192.168.137.130:/root/.ssh/id_rsa.pub3
[email protected]'s password: 
id_rsa.pub                                                                                                      100%  396     0.4KB/s   00:00    
[root@hadoop003 .ssh]# 

3. On hadoop001, merge id_rsa.pub2 and id_rsa.pub3 into authorized_keys:
[root@hadoop001 ~]# cd .ssh
[root@hadoop001 .ssh]# ll
total 20
-rw-r--r--. 1 root root  396 Sep  2 21:37 authorized_keys
-rw-------. 1 root root 1675 Sep  2 21:37 id_rsa
-rw-r--r--. 1 root root  396 Sep  2 21:37 id_rsa.pub
-rw-r--r--. 1 root root  396 Sep  2 21:42 id_rsa.pub2
-rw-r--r--. 1 root root  396 Sep  2 21:42 id_rsa.pub3
[root@hadoop001 .ssh]# cat id_rsa.pub2 >> authorized_keys
[root@hadoop001 .ssh]# cat id_rsa.pub3 >> authorized_keys
[root@hadoop001 .ssh]# cat  authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA2dWIp5gGKuTkH7v0hj6IdldGkK0liMEzwXNnBD1iV9e0T12D2W9B4GnkMkCR3EZCKwfK593KPAr2cC3YADyMPaJn9x83pqOStvOBVUEEUYr9N/RUvkDq+JhmlGiTutSsqYNlu9LpCwNMWc+doANzwoM8xpyVVpl1l4LJdc0ShA8UCl2rJYMJgSal49weD58iSNMHB4tEEbAWzojbdkjfsFgtZTRsbckdV0gzDdW/9FoWYWlhqA4aw/SkxglssJ8B8XLSPZX45IdwhD65sTJUCQWkZYSiEq2MQOVLdB517KY4m0bHPid7NhM20g7oYL3H6271EQJ9tat7sFnpbuYdew== root@hadoop001
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAx+tMmk4tEQon/VZZMkfpmPkHGZ7IJg3wyLMpddAGcluWiT0ldzCBZIBY/qkPzwg9TukIuFQ4uqV9R14xLQjdkte2QKRTpp1NLfmVBkCb6Q/ucOlayrU1mXXXiHqbRhPNLK/7++fL+5iMbqzjyM35OuOAVwX+G8rQ7ALx6AgVOnM1bscI5xM4bpKX/uzDQ6Mo9YAalvrC0PF/jlUvyE9lEDIwGwLtxR+UDkhWSw6ucbAt8LxHXhVabg4mpPBA5M1vKujxDJBXK58QcLlUxy+b3gVTI7Ojrurw7KjHLynC439B8NXY9dcWyztIu3tPtopPg8/N3w/5VrifsQIvnpDEcw== root@hadoop002
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAqIZyHmKtxOarZyIcuYU0phVQUAHRvsB4jFffuW3X5G7+7RLApv3KsTNe0niTp6TH6B9/lENVKaZT9ut65mo5gQYIeoZqAlE0yA6NpymUkybfyS3bFS7kx2oO0pszQuOAQwFZZaGV1pdEAPWNFAwtUgsngo9x5wcVPdpSgpnVo/gU6smdbaAK2RWQOpZ8qoBmW5eMxEYuihRVetYlJ+erWxboAVW0O2tvdFBChejY7mt0BRIksahNqUhvQvoYRZbMOKiuBRpgxohI/Fz/FOKNYcRwzEHpZKrijttf62rxRt+YfuVETsZrXvWINPTzp9Dbw8qtt/kBvBFgSZYeWP8IDQ== root@hadoop003


4. Distribute authorized_keys to hadoop002 and hadoop003:
[root@hadoop001 .ssh]# scp authorized_keys 192.168.137.131:/root/.ssh/
The authenticity of host '192.168.137.131 (192.168.137.131)' can't be established.
RSA key fingerprint is 76:c7:31:b6:20:56:4b:3e:29:c1:99:9f:fb:c0:9e:b8.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.137.131' (RSA) to the list of known hosts.
[email protected]'s password: 
authorized_keys                                                                                                 100% 1188     1.2KB/s   00:00    
[root@hadoop001 .ssh]# scp authorized_keys 192.168.137.132:/root/.ssh/
The authenticity of host '192.168.137.132 (192.168.137.132)' can't be established.
RSA key fingerprint is 09:f6:4a:f1:a0:bd:79:fd:34:e7:75:94:0b:3c:83:5a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.137.132' (RSA) to the list of known hosts.
[email protected]'s password: 
authorized_keys         

5. Verify (run the following 3 commands on each machine; if you only have to type yes at the host-key prompt and are never asked for a password, passwordless SSH works between all 3 hosts):
ssh root@hadoop001 date
ssh root@hadoop002 date
ssh root@hadoop003 date
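
As a side note, on systems that ship the ssh-copy-id helper, the manual scp-and-append of steps 2-3 can be automated. A sketch (hypothetical session; run from hadoop002 and hadoop003, then distribute authorized_keys from hadoop001 as in step 4):

[root@hadoop002 .ssh]# ssh-copy-id root@hadoop001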

7. Install the JDK and set environment variables (all 3 nodes)

(1) Run the commands:

[root@hadoop001 ~]# mkdir -p /usr/java && cd /usr/java

[root@hadoop001 java]# cp /tmp/jdk-8u45-linux-x64.gz  ./  

[root@hadoop001 java]# tar -xzvf jdk-8u45-linux-x64.gz  

(2) Edit /etc/profile and append:

export JAVA_HOME=/usr/java/jdk1.8.0_45 
export PATH=$PATH:$JAVA_HOME/bin

(3) Run source /etc/profile

(4) Verify: java -version
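
If the PATH is set correctly, the first line of the output should match the JDK version listed in section I:

[root@hadoop001 ~]# java -version
java version "1.8.0_45"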

8. Install ZooKeeper

[root@hadoop001 ~]# cd /opt/software/ 

[root@hadoop001 software]# tar -xvf zookeeper-3.4.6.tar.gz 

[root@hadoop001 software]# mv zookeeper-3.4.6 zookeeper 

[root@hadoop001 software]# chown -R root:root zookeeper 

Edit the configuration:

[root@hadoop001 software]# cd zookeeper/conf  

[root@hadoop001 conf]# cp zoo_sample.cfg zoo.cfg 

[root@hadoop001 conf]# vi zoo.cfg 

In zoo.cfg, change dataDir to:

dataDir=/opt/software/zookeeper/data 

Append the server list to the end of the file (port 2888 is used for follower-to-leader communication, 3888 for leader election):

server.1=hadoop001:2888:3888

server.2=hadoop002:2888:3888

server.3=hadoop003:2888:3888 

[root@hadoop001 conf]# cd ../

[root@hadoop001 zookeeper]#  mkdir data

[root@hadoop001 zookeeper]# touch data/myid

[root@hadoop001 zookeeper]# echo 1 > data/myid

## Copy the installation to hadoop002/003; the configuration is identical except for the myid value:

[root@hadoop001 software]# scp -r  zookeeper hadoop002:/opt/software/

[root@hadoop001 software]# scp -r  zookeeper hadoop003:/opt/software/ 
 
[root@hadoop002 zookeeper]# echo 2 > data/myid

[root@hadoop003 zookeeper]# echo 3 > data/myid 


### Caution: do not type echo 3>data/myid. Keep the spaces around >; without them the shell parses 3> as a file-descriptor redirect, and nothing is written to the myid file.
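
To see the pitfall in action (hypothetical session on hadoop003):

[root@hadoop003 zookeeper]# echo 3>data/myid    # "3>" redirects fd 3; echo gets no argument, file is truncated empty
[root@hadoop003 zookeeper]# cat data/myid
[root@hadoop003 zookeeper]# echo 3 > data/myid  # with the spaces, "3" is written to the file
[root@hadoop003 zookeeper]# cat data/myid
3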

9. Install Hadoop

Download and extract the Hadoop tarball, then edit the following files (minimal sketches of core-site.xml and slaves follow this list):

$HADOOP_HOME/etc/hadoop/hadoop-env.sh
$HADOOP_HOME/etc/hadoop/core-site.xml
$HADOOP_HOME/etc/hadoop/hdfs-site.xml
$HADOOP_HOME/etc/hadoop/yarn-env.sh
$HADOOP_HOME/etc/hadoop/mapred-site.xml
$HADOOP_HOME/etc/hadoop/yarn-site.xml
$HADOOP_HOME/etc/hadoop/slaves
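
The original post does not reproduce the file contents. As a minimal sketch only, assuming an HA setup with automatic failover (the nameservice ID "mycluster" and all values below are illustrative assumptions following the stock Hadoop 2.x HA property names, not taken from the original), core-site.xml could look like:

<configuration>
    <!-- logical HA nameservice name; "mycluster" is an assumed ID -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://mycluster</value>
    </property>
    <!-- matches the temp directory created in step 10 -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/software/hadoop/tmp</value>
    </property>
    <!-- ZooKeeper quorum used by the ZKFC for automatic failover -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>hadoop001:2181,hadoop002:2181,hadoop003:2181</value>
    </property>
</configuration>

hdfs-site.xml would then carry the matching HA properties (dfs.nameservices, dfs.ha.namenodes.mycluster, the per-NameNode RPC/HTTP addresses, dfs.namenode.shared.edits.dir=qjournal://hadoop001:8485;hadoop002:8485;hadoop003:8485/mycluster, and dfs.ha.automatic-failover.enabled=true), and slaves simply lists one DataNode hostname per line:

hadoop001
hadoop002
hadoop003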

10. Create the temp directory and distribute the Hadoop directory

[root@hadoop001 hadoop]# mkdir -p /opt/software/hadoop/tmp

[root@hadoop001 hadoop]# chmod -R 777 /opt/software/hadoop/tmp

[root@hadoop001 hadoop]# chown -R root:root /opt/software/hadoop/tmp 
 
[root@hadoop001 hadoop]# scp -r hadoop root@hadoop002:/opt/software

[root@hadoop001 hadoop]# scp -r hadoop root@hadoop003:/opt/software 
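
Although the original does not show it, it is convenient at this point to add HADOOP_HOME to /etc/profile on all 3 nodes, mirroring the JDK step (the path below is inferred from the scp commands above, so treat it as an assumption):

export HADOOP_HOME=/opt/software/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

Then run source /etc/profile on each node.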

11. Start the cluster

Start ZooKeeper first, then Hadoop (HDFS + YARN), in the following order (see the command sketch after this list):

a. Before formatting, start the JournalNode process on every JournalNode machine
b. Format the NameNode
c. Synchronize the NameNode metadata to the standby NameNode
d. Initialize the ZKFC state in ZooKeeper
e. Start the HDFS distributed storage system, then YARN
f. Verify the namenode, datanode and zkfc processes with jps
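
A hedged command sketch for this sequence, using the standard ZooKeeper 3.4 and Hadoop 2.x scripts (it assumes hadoop001/hadoop002 host the two NameNodes and all 3 nodes run JournalNodes; both assignments are assumptions, not stated in the original):

# Start ZooKeeper on all 3 nodes, then check leader/follower status
[root@hadoop001 ~]# /opt/software/zookeeper/bin/zkServer.sh start
[root@hadoop001 ~]# /opt/software/zookeeper/bin/zkServer.sh status

# a. Start JournalNode on all 3 nodes
[root@hadoop001 ~]# hadoop-daemon.sh start journalnode

# b. Format the active NameNode (hadoop001 only), then start it
[root@hadoop001 ~]# hdfs namenode -format
[root@hadoop001 ~]# hadoop-daemon.sh start namenode

# c. Sync the metadata to the standby NameNode (hadoop002 only)
[root@hadoop002 ~]# hdfs namenode -bootstrapStandby

# d. Initialize the ZKFC znode in ZooKeeper (hadoop001 only)
[root@hadoop001 ~]# hdfs zkfc -formatZK

# e. Start HDFS, then YARN
[root@hadoop001 ~]# start-dfs.sh
[root@hadoop001 ~]# start-yarn.sh

# f. On every node, check the expected processes
[root@hadoop001 ~]# jps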

Startup screens (screenshots omitted).

Source: blog.csdn.net/qq_34341930/article/details/88975371