Fully Distributed Installation and Deployment of a Hadoop and ZooKeeper Cluster

Cluster plan

hadoop01: 192.168.76.111

hadoop02: 192.168.76.112

hadoop03: 192.168.76.113

Package versions: JDK 1.8 (jdk1.8.0_144), Hadoop 2.7.1, ZooKeeper 3.4.8

Prepare the three virtual machines (perform these steps on all three nodes)

Set the hostname

hostnamectl set-hostname <hostname>

bash
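As a concrete example, one command per machine, using the names that the /etc/hosts mapping below relies on; the final bash simply opens a new shell so the prompt reflects the new name:

# on 192.168.76.111
hostnamectl set-hostname hadoop01
# on 192.168.76.112
hostnamectl set-hostname hadoop02
# on 192.168.76.113
hostnamectl set-hostname hadoop03
bash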

Stop the firewall and disable it from starting at boot

systemctl stop firewalld

systemctl disable firewalld

Configure hostname mapping

vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.76.111 hadoop01
192.168.76.112 hadoop02
192.168.76.113 hadoop03
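An optional sanity check after saving /etc/hosts on every node: make sure each hostname resolves and responds, for example:

ping -c 1 hadoop01
ping -c 1 hadoop02
ping -c 1 hadoop03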

Configure passwordless SSH login

ssh-keygen

ssh-copy-id <hostname>

Do not forget passwordless login to the local machine itself (see the sketch below).
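A minimal sketch of the whole passwordless-login setup; run it on each of the three nodes and accept the defaults at every ssh-keygen prompt:

ssh-keygen -t rsa
ssh-copy-id hadoop01    # the local machine itself is included on purpose
ssh-copy-id hadoop02
ssh-copy-id hadoop03
ssh hadoop02 hostname   # should print hadoop02 without asking for a password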

Upload the installation packages (on hadoop01 only)

Create the installation directory

mkdir -p /bigdata/install

The following steps are performed on hadoop01 only.

Install the JDK

Extract the Java archive into the installation directory and rename it

tar -zxvf jdk-1.8-linux-x64.tar.gz -C /bigdata/install

cd /bigdata/install
mv jdk1.8.0_144/ jdk
Configure the environment variables

vim /etc/profile and append at the end:

export JAVA_HOME=/bigdata/install/jdk
export PATH=$PATH:$JAVA_HOME/bin

Reload the environment variables:         source /etc/profile

Check the Java version:         java -version

Distribute the JDK directory and /etc/profile to the other two virtual machines

scp -r jdk/ hadoop02:/bigdata/install/

scp -r jdk/ hadoop03:/bigdata/install/

scp /etc/profile hadoop02:/etc

scp /etc/profile hadoop03:/etc
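A quick check from hadoop01 that the JDK and profile actually reached the other nodes (the new variables only take effect in a login shell, hence the explicit source):

ssh hadoop02 'source /etc/profile; java -version'
ssh hadoop03 'source /etc/profile; java -version'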

Java installation is complete.

Install Hadoop

Extract the Hadoop archive into the installation directory and rename it

tar -zxvf hadoop-2.7.1.tar.gz -C /bigdata/install
cd /bigdata/install
mv hadoop-2.7.1/ hadoop

Configure the environment variables

vim /etc/profile and append at the end:

export HADOOP_HOME=/bigdata/install/hadoop

export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

Check the Hadoop version (run source /etc/profile first so the new PATH takes effect):         hadoop version

Enter the Hadoop configuration directory:        cd /bigdata/install/hadoop/etc/hadoop/

Edit the configuration files

vim core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop01:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/bigdata/install/hadoop/hdfs</value>
  </property>
</configuration>

vim hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop02:9001</value>
  </property>
</configuration>

cp mapred-site.xml.template mapred-site.xml
vim mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

vim yarn-site.xml

<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop01</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>

vim slaves

hadoop01
hadoop02
hadoop03

vim hadoop-env.sh

# The java implementation to use.
export JAVA_HOME=/bigdata/install/jdk

Distribute the Hadoop directory and /etc/profile to the other two virtual machines

cd /bigdata/install

scp -r hadoop/ hadoop02:/bigdata/install/

scp -r hadoop/ hadoop03:/bigdata/install/

scp /etc/profile hadoop02:/etc

scp /etc/profile hadoop03:/etc

Reload the environment variables

source /etc/profile    (on all three nodes)
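To confirm the Hadoop environment is usable on every node, a small check run from hadoop01 over the passwordless SSH set up earlier:

for h in hadoop01 hadoop02 hadoop03; do
  ssh $h 'source /etc/profile; hadoop version | head -1'
done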

Format the NameNode (on hadoop01 only)

hadoop namenode -format    (hdfs namenode -format is the newer, non-deprecated form)

If the output contains "successfully formatted", the format succeeded.

Start the cluster

start-all.sh
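start-all.sh works here but is marked deprecated in Hadoop 2.x; the equivalent is to start HDFS and YARN separately, which also makes it easier to see which service fails:

start-dfs.sh
start-yarn.sh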

Check the daemon processes with jps (example below)
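With the configuration above (all three hosts listed in slaves, the SecondaryNameNode on hadoop02, the ResourceManager on hadoop01), jps should roughly show the following daemons on each node; process IDs will differ:

jps
# hadoop01: NameNode, DataNode, ResourceManager, NodeManager
# hadoop02: SecondaryNameNode, DataNode, NodeManager
# hadoop03: DataNode, NodeManager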

Open the NameNode web UI in a browser:        http://192.168.76.111:50070

Hadoop installation is complete.

Install ZooKeeper

Extract the ZooKeeper archive into the installation directory and rename it

tar -zxvf zookeeper-3.4.8.tar.gz -C /bigdata/install/

cd /bigdata/install
mv zookeeper-3.4.8/ zookeeper

Enter the configuration directory and edit the configuration file

cd zookeeper/conf/

cp zoo_sample.cfg zoo.cfg
vim zoo.cfg

# set the directory where snapshots are stored (do not use /tmp)
dataDir=/bigdata/install/zookeeper/zkdata

# append at the end of the file: one line per server
# (2888 is the quorum port, 3888 is the leader-election port)
server.1=192.168.76.111:2888:3888
server.2=192.168.76.112:2888:3888
server.3=192.168.76.113:2888:3888

Distribute ZooKeeper to the other two virtual machines

cd /bigdata/install
scp -r zookeeper/ hadoop02:/bigdata/install/
scp -r zookeeper/ hadoop03:/bigdata/install/

Create the data directory (it must exist on all three nodes)

mkdir /bigdata/install/zookeeper/zkdata

Set the myid file on each node; the id must match that node's server.N entry in zoo.cfg (a sketch that does all three from hadoop01 follows the list below)

On hadoop01:    echo 1 > /bigdata/install/zookeeper/zkdata/myid

On hadoop02:    echo 2 > /bigdata/install/zookeeper/zkdata/myid

On hadoop03:    echo 3 > /bigdata/install/zookeeper/zkdata/myid
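Because ZooKeeper was copied to hadoop02 and hadoop03 before the data directory existed, zkdata and myid must be created on every node. A sketch that does all three from hadoop01 using the passwordless SSH configured earlier:

id=1
for h in hadoop01 hadoop02 hadoop03; do
  ssh $h "mkdir -p /bigdata/install/zookeeper/zkdata && echo $id > /bigdata/install/zookeeper/zkdata/myid"
  id=$((id+1))
done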

Start ZooKeeper and check each node's status
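ZooKeeper has no cluster-wide start script, so start the server on each of the three nodes and then check its role; in a three-node ensemble one server should report leader and the other two follower:

/bigdata/install/zookeeper/bin/zkServer.sh start
/bigdata/install/zookeeper/bin/zkServer.sh status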

 

 

 

ZooKeeper installation is complete.


Reposted from blog.csdn.net/qq_53086187/article/details/121067057