Kafka environment setup and practice (1): installing Kafka

Kafka environment setup and practice (1): installing Kafka    http://zilongzilong.iteye.com/blog/2267913
Kafka environment setup and practice (2): Kafka API practice  http://zilongzilong.iteye.com/blog/2267924

1. Environment

      The IPs of the three machines are:

                      192.168.88.20(hostname=kafka0)

                      192.168.88.21(hostname=kafka1)

                      192.168.88.22(hostname=kafka2) 

2. Installing the ZooKeeper 3.4.6 cluster

1) Download and extract ZooKeeper 3.4.6

        Download zookeeper-3.4.6.tar.gz into /opt

        Extract it: tar -zxvf zookeeper-3.4.6.tar.gz

        Note: install it on all three machines

2) Configure /etc/hosts

 

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1               localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
192.168.88.22 kafka2
192.168.88.21 kafka1
192.168.88.20 kafka0

    Note: configure this on all three machines
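A mismatched hosts entry on just one of the three machines is a common source of cluster problems, so a quick scripted check can help. The sketch below is not from the original article; the check_host_entry helper is a hypothetical convenience that greps a hosts-format file for an IP/hostname pair:

```shell
#!/bin/sh
# Hypothetical helper: succeed (exit 0) if the given hosts-format file
# maps the given IP to the given hostname on one line.
check_host_entry() {
    hosts_file=$1; ip=$2; name=$3
    grep -Eq "^[[:space:]]*${ip}[[:space:]]+(.*[[:space:]])?${name}([[:space:]]|\$)" "$hosts_file"
}
```

On each machine you could then run, for example, check_host_entry /etc/hosts 192.168.88.20 kafka0 && echo "kafka0 ok", repeating for kafka1 and kafka2.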

 

3) Create the ZooKeeper data directory

 

sudo rm -rf /home/hadoop/zookeeper
cd /home/hadoop
mkdir zookeeper

 

    Note: create this on all three machines. In my environment I created a dedicated hadoop user just for running the big-data services, so the zookeeper data directory is created under that user's home directory as well.

4) Configure zoo.cfg

       In the /opt/zookeeper-3.4.6/conf directory, rename zoo_sample.cfg to zoo.cfg, with the following contents:

 

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/hadoop/zookeeper
clientPort=2181
server.1=kafka0:2888:3888
server.2=kafka1:2888:3888
server.3=kafka2:2888:3888
#keep only the 3 most recent snapshots; by default all are kept, which over time uses a lot of disk space
autopurge.snapRetainCount=3
#purge interval in hours; clean up snapshot data once per hour
autopurge.purgeInterval=1
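One step the server.N lines above imply but the article does not show: each machine needs a myid file in dataDir whose content matches its server id (1 on kafka0, 2 on kafka1, 3 on kafka2), or the node cannot join the ensemble. A minimal sketch, assuming the hostnames and dataDir from this article; the myid_for_host helper is mine, not part of ZooKeeper:

```shell
#!/bin/sh
# Map this article's hostnames to the ZooKeeper server ids in zoo.cfg
# (server.1=kafka0, server.2=kafka1, server.3=kafka2).
myid_for_host() {
    case "$1" in
        kafka0) echo 1 ;;
        kafka1) echo 2 ;;
        kafka2) echo 3 ;;
        *) echo "unexpected host: $1" >&2; return 1 ;;
    esac
}

# On each of the three machines (dataDir from zoo.cfg above):
# myid_for_host "$(hostname)" > /home/hadoop/zookeeper/myid
```

With the myid files in place, each node can be started with /opt/zookeeper-3.4.6/bin/zkServer.sh start and checked with zkServer.sh status.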
 

Reposted from aperise.iteye.com/blog/2267913