Kafka Installation and Environment Setup

Step 1: Download Kafka

Download page: http://kafka.apache.org/downloads.html

The version downloaded here is kafka_2.11-0.11.0.0.tgz.

Step 2: Extract the Kafka archive

Put the downloaded Kafka archive into a directory of your choice on the Linux machine. Here my path is /opt/software.

Then change into /opt/software and extract the archive into /opt/module (the directory where I keep the extracted Kafka files):

tar -zxvf kafka_2.11-0.11.0.0.tgz -C /opt/module/
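The flags used above are: -z gunzip, -x extract, -v verbose, -f archive file, and -C target directory. They can be tried out on a throwaway archive first; this sketch uses only temporary stand-in paths, not the real Kafka tarball:

```shell
# Sketch: demonstrate the tar -zxvf ... -C <dir> pattern on a throwaway archive.
# All paths are temporary stand-ins created by mktemp.
src=$(mktemp -d)   # pretend "download" directory
dst=$(mktemp -d)   # pretend /opt/module target
echo hello > "$src/demo.txt"
tar -czf "$src/demo.tgz" -C "$src" demo.txt   # build a tiny .tgz to extract

tar -zxvf "$src/demo.tgz" -C "$dst"           # same flags as the Kafka command

result=$(cat "$dst/demo.txt")
rm -rf "$src" "$dst"
```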

Step 3: Configure the relevant files

1. Go to /opt/module/kafka_2.11-0.11.0.0/config/ and edit the server.properties file.

The parameters that need to be set are as follows:

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1

# Switch to enable topic deletion or not, default value is false
delete.topic.enable=true

# A comma separated list of directories under which to store log files
log.dirs=/opt/module/kafka_2.11-0.11.0.0/logs

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=hadoopStudy1:2181,hadoopStudy2:2181,hadoopStudy3:2181

Note: hadoopStudy1, hadoopStudy2, and hadoopStudy3 are the hostnames of the three machines in the cluster.
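After editing, a quick grep can confirm that the four keys took the intended values. A minimal self-contained sketch (it writes a miniature stand-in file so it runs anywhere; on a real broker, point CONF at /opt/module/kafka_2.11-0.11.0.0/config/server.properties instead):

```shell
# Sketch: verify the edited keys in server.properties with grep.
# A stand-in file is created here so the example is self-contained.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
broker.id=1
delete.topic.enable=true
log.dirs=/opt/module/kafka_2.11-0.11.0.0/logs
zookeeper.connect=hadoopStudy1:2181,hadoopStudy2:2181,hadoopStudy3:2181
EOF

# Print only the keys we care about; each should appear exactly once.
result=$(grep -E '^(broker\.id|delete\.topic\.enable|log\.dirs|zookeeper\.connect)=' "$CONF")
echo "$result"
rm -f "$CONF"
```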

2. Configure the KAFKA environment variables (optional)

#KAFKA_HOME
export KAFKA_HOME=/opt/module/kafka_2.11-0.11.0.0
export PATH=$PATH:$KAFKA_HOME/bin
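These exports only persist across logins if they are appended to a profile file (e.g. /etc/profile or ~/.bashrc) and the file is sourced. As a quick sanity check after exporting, you can confirm the Kafka bin directory is on PATH:

```shell
# Sketch: export the variables in the current shell and confirm PATH picks them up.
export KAFKA_HOME=/opt/module/kafka_2.11-0.11.0.0
export PATH=$PATH:$KAFKA_HOME/bin

# Print each PATH entry on its own line; the Kafka bin directory should be listed.
echo "$PATH" | tr ':' '\n' | grep kafka
```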

Step 4: Distribute the installation and modify broker.id

Run the following two commands on the first machine:

scp -r /opt/module/kafka_2.11-0.11.0.0/ hadoopStudy2:/opt/module/

scp -r /opt/module/kafka_2.11-0.11.0.0/ hadoopStudy3:/opt/module/


On the second machine, change the value of broker.id to 2:

cd /opt/module/kafka_2.11-0.11.0.0/config/


vim server.properties, then change the value of broker.id; here I set broker.id=2

On the third machine, change the value of broker.id to 3:

cd /opt/module/kafka_2.11-0.11.0.0/config/


vim server.properties, then change the value of broker.id; here I set broker.id=3
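Hand-editing the same file on every machine is easy to get wrong; the broker.id change can also be made non-interactively with sed. A self-contained sketch using a temporary stand-in file (on a real machine the target would be /opt/module/kafka_2.11-0.11.0.0/config/server.properties):

```shell
# Sketch: change broker.id with sed instead of vim.
# A temporary stand-in file is used so the example runs anywhere.
CONF=$(mktemp)
echo 'broker.id=1' > "$CONF"                 # as copied over from the first machine

# On the third machine the replacement value would be 3 (GNU sed -i edits in place).
sed -i 's/^broker\.id=.*/broker.id=3/' "$CONF"

result=$(cat "$CONF")
echo "$result"                               # broker.id=3
rm -f "$CONF"
```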

Step 5: Start Kafka

1. ZooKeeper must be started before Kafka

Start ZooKeeper on each of the three machines:

/opt/module/zookeeper-3.4.9/bin/zkServer.sh start

Note: if ZooKeeper is not installed yet, install it first. You can refer to my blog post: https://blog.csdn.net/weixin_42070473/article/details/107203

2. Start Kafka

Start Kafka on each of the three machines:

/opt/module/kafka_2.11-0.11.0.0/bin/kafka-server-start.sh -daemon /opt/module/kafka_2.11-0.11.0.0/config/server.properties


Reposted from blog.csdn.net/weixin_42070473/article/details/107285834