Section 1 Kafka message queue: 3 & 4, installing Kafka and managing it from the command line

6. Installing Kafka

6.1 Install ZooKeeper on the three machines

Note: before installing ZooKeeper, make sure the clocks on all three machines are synchronized, e.g. with the cron entry below (a sketch for installing it follows the entry).

*/1 * * * * /usr/sbin/ntpdate us.pool.ntp.org;
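A minimal sketch of installing that entry on each node, assuming root's crontab and that ntpdate is installed at /usr/sbin/ntpdate:

# append the sync job to the current crontab on node01/node02/node03
(crontab -l 2>/dev/null; echo "*/1 * * * * /usr/sbin/ntpdate us.pool.ntp.org;") | crontab -
# spot-check that the clocks agree
date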

Modify the ZooKeeper configuration file conf/zoo.cfg on all three machines (a sketch for creating the directories it references follows the config):

tickTime=2000

initLimit=10

syncLimit=5

dataDir=/export/servers/zookeeper-3.4.9/zkData/data

dataLogDir=/export/servers/zookeeper-3.4.9/zkData/log

clientPort=2181

autopurge.purgeInterval=1

autopurge.snapRetainCount=3

server.1=node01:2888:3888

server.2=node02:2888:3888

server.3=node03:2888:3888
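The dataDir and dataLogDir above must exist before ZooKeeper starts. A minimal sketch, assuming the paths from the config and an install directory of /export/servers/zookeeper-3.4.9:

# run on each of node01/node02/node03
cd /export/servers/zookeeper-3.4.9
mkdir -p zkData/data zkData/log
# the distribution ships conf/zoo_sample.cfg; copy it to conf/zoo.cfg and apply the settings above
cp conf/zoo_sample.cfg conf/zoo.cfg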

On each of the three machines, create a file named myid under /export/servers/zookeeper-3.4.9/zkData/data and set its content to match that node's server.N id from zoo.cfg (a one-line sketch per node follows this list):

On node01 the myid content is 1

On node02 the myid content is 2

On node03 the myid content is 3
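A minimal sketch of writing those files, assuming the data directory from the config:

# on node01
echo 1 > /export/servers/zookeeper-3.4.9/zkData/data/myid
# on node02
echo 2 > /export/servers/zookeeper-3.4.9/zkData/data/myid
# on node03
echo 3 > /export/servers/zookeeper-3.4.9/zkData/data/myid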

Start ZooKeeper on all three machines (a status-check sketch follows the command):

bin/zkServer.sh start
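To confirm the quorum formed, check each node; one should report leader and the other two follower. This uses the standard zkServer.sh status command and jps:

# run on each node from /export/servers/zookeeper-3.4.9
bin/zkServer.sh status
# the ZooKeeper process shows up under jps as QuorumPeerMain
jps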

6.2 Install the Kafka cluster on the three machines

6.2.1 Download the Kafka installation archive

http://archive.apache.org/dist/kafka/

6.2.2 Upload and extract the archive

This guide uses the kafka_2.11-1.0.0.tgz release throughout (a download-and-extract sketch follows).
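A minimal sketch, assuming the archive is fetched from the 1.0.0 directory under the archive URL above and unpacked to /export/servers, matching the paths used in server.properties below:

# download the Scala 2.11 / Kafka 1.0.0 build
wget http://archive.apache.org/dist/kafka/1.0.0/kafka_2.11-1.0.0.tgz
# extract to the install directory
tar -zxvf kafka_2.11-1.0.0.tgz -C /export/servers/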

6.2.3 Modify the Kafka configuration file

On the first machine (node01), edit config/server.properties; a sketch for distributing the installation to the other nodes follows this config:

broker.id=0

num.network.threads=3

num.io.threads=8

socket.send.buffer.bytes=102400

socket.receive.buffer.bytes=102400

socket.request.max.bytes=104857600

log.dirs=/export/servers/kafka_2.11-1.0.0/logs

num.partitions=2

num.recovery.threads.per.data.dir=1

offsets.topic.replication.factor=1

transaction.state.log.replication.factor=1

transaction.state.log.min.isr=1

log.flush.interval.messages=10000

log.flush.interval.ms=1000

log.retention.hours=168

log.segment.bytes=1073741824

log.retention.check.interval.ms=300000

zookeeper.connect=node01:2181,node02:2181,node03:2181

zookeeper.connection.timeout.ms=6000

group.initial.rebalance.delay.ms=0

delete.topic.enable=true

host.name=node01
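Rather than repeating the setup by hand, the configured directory can be copied to the other brokers; only broker.id and host.name then need to change per machine, as the next two configs show. A minimal sketch (scp and the target paths are assumptions, not from the original):

# from node01, copy the configured installation to the other nodes
scp -r /export/servers/kafka_2.11-1.0.0 node02:/export/servers/
scp -r /export/servers/kafka_2.11-1.0.0 node03:/export/servers/
# then adjust broker.id and host.name in config/server.properties on node02 and node03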

On the second machine (node02), edit config/server.properties:

broker.id=1

num.network.threads=3

num.io.threads=8

socket.send.buffer.bytes=102400

socket.receive.buffer.bytes=102400

socket.request.max.bytes=104857600

log.dirs=/export/servers/kafka_2.11-1.0.0/logs

num.partitions=2

num.recovery.threads.per.data.dir=1

offsets.topic.replication.factor=1

transaction.state.log.replication.factor=1

transaction.state.log.min.isr=1

log.flush.interval.messages=10000

log.flush.interval.ms=1000

log.retention.hours=168

log.segment.bytes=1073741824

log.retention.check.interval.ms=300000

zookeeper.connect=node01:2181,node02:2181,node03:2181

zookeeper.connection.timeout.ms=6000

group.initial.rebalance.delay.ms=0

delete.topic.enable=true

host.name=node02

On the third machine (node03), edit config/server.properties:

broker.id=2

num.network.threads=3

num.io.threads=8

socket.send.buffer.bytes=102400

socket.receive.buffer.bytes=102400

socket.request.max.bytes=104857600

log.dirs=/export/servers/kafka_2.11-1.0.0/logs

num.partitions=2

num.recovery.threads.per.data.dir=1

offsets.topic.replication.factor=1

transaction.state.log.replication.factor=1

transaction.state.log.min.isr=1

log.flush.interval.messages=10000

log.flush.interval.ms=1000

log.retention.hours=168

log.segment.bytes=1073741824

log.retention.check.interval.ms=300000

zookeeper.connect=node01:2181,node02:2181,node03:2181

zookeeper.connection.timeout.ms=6000

group.initial.rebalance.delay.ms=0

delete.topic.enable=true

host.name=node03

6.2.4 Start the Kafka cluster

Start the Kafka service on all three machines, either in the foreground or in the background; a quick verification sketch follows the start commands.

Foreground start (run from the bin directory):

./kafka-server-start.sh ../config/server.properties

Background start (run from the Kafka home directory):

nohup bin/kafka-server-start.sh config/server.properties > /dev/null 2>&1 &
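A quick check that all three brokers registered in ZooKeeper, using the zookeeper-shell.sh tool shipped with Kafka (the expected id list assumes the broker.id values above):

# expect the output to include [0, 1, 2]
bin/zookeeper-shell.sh node01:2181 ls /brokers/ids
# the broker process also appears under jps as Kafka
jps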

7. Managing Kafka from the command line

Create a topic:

kafka-topics.sh --create --partitions 3 --replication-factor 2 --topic kafkatopic --zookeeper node01:2181,node02:2181,node03:2181

Simulate a producer:

kafka-console-producer.sh --broker-list node01:9092,node02:9092,node03:9092 --topic kafkatopic

Simulate a consumer, as sketched below.
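A minimal consumer sketch, assuming the standard kafka-console-consumer.sh tool and the --bootstrap-server flag available in Kafka 1.0:

kafka-console-consumer.sh --bootstrap-server node01:9092,node02:9092,node03:9092 --topic kafkatopic --from-beginning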


Reprinted from www.cnblogs.com/mediocreWorld/p/11210896.html