Foreword
- Three CentOS machines
- Kafka version: kafka_2.11-2.2.1
- ZooKeeper has been configured in advance; see the ZooKeeper configuration post
On all three machines, modify the configuration file ${KAFKA_HOME}/config/server.properties:
# Globally unique broker ID; any value works, but it must not repeat across machines: master is 0, slave1 is 1, slave2 is 2
broker.id=0
# ZooKeeper connection list
zookeeper.connect=master.wsxiot.cn:2181,slave1.wsxiot.cn:2181,slave2.wsxiot.cn:2181/kafka
# Data log directory; the default is under /tmp, where data is lost after a reboot
log.dirs=/var/lib/kafka
# Listener address; replace {{IP}} with this machine's own IP or hostname
listeners=PLAINTEXT://{{IP}}:9092
# Do not auto-create topics on first use; create them explicitly
auto.create.topics.enable=false
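Since broker.id must differ on every node, editing server.properties by hand three times is error-prone. A minimal sketch that derives the ID from the hostname, assuming the master/slave1/slave2 hostnames used above:

```shell
# Derive broker.id from the hostname (assumed hostnames: master, slave1, slave2)
broker_id_for() {
  case "$1" in
    master*) echo 0 ;;
    slave1*) echo 1 ;;
    slave2*) echo 2 ;;
    *) echo "unknown host: $1" >&2; return 1 ;;
  esac
}

# On a real node you would append the result to server.properties, e.g.:
#   echo "broker.id=$(broker_id_for "$(hostname -s)")" >> config/server.properties
broker_id_for master
```

The same pattern extends to filling in the listeners address per machine.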
On each machine, change the working directory to ${KAFKA_HOME} and start Kafka:
bin/kafka-server-start.sh -daemon config/server.properties
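Because -daemon backgrounds the broker, a failed start is easy to miss. One way to verify is to probe the listener port, a sketch using bash's /dev/tcp redirection (on systems without it, `jps | grep Kafka` or tailing logs/server.log works instead):

```shell
# Check whether anything is listening on the Kafka port (bash /dev/tcp probe)
port_open() { (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; }

if port_open localhost 9092; then
  echo "broker is listening on 9092"
else
  echo "broker not reachable on 9092; check logs/server.log"
fi
```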
On master, change the working directory to ${KAFKA_HOME} and create a topic, using either the older --zookeeper form or the newer --bootstrap-server form:
bin/kafka-topics.sh --create --zookeeper localhost:2181/kafka --replication-factor 1 --partitions 1 --topic test
bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic test
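Note that --replication-factor cannot exceed the number of live brokers (three in this cluster), or creation fails; afterwards, `bin/kafka-topics.sh --list --bootstrap-server localhost:9092` and `--describe --topic test` show the result. A small parameter sanity check as a sketch:

```shell
# Sanity-check topic parameters before calling kafka-topics.sh
# (this cluster has 3 brokers: master, slave1, slave2)
brokers=3
replication_factor=1
partitions=1

if [ "$replication_factor" -gt "$brokers" ]; then
  echo "error: replication-factor $replication_factor > broker count $brokers" >&2
  exit 1
fi
echo "ok: $partitions partition(s), replication-factor $replication_factor across $brokers brokers"
```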
On master, change the working directory to ${KAFKA_HOME} and start a console producer:
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
On master, open another terminal, change the working directory to ${KAFKA_HOME}, and start a console consumer. (The old --zookeeper option was removed from kafka-console-consumer.sh in Kafka 2.0, so with 2.2.1 use --bootstrap-server:)
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning