A collection of commands for working with Kafka from Python (pykafka)

1. Install pykafka

pip install pykafka 
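
To confirm the installation, you can import the module and print its version string; a minimal check, assuming the installed release exposes pykafka.__version__:

# Quick sanity check that pykafka is importable
import pykafka
print(pykafka.__version__)  # version attribute assumed to be present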

  

2. Producer

from pykafka import KafkaClient

host = '192.168.20.203:9092,192.168.20.204:9092,192.168.20.205:9092'
client = KafkaClient(hosts=host)
print(client.topics)
topic = client.topics["test_kafka_topic"]

# Create the producer once and reuse it for every message
producer = topic.get_producer()
for i in range(10):
    print(i)
    message = "test message test message" + str(i)
    message = bytes(message, encoding='utf-8')
    producer.produce(message)
# Flush any queued messages and shut the producer down
producer.stop()
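
The producer above queues messages asynchronously. If you want each produce() call to block until the broker confirms delivery, pykafka also offers a synchronous producer; a minimal sketch, reusing the same host list and topic name:

from pykafka import KafkaClient

host = '192.168.20.203:9092,192.168.20.204:9092,192.168.20.205:9092'
client = KafkaClient(hosts=host)
topic = client.topics["test_kafka_topic"]
# get_sync_producer() blocks on every produce() until delivery is confirmed;
# the with-block flushes and stops the producer on exit
with topic.get_sync_producer() as producer:
    for i in range(10):
        producer.produce(bytes("sync message " + str(i), encoding='utf-8'))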

 

3. Consumer

from pykafka import KafkaClient

host = '192.168.20.203:9092,192.168.20.204:9092,192.168.20.205:9092'
client = KafkaClient(hosts=host)
topic = client.topics['test_kafka_topic']
# The balanced consumer coordinates partition assignment across the group via ZooKeeper
balanced_consumer = topic.get_balanced_consumer(
    consumer_group='test_kafka_topic',
    auto_commit_enable=True,
    zookeeper_connect='192.168.20.201:2181,192.168.20.202:2181,192.168.20.203:2181')
for message in balanced_consumer:
    if message is not None:
        print(message.offset)
        print(message.value)
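
If you do not need group coordination through ZooKeeper, pykafka also provides a simple consumer that reads all partitions of a topic directly from the brokers; a minimal sketch, reusing the same client and topic:

from pykafka import KafkaClient

host = '192.168.20.203:9092,192.168.20.204:9092,192.168.20.205:9092'
client = KafkaClient(hosts=host)
topic = client.topics['test_kafka_topic']
# get_simple_consumer() reads every partition without ZooKeeper-based balancing
consumer = topic.get_simple_consumer()
for message in consumer:
    if message is not None:
        print(message.offset)
        print(message.value)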

4. Kafka command-line tools

Create a topic

bin/kafka-topics.sh --create --zookeeper 192.168.183.100:2181 --replication-factor 2 --partitions 3 --topic topicnewtest1

Describe a topic

bin/kafka-topics.sh --describe --zookeeper 192.168.183.100:2181 --topic topicnewtest1

List the topics already created in Kafka

bin/kafka-topics.sh --list --zookeeper 192.168.183.100:2181

Delete a topic

bin/kafka-topics.sh --delete --zookeeper 192.168.183.100:2181 --topic topictest1 

Check a consumer group's offsets on a topic (ConsumerOffsetChecker reports the group's current offset and lag per partition)

bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --group test_kafka_topic --topic test_kafka_topic --zookeeper 192.168.20.203:2181

Increase the number of partitions

bin/kafka-topics.sh --alter --zookeeper 192.168.183.100:2181 --topic topicnewtest1 --partitions 5 
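
After altering the partition count, you can also verify it from Python; a small sketch, assuming the brokers from the console-producer command below and that pykafka's topic.partitions maps partition ids to partition objects:

from pykafka import KafkaClient

client = KafkaClient(hosts='192.168.183.102:9092,192.168.183.103:9092')
topic = client.topics['topicnewtest1']
# topic.partitions is a dict keyed by partition id
print(len(topic.partitions))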

Use Kafka's built-in console producer script

bin/kafka-console-producer.sh --broker-list 192.168.183.102:9092,192.168.183.103:9092 --topic topicnewtest1 

Use Kafka's built-in console consumer script

bin/kafka-console-consumer.sh --zookeeper 192.168.183.100:2181 --from-beginning --topic topicnewtest1

  



Reposted from www.cnblogs.com/captainwade/p/10848001.html