Some lessons learned from building and using Kafka

1. Kafka cluster deployment

   The Kafka version used here is kafka_2.12-2.1.1, which can be downloaded from the official website; once the archive is decompressed it can be used directly. A Kafka deployment consists of two parts: the ZooKeeper deployment and the Kafka server (broker) deployment. Most of the time ZooKeeper should be deployed as a cluster, for example on three machines such as 101, 102 and 103, with a ZooKeeper service on each machine. Sometimes, when resources are limited, a pseudo-cluster can be used instead: one machine runs several processes listening on different ports to achieve the same goal. In that case modify the client port in each instance's configuration file, for example 2172, 2173 and 2174. The Kafka service is likewise started as three processes on three different ports, and with that the pseudo-cluster is built. A true cluster, of course, deploys the ZooKeeper and Kafka services across three machines. The configuration used on each of the three machines is shown below.
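
For the pseudo-cluster variant, a minimal sketch of preparing the three ZooKeeper instances on a single machine might look like the following (the install path /home/kafka/kafka_2.12-2.1.1 and the /home/kafka/data directory are assumptions; adjust them to your own layout):

# copy the sample config three times; each copy gets its own client port and data directory
cd /home/kafka/kafka_2.12-2.1.1
for i in 1 2 3; do
  cp config/zookeeper.properties config/zookeeper$i.properties
  sed -i "s/^clientPort=.*/clientPort=217$((i+1))/" config/zookeeper$i.properties            # 2172, 2173, 2174
  sed -i "s|^dataDir=.*|dataDir=/home/kafka/data/zookeeper$i|" config/zookeeper$i.properties
  mkdir -p /home/kafka/data/zookeeper$i
done

The three Kafka broker instances are prepared the same way, each with its own copy of server.properties, a unique broker.id and its own listener port.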

ZooKeeper configuration

dataDir=/home/kafka/kafka_2.12-0.11.0.1/data/zookeeper

# the port at which the clients will connect

clientPort=2172

# disable the per-ip limit on the number of connections since this is a non-production config
maxClientCnxns=1000
# quorum settings required when running ZooKeeper as a cluster
tickTime=2000
initLimit=10
syncLimit=5
server.1=192.168.1.101:2891:2892
server.2=192.168.1.102:2891:2892
server.3=192.168.1.103:2891:2892

Kafka server configuration

broker.id=1
delete.topic.enable=true
listeners=PLAINTEXT://192.168.1.101:9095
advertised.listeners=PLAINTEXT://192.168.1.101:9095
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dir=/home/kafka/kafka_2.12-0.11.0.1/tmp/kafka-logs
log.dirs=/home/kafka/kafka_2.12-0.11.0.1/tmp/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.1.101:2172,192.168.1.102:2172,192.168.1.103:2172
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
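
The file above is broker 1's server.properties on 192.168.1.101. On the other two machines only broker.id and the listener addresses change; a sketch of the per-host edits (assuming the same install path on every machine):

# on 192.168.1.102: become broker 2 and listen on its own address
sed -i -e 's/^broker.id=1/broker.id=2/' -e 's/192.168.1.101:9095/192.168.1.102:9095/' config/server.properties
# on 192.168.1.103: become broker 3
sed -i -e 's/^broker.id=1/broker.id=3/' -e 's/192.168.1.101:9095/192.168.1.103:9095/' config/server.properties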

Precautions: the number in each server.x entry must match the contents of the myid file under that node's dataDir directory, otherwise ZooKeeper will not start. These three entries tell the nodes about one another, and the two ports after each IP are used for communication between the ZooKeeper nodes and for leader election.
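
For example, creating the myid files could look like this (the dataDir path is the one configured above):

mkdir -p /home/kafka/kafka_2.12-0.11.0.1/data/zookeeper        # on every machine, if it does not exist yet
echo 1 > /home/kafka/kafka_2.12-0.11.0.1/data/zookeeper/myid   # on 192.168.1.101, matching server.1
echo 2 > /home/kafka/kafka_2.12-0.11.0.1/data/zookeeper/myid   # on 192.168.1.102, matching server.2
echo 3 > /home/kafka/kafka_2.12-0.11.0.1/data/zookeeper/myid   # on 192.168.1.103, matching server.3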

How to start the services

nohup ./bin/zookeeper-server-start.sh config/zookeeper.properties >logs/zookeeper.log 2>&1 &
nohup ./bin/kafka-server-start.sh config/server.properties >logs/kafka.log 2>&1 &
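
To confirm that the cluster came up, one quick check (a sketch, run from the Kafka install directory) is to list the broker ids that have registered themselves in ZooKeeper:

# a healthy three-broker cluster prints something like [1, 2, 3]
echo "ls /brokers/ids" | ./bin/zookeeper-shell.sh 192.168.1.101:2172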

Manually create a topic (list all of the ZooKeeper machines in the connection string):
./bin/kafka-topics.sh --create --zookeeper 192.168.1.101:2172,192.168.1.102:2172,192.168.1.103:2172 --replication-factor 2 --partitions 10 --topic test123
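
To verify that the topic was created with the intended layout, describing it is one option (a sketch using the same ZooKeeper address):

# prints the leader, replicas and ISR for each of the 10 partitions of test123
./bin/kafka-topics.sh --describe --zookeeper 192.168.1.101:2172 --topic test123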

List all current topics:
./kafka-topics.sh --list --zookeeper 192.168.1.101:2172

Create a console producer to manually test that Kafka is working:
./bin/kafka-console-producer.sh --broker-list 192.168.1.101:9095 --topic test123
Create a console consumer to manually test that Kafka is working properly:
./bin/kafka-console-consumer.sh --bootstrap-server 192.168.1.101:9095 --from-beginning --topic test123
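
A quick end-to-end smoke test (a sketch reusing the test123 topic created above) is to pipe a few messages into the console producer and read them back with the consumer:

# send ten numbered test messages
seq 1 10 | ./bin/kafka-console-producer.sh --broker-list 192.168.1.101:9095 --topic test123
# read them back and exit after ten messages
./bin/kafka-console-consumer.sh --bootstrap-server 192.168.1.101:9095 --from-beginning --topic test123 --max-messages 10
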
View a consumer group's partition consumption (offsets and lag); note that the group name is a consumer group, not a topic name:
./bin/kafka-consumer-groups.sh --bootstrap-server 192.168.1.101:9092 --describe --group testgroup123
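
If the group name is not known in advance, the groups known to the cluster can be listed first (a sketch using the same bootstrap address):

# list every consumer group the brokers know about, then describe the one of interest as above
./bin/kafka-consumer-groups.sh --bootstrap-server 192.168.1.101:9092 --list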

Describe a topic's partition and replica assignment:
./bin/kafka-topics.sh --describe --zookeeper localhost:2172 --topic notify_op_queue01
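
Because delete.topic.enable=true is set in server.properties, a test topic can be removed once it is no longer needed (a sketch; deletion happens asynchronously in the background):

# mark the topic for deletion; the brokers clean up its partitions afterwards
./bin/kafka-topics.sh --delete --zookeeper 192.168.1.101:2172 --topic test123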

 


Origin www.cnblogs.com/xlsss159/p/11090295.html