Apache Kafka (Part 3) - Kafka CLI Usage

1. Topics CLI

1.1 First, start ZooKeeper and Kafka

> zookeeper-server-start.sh config/zookeeper.properties

INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)

INFO Expiring session 0x100ab41939d0000, timeout of 6000ms exceeded (org.apache.zookeeper.server.ZooKeeperServer)

INFO Processed session termination for sessionid: 0x100ab41939d0000 (org.apache.zookeeper.server.PrepRequestProcessor)

INFO Creating new log file: log.1d (org.apache.zookeeper.server.persistence.FileTxnLog)

 

> kafka-server-start.sh config/server.properties

Socket connection established to localhost/127.0.0.1:2181, initiating session (org.apache.zookeeper.ClientCnxn)

INFO Cluster ID = D69veaGlS5Ce3aHTsxCHkQ (kafka.server.KafkaServer)

INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)

INFO Creating /brokers/ids/0 (is it secure? false) (kafka.zk.KafkaZkClient)

INFO Registered broker 0 at path /brokers/ids/0 with addresses: ArrayBuffer(EndPoint(ip-10-0-2-70.cn-north-1.compute.internal,9092,ListenerName(PLAINTEXT),PLAINTEXT)), czxid (broker epoch): 44 (kafka.zk.KafkaZkClient)

 

From this output we can see that a Kafka broker has started, its broker id is 0, and it is listening on port 9092.
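The broker id and listening port come from config/server.properties. Below is a minimal sketch of the relevant settings; the exact values and the ZooKeeper address depend on your own installation:

# The id of the broker. Must be unique for each broker in a cluster.
broker.id=0

# The address the socket server listens on.
listeners=PLAINTEXT://:9092

# ZooKeeper connection string.
zookeeper.connect=localhost:2181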

 

1.2. Creating a topic

Pay attention to the --replication-factor parameter. For example:

> kafka-topics.sh --zookeeper 10.0.2.70:2181 --topic first_topic --create --partitions 3 --replication-factor 2

This command returns an error:

ERROR org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 2 larger than available brokers: 1.

(kafka.admin.TopicCommand$)

This error indicates that the specified replication factor (2) is larger than the number of available brokers (1).
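To actually use --replication-factor 2 you would need at least two brokers. A common sketch (the file name server-1.properties is made up) is to copy server.properties, change the broker id, listener port, and log directory, and then start a second broker:

> cp config/server.properties config/server-1.properties

# in config/server-1.properties, change at least:
#   broker.id=1
#   listeners=PLAINTEXT://:9093
#   log.dirs=/tmp/kafka-logs-1

> kafka-server-start.sh config/server-1.properties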

Since we only have one broker here, we create the topic with --replication-factor 1:

> kafka-topics.sh --zookeeper 10.0.2.70:2181 --topic first_topic --create --partitions 3 --replication-factor 1

 

Then list the Kafka topics that have been created:

>  kafka-topics.sh --zookeeper 10.0.2.70:2181 --list

first_topic

 

If we need more information about a topic, such as its partitions and replication factor, we use --describe:

> kafka-topics.sh --zookeeper 10.0.2.70:2181 --topic first_topic --describe

Topic:first_topic       PartitionCount:3        ReplicationFactor:1     Configs:

        Topic: first_topic      Partition: 0    Leader: 0       Replicas: 0     Isr: 0

        Topic: first_topic      Partition: 1    Leader: 0       Replicas: 0     Isr: 0

        Topic: first_topic      Partition: 2    Leader: 0       Replicas: 0     Isr: 0

 

We can see that this topic has three partitions, with ids 0, 1, and 2. The leader of each partition is broker 0, the replicas are on broker 0, and the ISR (in-sync replicas) is also broker 0, because the replication factor is 1.
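For comparison, on a cluster with three brokers and --replication-factor 3, --describe would spread leaders and replicas across the brokers. The topic name and the assignment below are made up for illustration:

> kafka-topics.sh --zookeeper 10.0.2.70:2181 --topic replicated_topic --describe
Topic:replicated_topic  PartitionCount:3        ReplicationFactor:3     Configs:
        Topic: replicated_topic Partition: 0    Leader: 0       Replicas: 0,1,2 Isr: 0,1,2
        Topic: replicated_topic Partition: 1    Leader: 1       Replicas: 1,2,0 Isr: 1,2,0
        Topic: replicated_topic Partition: 2    Leader: 2       Replicas: 2,0,1 Isr: 2,0,1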

 

Now we create a second topic:

> kafka-topics.sh --zookeeper 10.0.2.70:2181 --topic second_topic --create --partitions 6 --replication-factor 1

> kafka-topics.sh --zookeeper 10.0.2.70:2181 --list

first_topic

second_topic

 

1.3 Deleting a topic

> kafka-topics.sh --zookeeper 10.0.2.70:2181 --topic second_topic --delete

Topic second_topic is marked for deletion.

Note: This will have no impact if delete.topic.enable is not set to true.

 

As you can see, second_topic is marked for deletion. If delete.topic.enable is not set to true, the topic will not actually be deleted.

> kafka-topics.sh --zookeeper 10.0.2.70:2181 --list

first_topic

 

From the list output we can see that second_topic has been removed, which indicates that delete.topic.enable defaults to true.
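If deletion appears to have no effect on your cluster, check this broker setting in server.properties (in recent Kafka versions it defaults to true):

# Enables topic deletion. If set to false, --delete only marks the topic
# for deletion and it is never actually removed.
delete.topic.enable=true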

 

2. Producer CLI

According to the help text of kafka-console-producer.sh, the required parameters are --broker-list and --topic. Let's run the script with these two parameters:

> kafka-console-producer.sh --broker-list 10.0.2.70:9092 --topic first_topic

 

Then enter messages:

>hello world

>are you ok?

>learning kafka

>another message :)

Ctrl + C to exit

 

When starting a producer, you can also specify producer properties, for example:

> kafka-console-producer.sh --broker-list 10.0.2.70:9092 --topic first_topic --producer-property acks=all

>yep is acked

>hello  ack

>are you ok? acked!

>^C
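Several properties can be passed by repeating --producer-property, or collected in a file and passed with --producer.config. The file name below is made up; acks, retries, and linger.ms are standard producer configs:

# my-producer.properties (example)
acks=all
retries=3
linger.ms=5

> kafka-console-producer.sh --broker-list 10.0.2.70:9092 --topic first_topic --producer.config my-producer.properties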

 

What happens if we specify a topic that does not exist?

> kafka-console-producer.sh --broker-list 10.0.2.70:9092 --topic new_topic

>new topic messages

[2019-08-08 03:37:47,160] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 3 : {new_topic=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)

>what about now

>it is ok

>^C

 

We can see that when the topic does not exist, the first message we send returns a WARN, because the topic has no leader yet. As mentioned before, the producer has a retry mechanism: it fetches the metadata again, finds the leader, and sends the message. Let's check the result with --list:

> kafka-topics.sh --zookeeper 10.0.2.70:2181 --list

first_topic

new_topic

 

> kafka-topics.sh --zookeeper 10.0.2.70:2181 --topic new_topic --describe

Topic:new_topic PartitionCount:1        ReplicationFactor:1     Configs:

        Topic: new_topic        Partition: 0    Leader: 0       Replicas: 0     Isr: 0

 

We can see that new_topic was created automatically, with the default configuration: 1 partition and a replication factor of 1. These defaults come from server.properties, for example:

# The default number of log partitions per topic. More partitions allow greater

# parallelism for consumption, but this will also result in more files across

# the brokers.

num.partitions=1

 

It is recommended to always create topics explicitly rather than relying on automatic topic creation.
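If you want to forbid automatic topic creation entirely, there is a broker setting for that as well (shown here as it would appear in server.properties; its default is true):

# Disable automatic creation of topics that are referenced by a
# producer or consumer but do not exist yet.
auto.create.topics.enable=false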

 

3. Consumer CLI

Looking at the help text of kafka-console-consumer.sh, the required parameters are --bootstrap-server and --topic. Start a consumer accordingly:

> kafka-console-consumer.sh --bootstrap-server 10.0.2.70:9092 --topic first_topic

 

However, this consumer does not read any of the data the producer sent earlier. The reason is that, by default, the console consumer only reads messages produced after it starts.

If we now use the producer to send data to first_topic, the messages are printed in the consumer console as they arrive.

How do we get all the data the producer sent before? Use --from-beginning:

> kafka-console-consumer.sh --bootstrap-server 10.0.2.70:9092 --topic first_topic --from-beginning

learning kafka

are you ok? acked!

hello world

another message :)

yep is acked

hi

are you ok?

hello  ack

 

We can see that the messages are not printed in the order we typed them. This is because ordering is only guaranteed within a single partition, and first_topic has three partitions. If a topic has only one partition, then all messages in that topic are ordered.
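If ordering matters for related messages, one common approach is to produce them with a key, since messages with the same key always go to the same partition. A sketch using console producer/consumer properties (the keys and values are made up):

> kafka-console-producer.sh --broker-list 10.0.2.70:9092 --topic first_topic --property parse.key=true --property key.separator=:
>user1:first event
>user1:second event

> kafka-console-consumer.sh --bootstrap-server 10.0.2.70:9092 --topic first_topic --property print.key=true --from-beginning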

 

3. Consumers in a Group

3.1 Using a consumer group

> kafka-console-consumer.sh --bootstrap-server 10.0.2.70:9092 --topic first_topic --group my-first-app

Started this way, the consumer reads every message the producer writes.

 

Now we start a second consumer, using the same --group my-first-app, and keep producing messages.

 

With the producer on one side and both consumers on the other, we can see that some of the messages go to the first consumer while the others go to the second consumer.

 

This is because there are now two consumers in the group and the topic has three partitions, so one consumer in the group is responsible for reading two of the partitions, while the other reads the remaining partition.

If we then start a third consumer in the same group, each partition is assigned to exactly one consumer; when three messages are sent, one to each partition, each of the three consumers reads one of them.
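While the consumers are running, you can check which consumer owns which partition with kafka-consumer-groups.sh; the CONSUMER-ID column of the --describe output shows the assignment (the full output is covered in section 4):

> kafka-consumer-groups.sh --bootstrap-server 10.0.2.70:9092 --describe --group my-first-app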

 

3.2 Using --from-beginning

Use --from-beginning with a second consumer group:

> kafka-console-consumer.sh --bootstrap-server 10.0.2.70:9092 --topic first_topic --group my-second-app --from-beginning

learning kafka

are you ok? acked!

 

This consumer sees all the previous messages. If we run the same command again, however, no messages are printed.

This is because Kafka records the committed offsets of each consumer group. When this group reads the data again, it continues from the recorded offsets instead of starting over.

 

4. Consumer Group CLI

The help text of kafka-consumer-groups.sh describes it as follows:

This tool helps to list all consumer groups, describe a consumer group, delete consumer group info, or reset consumer group offsets.

 

The required parameter is --bootstrap-server.

 

First, list all the groups:

> kafka-consumer-groups.sh --bootstrap-server 10.0.2.70:9092 --list

my-first-app

my-first-application

my-second-app

 

View details of a group:

> kafka-consumer-groups.sh --bootstrap-server 10.0.2.70:9092 --describe --group my-first-app

 

The first thing to notice in the output is the line: Consumer group 'my-first-app' has no active members. This is because we have stopped all consumers in this group, so there are currently no active members.

The output then shows, for each partition, the current offset (CURRENT-OFFSET), the log end offset (LOG-END-OFFSET), and the LAG, which is the number of messages not yet consumed (that is, the difference between LOG-END-OFFSET and CURRENT-OFFSET).
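For reference, the --describe output looks roughly like this (the column layout varies slightly between Kafka versions, and the numbers here are made up):

GROUP           TOPIC        PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID  HOST  CLIENT-ID
my-first-app    first_topic  0          10              10              0    -            -     -
my-first-app    first_topic  1          11              11              0    -            -     -
my-first-app    first_topic  2          11              11              0    -            -     -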

Next, we produce a few more messages to first_topic and then describe the consumer group again:

 

This time the LAG column is non-zero, because the new messages have not been consumed yet.

Then read the topic with this consumer group:

> kafka-console-consumer.sh --bootstrap-server 10.0.2.70:9092 --topic first_topic --group my-first-app

help

yep

 

Describe the group again:

 

We can see that LAG is back to 0, and the output now lists the ids of the active consumers.

 

5. Resetting Offsets

We have seen that Kafka records the offsets of each consumer group. So how do we reset the offsets of a consumer group? Use:

> kafka-consumer-groups.sh --bootstrap-server 10.0.2.70:9092 --reset-offsets --group my-first-app --topic first_topic --to-earliest --execute

 

GROUP                          TOPIC                          PARTITION  NEW-OFFSET

my-first-app                   first_topic                    0          0

my-first-app                   first_topic                    2          0

my-first-app                   first_topic                    1          0

 

Verify with the consumer:

> kafka-console-consumer.sh --bootstrap-server 10.0.2.70:9092 --topic first_topic --group my-first-app

learning kafka

are you ok? acked!

 

Instead of resetting to the beginning, you can also move the offsets with --shift-by.

 

A positive value for --shift-by moves the offsets forward (skipping messages). To move them backward and re-read messages, use a negative value, for example:
> kafka-consumer-groups.sh --bootstrap-server 10.0.2.70:9092 --reset-offsets --group my-first-app --topic first_topic --shift-by -2 --execute

 

GROUP                          TOPIC                          PARTITION  NEW-OFFSET

my-first-app                   first_topic                    0          12

my-first-app                   first_topic                    2          13

my-first-app                   first_topic                    1          13

 

Then verify with the consumer:

> kafka-console-consumer.sh --bootstrap-server 10.0.2.70:9092 --topic first_topic --group my-first-app

help

yep
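--to-earliest and --shift-by are only two of the reset options; kafka-consumer-groups.sh also supports --to-latest, --to-offset, and --to-datetime, among others, and you can preview the result with --dry-run instead of --execute. A sketch:

> kafka-consumer-groups.sh --bootstrap-server 10.0.2.70:9092 --reset-offsets --group my-first-app --topic first_topic --to-latest --dry-run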

 

6. Kafka UI

All of the commands above are command-line based. You can also access and manage Kafka through a graphical interface, such as Kafka Tool:

 

The official website of this tool is:

http://www.kafkatool.com/

 
