Message middleware: Kafka (Part 1)

Kafka Introduction

AMQP

What AMQP is

AMQP (Advanced Message Queuing Protocol) is an application-layer standard for message-oriented middleware: it defines a unified, wire-level messaging protocol so that clients and brokers from different vendors can interoperate.
The best-known broker implementing AMQP is RabbitMQ. Kafka, by contrast, does not implement AMQP; it uses its own binary protocol over TCP.

Kafka

Kafka supports dynamic cluster expansion (coordinated through its ZooKeeper component).
The defining trait of RabbitMQ and ActiveMQ is that a message is deleted once it has been consumed.
Kafka, by contrast, retains all messages for a configurable period (for example, two days), regardless of whether they have been consumed.

RabbitMQ is an open-source message queue written in Erlang. It supports many protocols out of the box — AMQP, XMPP, SMTP, STOMP — which makes it fairly heavyweight and better suited to enterprise development. It implements the broker architecture, meaning messages are queued in a central broker before being delivered to clients, and it has very good support for routing, load balancing, and data persistence.

How Kafka works

Kafka runs as a cluster on one or more servers, which can span multiple data centers.
The Kafka cluster stores streams of records in categories called topics.
Each record consists of a key, a value, and a timestamp.

Terminology

Producer: an object that publishes messages to Kafka topics (a Kafka topic producer).
Consumer: an object that subscribes to topics and processes the published messages.
Topic: a category or feed name to which messages are published (in practice it can be seen as a queue); each category of messages is one topic.
Broker: published messages are stored in a set of servers collectively called a Kafka cluster; each server in the cluster is a broker. Consumers can subscribe to one or more topics and pull data from the brokers, thereby consuming the published messages.
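Within a topic, messages are spread across partitions, and by default the producer picks a partition by hashing the record key, so records with the same key always land on the same partition. A simplified Python sketch of this idea (Kafka's default partitioner actually uses Murmur2; MD5 is used here purely to show that the mapping is deterministic per key):

```python
import hashlib

def partition_for(key: bytes, num_partitions: int) -> int:
    """Pick a partition for a record by hashing its key.

    Kafka's Java producer uses Murmur2 here; MD5 is a stand-in to
    illustrate the deterministic key -> partition mapping.
    """
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# The same key always maps to the same partition, preserving per-key ordering.
assert partition_for(b"order-42", 4) == partition_for(b"order-42", 4)

# Different keys spread across the topic's partitions.
partitions = {partition_for(f"key-{i}".encode(), 4) for i in range(100)}
print(sorted(partitions))
```

This is why choosing a good key matters: all records sharing a key go through one partition, which guarantees their ordering but can also create a hot partition.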

Four core API
1. The Producer API allows an application to publish a stream of records to one or more topics.
2. The Consumer API allows an application to subscribe to one or more topics and process the stream of records produced to them.
3. The Streams API allows an application to act as a stream processor: it consumes an input stream from one or more topics and produces an output stream to one or more output topics, effectively transforming input streams into output streams.
4. The Connector API allows building and running reusable producers or consumers that connect Kafka topics to existing applications or data systems. For example, a connector to a relational database might capture every change to a table.
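The Streams API idea — consume an input stream, transform it record by record, and emit an output stream — can be illustrated without Kafka at all. The real Kafka Streams API is a Java library; this Kafka-free Python sketch only mirrors the shape of the classic word-count topology, where each input line produces updated (word, count) records on the output:

```python
from collections import Counter
from typing import Iterable, Iterator, Tuple

def word_count_stream(lines: Iterable[str]) -> Iterator[Tuple[str, int]]:
    """Consume a stream of text records and emit (word, running_count)
    updates, the way a word-count topology emits to its output topic."""
    counts = Counter()
    for line in lines:
        for word in line.lower().split():
            counts[word] += 1
            yield word, counts[word]

input_topic = ["hello kafka", "hello streams"]
output_topic = list(word_count_stream(input_topic))
print(output_topic)  # [('hello', 1), ('kafka', 1), ('hello', 2), ('streams', 1)]
```

Note how the output is itself a stream of records, not a final table: downstream consumers see every intermediate count, which is exactly the changelog-style output a Kafka Streams word count produces.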

Topics and logs (omitted)
Producer
Consumer

Common Commands

Management

## Create a topic (4 partitions, 2 replicas)
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 2 --partitions 4 --topic test

Queries

## Describe the cluster's topics
bin/kafka-topics.sh --describe --zookeeper localhost:2181
 
## List consumer groups (old ZooKeeper-based consumers)
bin/kafka-consumer-groups.sh --zookeeper 127.0.0.1:2181 --list
 
## List new consumer groups (0.9+)
bin/kafka-consumer-groups.sh --new-consumer --bootstrap-server localhost:9092 --list
 
## Show consumption details for a consumer group (only for offsets stored in ZooKeeper)
bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --zookeeper localhost:2181 --group test
 
## Show consumption details for a consumer group (0.9+)
bin/kafka-consumer-groups.sh --new-consumer --bootstrap-server localhost:9092 --describe --group test-consumer-group
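The describe output reports, per partition, the group's committed offset and the partition's log end offset; consumer lag is simply their difference. A small Python sketch of the computation (the per-partition offsets below are made up for illustration, shaped like the columns `kafka-consumer-groups.sh --describe` prints):

```python
def consumer_lag(log_end_offset: int, committed_offset: int) -> int:
    """Lag = messages written to the partition but not yet consumed."""
    return log_end_offset - committed_offset

# Hypothetical per-partition offsets for a consumer group.
partitions = [
    {"partition": 0, "log_end_offset": 1500, "committed_offset": 1500},
    {"partition": 1, "log_end_offset": 2000, "committed_offset": 1800},
]

total_lag = sum(
    consumer_lag(p["log_end_offset"], p["committed_offset"]) for p in partitions
)
print(total_lag)  # 200
```

A steadily growing total lag means the group is consuming more slowly than producers are writing.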

Producing and consuming

## Producer
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
 
## Consumer (old, ZooKeeper-based)
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test
 
## New producer (0.9+)
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config config/producer.properties
 
## New consumer (0.9+)
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --new-consumer --from-beginning --consumer.config config/consumer.properties
 
## More advanced usage
bin/kafka-simple-consumer-shell.sh --broker-list localhost:9092 --topic test --partition 0 --offset 1234  --max-messages 10

Rebalancing leaders (preferred replica election)

bin/kafka-preferred-replica-election.sh --zookeeper zk_host:port/chroot	

Kafka's built-in load-testing command

bin/kafka-producer-perf-test.sh --topic test --num-records 100 --record-size 1 --throughput 100  --producer-props bootstrap.servers=localhost:9092

Increasing the replication factor
1. Create a reassignment JSON file

cat > increase-replication-factor.json <<EOF
{"version":1, "partitions":[
{"topic":"__consumer_offsets","partition":0,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":1,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":2,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":3,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":4,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":5,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":6,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":7,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":8,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":9,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":10,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":11,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":12,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":13,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":14,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":15,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":16,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":17,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":18,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":19,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":20,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":21,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":22,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":23,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":24,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":25,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":26,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":27,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":28,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":29,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":30,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":31,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":32,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":33,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":34,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":35,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":36,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":37,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":38,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":39,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":40,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":41,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":42,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":43,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":44,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":45,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":46,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":47,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":48,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":49,"replicas":[0,1]}]
}
EOF
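Writing out all 50 partition entries for `__consumer_offsets` by hand is tedious and error-prone; the same file can be generated with a short script (the topic name, partition count, and replica list below match the file above):

```python
import json

def reassignment_plan(topic: str, num_partitions: int, replicas: list) -> dict:
    """Build a kafka-reassign-partitions.sh JSON plan that assigns the
    same replica list to every partition of the topic."""
    return {
        "version": 1,
        "partitions": [
            {"topic": topic, "partition": p, "replicas": replicas}
            for p in range(num_partitions)
        ],
    }

plan = reassignment_plan("__consumer_offsets", 50, [0, 1])
with open("increase-replication-factor.json", "w") as f:
    json.dump(plan, f)
print(len(plan["partitions"]))  # 50
```

Generating the plan also makes it easy to adapt when the partition count or broker IDs differ on your cluster.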

2. Execute

bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file increase-replication-factor.json --execute

3. Verify

bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file increase-replication-factor.json --verify

Source: https://www.cnblogs.com/mlan/p/11084526.html

Origin blog.csdn.net/wg22222222/article/details/104822077