Message Queues: RabbitMQ and Kafka

Learning notes on message queues and exchanges

I. Overview

Message queues are important middleware in distributed systems and play a key role in achieving high performance, high availability, and loose coupling. With a message queue, a distributed system can easily implement the following capabilities:

  • Decoupling : separates the upstream and downstream of a process; the upstream focuses on producing messages, the downstream on processing them.
  • Broadcasting : a message produced upstream can easily be processed by multiple downstream services.
  • Buffering : in the face of a sudden traffic spike, the message queue acts as a buffer that protects downstream services, letting them process messages at their actual consumption capacity.
  • Asynchrony : the upstream can return immediately after sending a message, and the downstream processes it asynchronously.
  • Redundancy : historical messages are retained, so failed or abnormal processing can be retried or replayed, preventing message loss.

The figure below shows the basic model of a message queue: the party that puts data into the queue is the producer, and the party that takes data out of the queue is the consumer.

(Figure: Producer → [ Message 3 | Message 2 | Message 1 ] → Consumer)

The figure shows the overall structure, which involves three types of roles:

1) Producer (message producer) : generates messages and sends them to the Broker;

2) Broker (message processing center) : stores, acknowledges, and retries messages; it generally contains multiple queues;

3) Consumer (message consumer) : fetches messages from the Broker and processes them;

II. Product introduction and installation

                        RabbitMQ                  ActiveMQ                            RocketMQ          Kafka
Company/Community       Rabbit                    Apache                              Alibaba           Apache
Development language    Erlang                    Java                                Java              Scala & Java
Protocol support        AMQP, XMPP, SMTP, STOMP   OpenWire, STOMP, REST, XMPP, AMQP   custom protocol   custom protocol
Availability            high                      average                             high              high
Single-node throughput  average                   poor                                high              very high
Message latency         microseconds              milliseconds                        milliseconds      within milliseconds
Message reliability     high                      average                             high              average

1. rabbitMQ

rabbitMQ official website

RabbitMQ is an open source message queue system developed in Erlang and implemented on top of the AMQP protocol. AMQP's main characteristics are message orientation, queuing, routing (both point-to-point and publish/subscribe), reliability, and security. AMQP is mostly used in enterprise systems with high requirements for data consistency, stability, and reliability, where performance and throughput are secondary.

The installation reference is the image page on Docker Hub.

1.1 Installation

Installing RabbitMQ with Docker (here on Windows):

docker run -d --hostname my-rabbit --name my-rabbit -e RABBITMQ_DEFAULT_USER=admin -e RABBITMQ_DEFAULT_PASS=123456 -p 5672:5672 -p 15672:15672 rabbitmq:3-management

RabbitMQ parameter description:

  • hostname : sets the host name; used in clusters, optional for a stand-alone instance
  • RABBITMQ_DEFAULT_USER : the user to create
  • RABBITMQ_DEFAULT_PASS : the password for that user

Docker parameter description:

  • -d : run in the background
  • --name : the name of the container created from the image
  • -p : map container ports to host ports

The RabbitMQ management interface is then available at http://localhost:15672 (port 15672 was mapped to the host by the command above).


1.2 Function

Several concepts in RabbitMQ:

  • channel : the tool used to operate MQ
  • exchange : routes messages to queues
  • queue : buffers messages
  • virtual host : a logical grouping of resources such as queues and exchanges

Common message models:

1. Without an exchange (a message can only be consumed by one consumer)

  • Basic message queue (BasicQueue)

  • Work message queue (WorkQueue)

2. With an exchange

  • Publish/Subscribe, which comes in three forms depending on the exchange type:

    • Fanout Exchange: broadcast

    • Direct Exchange: routing

    • Topic Exchange: topic

1.3 Frequently Asked Questions
  • Message reliability (how to ensure that a sent message is consumed at least once)
  • Delayed messages (how to achieve delayed delivery of messages)
  • Message accumulation (how to handle millions of backlogged messages that cannot be consumed in time)
  • High availability (how to avoid outages caused by a single point of MQ failure)
1.3.1 Message reliability

A message travels from the producer to the exchange, then to a queue, then to a consumer, and can be lost at each step:

  • Lost on send:
    • The message sent by the producer never reaches the exchange
    • The message reaches the exchange but is not routed to any queue
  • MQ goes down, and the queue loses its messages
  • The consumer crashes after receiving the message but before processing it

Solutions:

  1. Producer confirmation mechanism

RabbitMQ provides a publisher confirm mechanism to avoid losing messages on their way to MQ. After a message is sent, MQ returns a result to the sender indicating whether the message was handled successfully. There are two kinds of results:

  • publisher-confirm, the sender confirmation
    • The message was successfully delivered to the exchange: ack (acknowledge) is returned
    • The message was not delivered to the exchange: nack is returned
  • publisher-return, the sender receipt
    • The message was delivered to the exchange but could not be routed to any queue: ack is returned together with the reason for the routing failure

When using the confirm mechanism, each message must be given a globally unique id so that different messages can be distinguished and ack conflicts avoided.

2) Message persistence

Exchanges and queues created on the RabbitMQ client are not persistent unless Durability is set to Durable, and messages must be sent with Delivery mode: persistent; otherwise none of them are persistent. After MQ restarts, non-persistent exchanges, queues, and messages disappear. A minimal declaration sketch follows.
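A minimal sketch (Spring AMQP; the queue name is illustrative) of a durable queue plus a persistent message:

@Bean
public Queue durableQueue() {
    // durable = true: the queue definition survives a broker restart
    return QueueBuilder.durable("durable.queue").build();
}

// Mark an individual message as persistent before sending it
// (with Spring AMQP's defaults this is already the case, see 1.6.1)
Message message = MessageBuilder
        .withBody("hello".getBytes(StandardCharsets.UTF_8))
        .setDeliveryMode(MessageDeliveryMode.PERSISTENT)
        .build();
rabbitTemplate.send("durable.queue", message);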

3) Consumer message confirmation

RabbitMQ supports a consumer acknowledgement mechanism: the consumer sends an ack receipt to MQ after processing a message, and MQ deletes the message only after receiving that ack. Spring AMQP allows three confirmation modes to be configured:

  • manual: manual ack; the API must be called to send the ack after the business code finishes
  • auto: automatic ack; Spring watches the listener code for exceptions and returns ack if none is thrown, nack otherwise
  • none: ack disabled; MQ assumes the consumer will process every message it receives and deletes it immediately after delivery

4) Failure retry mechanism

When the consumer throws an exception, the message is requeued, redelivered to the consumer, fails again, is requeued again, and so on in an infinite loop, driving up MQ's message-processing load and creating unnecessary pressure.

1.3.2 Dead letter exchange

A message in a queue becomes a dead letter when it meets one of the following conditions:

  • The consumer declares consumption failure with basic.reject or basic.nack, and the message's requeue parameter is set to false
  • The message expires: its TTL ends without anyone consuming it
  • The queue is full, so the earliest messages may become dead letters

If the queue's dead-letter-exchange attribute is configured with an exchange, dead letters in the queue are delivered to that exchange, which is called a Dead Letter Exchange (DLX).

How to bind a dead letter exchange to a queue (see the sketch below):

  • Set the queue's dead-letter-exchange attribute to the target exchange
  • Set the queue's dead-letter-routing-key attribute to the routing key used between the dead letter exchange and the dead letter queue
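A minimal sketch (Spring AMQP, assuming version 2.2+ for the fluent QueueBuilder methods; older versions set withArgument("x-dead-letter-exchange", ...) instead; names are illustrative):

@Bean
public Queue businessQueue() {
    return QueueBuilder.durable("business.queue")
            .deadLetterExchange("dlx.exchange")  // sets x-dead-letter-exchange
            .deadLetterRoutingKey("dl")          // sets x-dead-letter-routing-key
            .build();
}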

TTL, that is, Time-To-Live: if a message's TTL expires before it is consumed, the message becomes a dead letter. The TTL can come from two places, as the sketch below shows:

  • The queue holding the message has a TTL set
  • The message itself has a TTL set
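A sketch of both options (Spring AMQP, illustrative names; ttl() assumes version 2.2+):

// 1) Queue-level TTL: every message in this queue expires after 10 seconds
@Bean
public Queue ttlQueue() {
    return QueueBuilder.durable("ttl.queue")
            .ttl(10000) // x-message-ttl, in milliseconds
            .build();
}

// 2) Message-level TTL: this particular message expires after 5 seconds
Message msg = MessageBuilder
        .withBody("hi".getBytes(StandardCharsets.UTF_8))
        .setExpiration("5000") // milliseconds, passed as a string
        .build();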
1.3.3 Lazy queue
  • The message accumulation problem

    When producers send messages faster than consumers can process them, messages pile up in the queue until its storage limit is reached, at which point the earliest messages may become dead letters and be discarded. This is the message accumulation problem.

    There are three ways to mitigate it:

    • Add more consumers to increase consumption speed
    • Use a thread pool in the consumer to speed up message processing
    • Expand the queue's capacity to raise the backlog limit
  • Lazy queues

    Starting from RabbitMQ 3.6.0, the concept of Lazy Queues was added.

    Lazy queues have the following characteristics (a declaration sketch follows):

    • Messages are stored directly on disk upon receipt, not in memory
    • Messages are read from disk into memory only when consumers are ready to consume them
    • Millions of stored messages are supported
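A minimal declaration sketch (Spring AMQP, assuming version 2.2+ where QueueBuilder has a lazy() shortcut; the queue name is illustrative):

@Bean
public Queue lazyQueue() {
    return QueueBuilder.durable("lazy.queue")
            .lazy() // sets x-queue-mode=lazy: messages go straight to disk
            .build();
}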
1.3.4 MQ cluster
  • **Ordinary cluster:** a distributed cluster that spreads queues across the nodes of the cluster, improving the cluster's overall concurrency.
    • Nodes share part of the data, namely exchange and queue metadata, but not the messages inside queues
    • When a node is accessed and the requested queue lives on another node, the data is fetched from the owning node and returned through the current one
    • If the node hosting a queue goes down, the messages in that queue are lost
  • **Mirrored cluster:** a master-slave cluster. It adds master-slave backup on top of the ordinary cluster, improving the cluster's data availability.
    • Exchanges, queues, and the messages in queues are synchronously backed up between each node's mirrors
    • The node that creates a queue is that queue's master node; the nodes it is backed up to are its mirror nodes
    • One queue's master node may be another queue's mirror node
    • All operations are performed by the master node and then synchronized to the mirror nodes
    • When the master goes down, a mirror node takes over as the new master

Although the mirrored cluster provides master-slave replication, the synchronization is not strongly consistent, so data can still be lost in some cases. Therefore, starting from RabbitMQ 3.8, a new feature was introduced: quorum queues, which replace mirrored queues and use the Raft protocol underneath to guarantee master-slave data consistency (a declaration sketch follows).
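A minimal sketch (Spring AMQP, assuming version 2.2+ where QueueBuilder has a quorum() shortcut; the name is illustrative):

@Bean
public Queue quorumQueue() {
    return QueueBuilder.durable("quorum.queue")
            .quorum() // sets x-queue-type=quorum (Raft-replicated)
            .build();
}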

2. Kafka

Kafka Chinese documentation

Kafka was originally developed by LinkedIn. It is a distributed, partitioned, replicated messaging system that uses ZooKeeper for coordination. Its biggest strength is processing large amounts of data in real time, covering a variety of scenarios: Hadoop-based batch processing systems, low-latency real-time systems, Storm/Spark streaming engines, web/nginx logs, access logs, messaging services, and so on. It is written in Scala; LinkedIn contributed it to the Apache Foundation in 2010, and it became a top-level open source project.

2.1 Background

Kafka was born to solve LinkedIn's data pipeline problem. Around 2010 LinkedIn was using ActiveMQ for data exchange, but ActiveMQ fell far short of LinkedIn's requirements for a data delivery system and frequently failed for various reasons. To solve this problem, LinkedIn decided to build its own messaging system; Jay Kreps, then LinkedIn's chief architect, assembled a team to develop it.

2.2 Features of Kafka
  • High throughput, low latency : Kafka can process hundreds of thousands of messages per second with latency as low as a few milliseconds
  • Scalability : a Kafka cluster supports hot scaling (brokers can be added without downtime)
  • Persistence and reliability : messages are persisted to local disk, and data replication prevents data loss
  • Fault tolerance : nodes in the cluster are allowed to fail (with a replication factor of n, up to n-1 nodes may fail)
  • High concurrency : thousands of clients can read and write simultaneously
2.3 Kafka application scenarios
  • Log collection : a company can use Kafka to collect logs from its services and expose them through Kafka as a unified interface to consumers such as Hadoop, HBase, and Solr
  • Messaging system : decoupling producers from consumers, caching messages, etc.
  • User activity tracking : Kafka is often used to record the activities of web or app users, such as browsing, searching, and clicking. Servers publish this activity to Kafka topics, and subscribers consume those topics for real-time monitoring and analysis, or load the data into Hadoop or a data warehouse for offline analysis and mining
  • Operational metrics : Kafka is also used to record operational monitoring data, collecting metrics from distributed applications and producing centralized feeds for alerts and reports
  • Stream processing : e.g. Spark Streaming and Storm
  • Event sourcing
2.4 Installation

Install ZooKeeper

docker pull zookeeper
docker run --name zoo -p 2181:2181 -d zookeeper

Install kafka (method 1)

docker pull bitnami/kafka
docker run --name kafka -p 9092:9092  -e KAFKA_ZOOKEEPER_CONNECT=10.30.1.13:2181 -e ALLOW_PLAINTEXT_LISTENER=yes -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 -d  bitnami/kafka

The Docker container deployment must specify the following environment variables:

  • KAFKA_ZOOKEEPER_CONNECT : the ZooKeeper address in host:port form.
  • ALLOW_PLAINTEXT_LISTENER : allows the use of the PLAINTEXT listener.
  • KAFKA_ADVERTISED_LISTENERS : the list of addresses advertised for the Kafka broker; Kafka sends them to clients on the initial connection. The format is PLAINTEXT://host:port. Since container port 9092 is mapped to host port 9092, the host is given as localhost, so a test program on the host can connect to Kafka.
  • KAFKA_LISTENERS : the list of addresses on which the Kafka broker listens for incoming connections, in the format PLAINTEXT://host:port; 0.0.0.0 means accept connections on all interfaces. Set this variable whenever the previous one is set.

Cluster deployment with docker-compose (method 2)

docker-compose.yml

version: '2'

services:
  zoo1:
    image: zookeeper
    container_name: zoo
    ports:
      - 2181:2181

  kafka1:
    image: 'bitnami/kafka:latest'
    ports:
      - '9092:9092'
    container_name: kafka1
    environment:
      - KAFKA_ZOOKEEPER_CONNECT=zoo1:2181
      - KAFKA_BROKER_ID=1
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092
    depends_on:
      - zoo1

  kafka2:
    image: 'bitnami/kafka:latest'
    ports:
      - '9093:9092'
    container_name: kafka2
    environment:
      - KAFKA_ZOOKEEPER_CONNECT=zoo1:2181
      - KAFKA_BROKER_ID=2
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9093
    depends_on:
      - zoo1

  kafka3:
    image: 'bitnami/kafka:latest'
    ports:
      - '9094:9092'
    container_name: kafka3
    environment:
      - KAFKA_ZOOKEEPER_CONNECT=zoo1:2181
      - KAFKA_BROKER_ID=3
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9094
    depends_on:
      - zoo1

Kafka GUI tool installation

Installation address

After the installation is complete, add the Kafka connection information in the tool; once added, the cluster and its topics can be browsed in the interface.

2.5 Functions


2.5.1 Several concepts in Kafka
  • broker

    • A Kafka cluster consists of multiple brokers, achieving load balancing and fault tolerance
    • Brokers are stateless; they maintain cluster state through ZooKeeper
    • A Kafka broker can handle hundreds of thousands of reads and writes per second, and each broker can handle terabytes of messages without affecting performance
  • zookeeper

    • ZooKeeper manages and coordinates the brokers and stores Kafka metadata (for example, which topics and partitions exist)
    • The ZooKeeper service mainly notifies producers and consumers when a new broker joins the Kafka cluster or an existing broker fails
  • producer

    • The producer is responsible for pushing data to the topic of the broker
  • consumer

    • Consumers are responsible for pulling data from the broker's topic and processing it themselves
  • consumer group (consumer group)


  • Consumer group is a scalable and fault-tolerant consumer mechanism provided by Kafka
  • A consumer group can contain multiple consumers
  • A consumer group has a unique ID (group Id)
  • Consumers in the group consume all partition data of the topic together

Summary of personal understanding:

Broadly speaking, there are two message models: queue and publish-subscribe. In the queue model, a group of consumers reads from the server, and each message is processed by exactly one of them. In the publish-subscribe model, a message is broadcast to all consumers, each of which can process it. Kafka provides a single consumer abstraction that covers both models: the consumer group. Consumers identify themselves by a consumer group name, and a message published on a topic is delivered to one consumer within each subscribed group. If all consumers belong to the same group, this behaves like the queue model; if every consumer is in its own group, it becomes a full publish-subscribe model. The sketch below illustrates both.
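A sketch with Spring Kafka listeners (topic and group names are made up for the example):

@KafkaListener(topics = "demoTopic", groupId = "groupA")
public void queueStyleA(String msg) {
    // members of groupA split the partitions: each message reaches only one of them
}

@KafkaListener(topics = "demoTopic", groupId = "groupA")
public void queueStyleB(String msg) { }

@KafkaListener(topics = "demoTopic", groupId = "groupB")
public void broadcastStyle(String msg) {
    // a different group: groupB independently receives every message as well
}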

  • Partitions


A partition can only be consumed by one consumer (within the same consumer group) at a time

  • replication

    • Replicas can ensure that data is still available when a service fails
  • topic (topic)

    • A topic is a logical concept to which producers publish data and from which consumers pull data
    • Topics in Kafka must have unique identifiers; there can be any number of topics
    • The messages in a topic are structured; generally a topic contains one type of message
    • Once a producer sends messages to a topic, those messages cannot be updated (changed)
  • offset (offset)

    • The offset records the position of the next message to be delivered to a consumer (see the sketch below)
    • By default Kafka stores offsets in ZooKeeper
    • Within a partition, messages are stored in order, and each message gets an incrementing id: the offset
    • Offsets are only meaningful within a partition; they have no meaning across partitions
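To make offsets concrete, a minimal sketch with the plain Kafka Java client (topic and group names are illustrative; imports from org.apache.kafka.clients.consumer, org.apache.kafka.common.serialization, java.time, and java.util are omitted):

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ConsumerConfig.GROUP_ID_CONFIG, "demoGroup");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(Collections.singletonList("demoTopic"));
    for (ConsumerRecord<String, String> r : consumer.poll(Duration.ofSeconds(1))) {
        // the offset is the record's position within its own partition
        System.out.printf("partition=%d offset=%d value=%s%n",
                r.partition(), r.offset(), r.value());
    }
}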
2.5.2 Four core APIs of Kafka
  • The Producer API allows applications to publish record streams to one or more Kafka topics.
  • The Consumer API allows an application to subscribe to one or more topics and process the stream of records produced to them.
  • The Streams API allows an application to act as a stream processor, consuming input streams from one or more topics and producing an output stream to one or more output topics, effectively transforming input streams into output streams.
  • The Connector API allows building and running reusable producers or consumers that connect Kafka topics to existing applications or data systems. For example, a connector to a relational database might capture every change to a table.
2.5.3 Kafka producer idempotence
  • Idempotence
    • Taking HTTP as an example: one request or several identical requests yield the same response; in other words, performing an operation multiple times has the same effect as performing it once
  • Kafka producer idempotence
    • When the producer sends a message and a retry occurs, the message may be sent more than once. Without idempotence, the partition could end up storing duplicate copies of the same message

To make producers idempotent, Kafka introduces the concepts of Producer ID (PID) and Sequence Number.

  • PID: each producer is assigned a unique PID at initialization; this PID is transparent to users
  • Sequence Number: each message sent by a given producer (PID) to a given topic partition carries a Sequence Number that increases monotonically from 0

When Kafka's producer produces a message, it attaches the PID and sequence number, and both are sent along with the message. When Kafka receives the message, it stores the message, PID, and sequence number together. If the ack response is lost and the producer retries, Kafka uses the PID and sequence number to decide whether to store the message again: the message is discarded if its sequence number is less than or equal to the one already stored for that PID and partition. A configuration sketch follows.
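A configuration sketch with the plain Java producer (enable.idempotence is the Kafka producer setting that turns this on; the broker address is illustrative):

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true"); // turns on PID + sequence numbers
props.put(ProducerConfig.ACKS_CONFIG, "all"); // idempotence requires acks=all
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
KafkaProducer<String, String> producer = new KafkaProducer<>(props);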

2.5.4 Producer partition write strategy

The producer writes messages to a topic, and Kafka distributes them to partitions according to the configured strategy:

  • Round-robin partition strategy
    • The default and most widely used strategy; it distributes messages as evenly as possible across all partitions
    • If the message key is null, the round-robin algorithm is used to balance the partitions
  • Random partition strategy
    • Assigns each message to a random partition. In early versions this was the default; it also balances writes across partitions, but the round-robin strategy performs better, so the random strategy is rarely used now
  • Key-based partition strategy
    • Assigning by key can cause data skew: if one key carries a large amount of data, all of it lands in a single partition because the key value is the same, making that partition far larger than the others
  • Custom partition strategy (a sketch follows)
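A minimal sketch of a custom partitioner (the class name and routing rule are made up for the example; it is registered via the producer config partitioner.class):

import java.util.Map;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;

public class KeyHashPartitioner implements Partitioner {
    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        // route all messages sharing a key to the same partition; null keys go to partition 0
        return key == null ? 0 : Math.floorMod(key.hashCode(), numPartitions);
    }

    @Override
    public void configure(Map<String, ?> configs) { }

    @Override
    public void close() { }
}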

Out-of-order problem

The round-robin and random strategies share a drawback: data lands in Kafka in no particular order. Partitioning by key preserves order to a degree (local, per-partition order) but may cause data skew, so in a real production environment you have to weigh the trade-off against your actual situation.

2.5.5 Consumer rebalance mechanism

Rebalance is the mechanism by which all consumers in a consumer group reach agreement on how to distribute the partitions of the subscribed topics among themselves.

Rebalance is triggered when:

  • The number of consumers in the group changes, for example when a new consumer joins the group or an existing consumer stops.

Adverse effects of rebalance:

  • When a rebalance occurs, every consumer in the group takes part in the coordination, and Kafka applies the allocation strategy to make the resulting assignment as fair as possible.
  • A rebalance has a severe impact on the group: while it is in progress, all consumers stop working until it completes.
2.5.6 Consumer partition allocation strategy
  • Range allocation strategy
    • The range strategy is Kafka's default; it keeps the number of partitions consumed by each consumer balanced. Note that range assignment is computed per topic. (Assignment rule: partitions divided by consumers, with the remainder going to the first consumers. For example, 5 partitions over 3 consumers gives the first two consumers 2 partitions each, partitions 0-1 and 2-3, and the third consumer 1 partition, partition 4.)
  • RoundRobin strategy
    • The round-robin strategy sorts all consumers in the group and all the partitions they subscribe to lexicographically (by the hashcode of topic and partition), then assigns partitions to consumers one by one in turn
  • Sticky allocation strategy
    • Distributes partitions as evenly as possible
    • When a rebalance occurs, the new assignment stays as close as possible to the previous one; absent a rebalance, the sticky strategy behaves much like the RoundRobin strategy. (A configuration sketch follows.)
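A configuration sketch (plain Kafka consumer; RangeAssignor, RoundRobinAssignor, and StickyAssignor are the built-in assignor classes):

props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
        StickyAssignor.class.getName()); // alternatives: RangeAssignor, RoundRobinAssignor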

III. Usage

1. rabbitMQ

1.1 Basic message queue
1.1.1 Using the official API
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class Recv {

  private final static String QUEUE_NAME = "hello";

  public static void main(String[] argv) throws Exception {
    ConnectionFactory factory = new ConnectionFactory();
    factory.setHost("localhost");
    // Open a connection (the Connections section in the management UI)
    Connection connection = factory.newConnection();
    // Create a channel (the Channels section in the management UI)
    Channel channel = connection.createChannel();
    // Declare the queue (the Queues section in the management UI)
    channel.queueDeclare(QUEUE_NAME, false, false, false, null);
    System.out.println(" [*] Waiting for messages. To exit press CTRL+C");

    // Receive messages with an auto-acknowledging callback
    DeliverCallback deliverCallback = (consumerTag, delivery) -> {
      String message = new String(delivery.getBody(), "UTF-8");
      System.out.println(" [x] Received '" + message + "'");
    };
    channel.basicConsume(QUEUE_NAME, true, deliverCallback, consumerTag -> { });
  }
}
1.1.2 Using Spring AMQP
  • AMQP

    A standard for passing business messages between applications. The protocol is language- and platform-agnostic, which fits the independence requirements of microservices.

  • Spring AMQP

    A set of APIs defined on top of the AMQP protocol, providing templates to send and receive messages. It has two parts: spring-amqp is the base abstraction, and spring-rabbit is the default underlying implementation.

Use maven to import related dependencies:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-amqp</artifactId>
</dependency>

Write the rabbitMQ connection information in the configuration file:

spring:
  rabbitmq:
    host: 127.0.0.1 # host
    port: 5672 # port
    virtual-host: / # virtual host
    username: admin # username
    password: 123456 # password

Send a message to the queue:

@Service
public class RabbitMQServiceImpl implements IrabbitMQService {

    @Autowired
    private RabbitTemplate rabbitTemplate;

    @Override
    public void sendMsg(String msg) {
        rabbitTemplate.convertAndSend("testQueue", msg);
    }
}
1.2 Work message queue

Purpose: improve message processing speed and avoid message accumulation. (One queue is consumed by multiple consumers; whoever has spare capacity takes the next message.)

Producer:

    @Override
    public void sengMsg2WorkQueue(String msg) {
        for (int i = 0; i < 10; i++) {
            System.out.println("Work queue, producer sends message: " + msg + i);
            rabbitTemplate.convertAndSend("workQueue", msg + i);
        }
    }

Consumer:

    /**
     * Simulate two consumers consuming from the work queue
     * @param msg
     */
    @RabbitListener(queues = "workQueue")
    public void listenWorkQueue(String msg) throws InterruptedException {
        System.out.println("Work queue, consumer 1 receives message: " + msg);
        Thread.sleep(200);
    }

    @RabbitListener(queues = "workQueue")
    public void listenWorkQueue2(String msg) throws InterruptedException {
        System.out.println("Work queue, consumer 2 receives message: " + msg);
        Thread.sleep(20);
    }

Conclusion:

Messages are distributed evenly across the consumers, which may not match production needs: the consumer with the most capacity should consume more messages.

Improvement:

Add the following configuration to fix this:

listener:
      simple:
        prefetch: 1 # each consumer takes only one message at a time
1.3 Fanout
  • Declare the FanoutExchange and the queues, and register the bindings in the Spring container

    package com.gdc.springboottest.config;

    import org.springframework.amqp.core.Binding;
    import org.springframework.amqp.core.BindingBuilder;
    import org.springframework.amqp.core.FanoutExchange;
    import org.springframework.amqp.core.Queue;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class FanoutConfig {

        // 1. Declare the exchange
        @Bean
        public FanoutExchange fanoutExchange() {
            return new FanoutExchange("fanoutExchange");
        }
        // 2. Declare queue 1
        @Bean
        public Queue queue1() {
            return new Queue("fanoutQueue1");
        }
        // 3. Bind queue 1 to the exchange
        @Bean
        public Binding binding1(FanoutExchange fanoutExchange, Queue queue1) {
            return BindingBuilder.bind(queue1).to(fanoutExchange);
        }

        // 4. Declare queue 2
        @Bean
        public Queue queue2() {
            return new Queue("fanoutQueue2");
        }
        // 5. Bind queue 2 to the exchange
        @Bean
        public Binding binding2(FanoutExchange fanoutExchange, Queue queue2) {
            return BindingBuilder.bind(queue2).to(fanoutExchange);
        }
    }
    
    
  • Publish messages (producer)

        @Override
        public void sengMsg2FanoutExchange(String msg) {
            rabbitTemplate.convertAndSend("fanoutExchange", "", msg);
        }

  • Subscribe to messages (consumer)

        /**
         * Listen on the queues bound to fanoutExchange
         * @param msg
         */
        @RabbitListener(queues = "fanoutQueue1")
        public void listenFanoutQueue1(String msg) {
            System.out.println("fanoutQueue1, consumer receives message: " + msg);
        }
        @RabbitListener(queues = "fanoutQueue2")
        public void listenFanoutQueue2(String msg) {
            System.out.println("fanoutQueue2, consumer receives message: " + msg);
        }
    
1.4 Direct
  • Publish messages (producer)

        @Override
        public void sengMsg2DirectExchange(String msg, String routingKey) {
            rabbitTemplate.convertAndSend("directExchange", routingKey, msg);
        }

  • Subscribe to messages (consumer)

        /**
         * Listen on directExchange
         * @param msg
         */
        @RabbitListener(bindings = @QueueBinding(
                value = @Queue(name = "directQueue1"),
                exchange = @Exchange(name = "directExchange", type = "direct"),
                key = {"red", "orange"}
        ))
        public void listenDirectQueue1(String msg) {
            System.out.println("directQueue1, consumer receives message: " + msg);
        }

        @RabbitListener(bindings = @QueueBinding(
                value = @Queue(name = "directQueue2"),
                exchange = @Exchange(name = "directExchange", type = "direct"),
                key = {"red", "yellow"}
        ))
        public void listenDirectQueue2(String msg) {
            System.out.println("directQueue2, consumer receives message: " + msg);
        }
    
1.5 Topic

TopicExchange is similar to DirectExchange; the difference is that the routingKey must be a list of words separated by dots (.).

The binding key used when binding a Queue to the Exchange can use wildcards:

#: matches 0 or more words

*: matches exactly one word

  • Publish messages (producer)

        @Override
        public void sengMsg2TopicExchange(String msg, String routingKey) {
            rabbitTemplate.convertAndSend("topicExchange", routingKey, msg);
        }

  • Subscribe to messages (consumer)

        /**
         * Listen on topicExchange
         * @param msg
         */
        @RabbitListener(bindings = @QueueBinding(
                value = @Queue(name = "topicQueue1"),
                exchange = @Exchange(name = "topicExchange", type = ExchangeTypes.TOPIC),
                key = "shanghai.#"
        ))
        public void listenTopictQueue1(String msg) {
            System.out.println("topicQueue1, consumer receives message: " + msg);
        }

        @RabbitListener(bindings = @QueueBinding(
                value = @Queue(name = "topicQueue2"),
                exchange = @Exchange(name = "topicExchange", type = ExchangeTypes.TOPIC),
                key = "#.songjiang"
        ))
        public void listenTopictQueue2(String msg) {
            System.out.println("topicQueue2, consumer receives message: " + msg);
        }
    
1.6 Solving common problems
1.6.1 Message reliability
  • 1. Spring AMQP producer confirmation

    • Add configuration in application.yml
    spring:
      rabbitmq:
        publisher-confirm-type: correlated
        publisher-returns: true
        template:
          mandatory: true

    Configuration notes:

    publisher-confirm-type: enables publisher-confirm and supports two types:

    1) simple: waits synchronously for the confirm result until timeout

    2) correlated: asynchronous callback; define a ConfirmCallback, and MQ invokes it when the result comes back

    publisher-returns: enables the publisher-return feature, also callback-based, but using a ReturnCallback

    template.mandatory: defines the strategy when message routing fails: true calls the ReturnCallback, false silently discards the message.

    • Each RabbitTemplate can be configured with only one ReturnCallback, so it is set up at application startup:
    @Configuration
    public class CommonConfig implements ApplicationContextAware {
        @Override
        public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
            // Fetch the RabbitTemplate from the container
            RabbitTemplate rabbitTemplate = applicationContext.getBean(RabbitTemplate.class);
            // Set the ReturnCallback
            rabbitTemplate.setReturnCallback((message, replyCode, replyText, exchange, routingKey) -> {
                System.out.println(message.toString());
                System.out.println(replyCode);
                System.out.println(replyText);
                System.out.println(exchange);
                System.out.println(routingKey);
            });
        }
    }
    
    • Producer code
    // unique message id
            CorrelationData correlationData = new CorrelationData(UUID.randomUUID().toString());
            // add a callback for the confirm result
            correlationData.getFuture().addCallback(new SuccessCallback<CorrelationData.Confirm>() {
                @Override
                public void onSuccess(CorrelationData.Confirm confirm) {
                   if (confirm.isAck()) {
                       System.out.println("Producer sent the message successfully");
                   } else {
                       System.out.println("nack");
                       System.out.println("Message sending failed");
                       System.out.println("Reason: " + confirm.getReason());
                   }
                }
            }, new FailureCallback() {
                @Override
                public void onFailure(Throwable throwable) {
                    System.out.println("Message sending exception: " + throwable.getMessage());
                }
            });
            rabbitTemplate.convertAndSend(DIRECT_EXCHANGE, "red", "hello red", correlationData);
    
    • Consumer
        @RabbitListener(bindings = @QueueBinding(
                value = @Queue(name = "directQueue1"),
                exchange = @Exchange(name = "directExchange", type = ExchangeTypes.DIRECT),
                key = {"red", "orange"}
        ))
        public void directMsg1(String msg) {
            System.out.println("Consumer receives message: " + msg);
        }
        @RabbitListener(bindings = @QueueBinding(
                value = @Queue(name = "directQueue2"),
                exchange = @Exchange(name = "directExchange", type = ExchangeTypes.DIRECT),
                key = {"red", "yellow"}
        ))
        public void directMsg2(String msg) {
            System.out.println("Consumer receives message: " + msg);
        }
    
  • Message persistence

    • In Spring AMQP, exchanges, queues, and messages are persistent by default
  • Consumer message confirmation

    • Add one of (auto, none, manual) to the configuration file
    listener:
          simple:
            acknowledge-mode: auto
    
  • Consumption failure retry mechanism

    • Configuration file
    listener:
          simple:
            prefetch: 1
            acknowledge-mode: auto
            retry:
              enabled: true # enable consumer-side retry on failure
              initial-interval: 1000 # initial wait before the first retry, in ms
              multiplier: 1 # multiplier for the wait: next interval = multiplier * last interval
              max-attempts: 3 # maximum number of attempts
              stateless: true # true = stateless, false = stateful; use false if the business involves transactions
    
    • After retry mode is enabled and the retries are exhausted, if the message still fails, a MessageRecoverer implementation handles it. There are three implementations:

      • RejectAndDontRequeueRecoverer: after retries are exhausted, reject and discard the message. This is the default

      • ImmediateRequeueMessageRecoverer: after retries are exhausted, return nack and requeue the message

      • RepublishMessageRecoverer: after retries are exhausted, deliver the failed message to a designated exchange

        • The code is implemented as follows:
        @Component
        public class RabbitConfig {

            /**
             * Define the error-message exchange
             * @return
             */
            @Bean
            public DirectExchange errMsgExchange() {
                return new DirectExchange("errorExchange");
            }

            /**
             * Define the error-message queue
             * @return
             */
            @Bean
            public Queue errQueue() {
                return new Queue("errQueue");
            }

            /**
             * Bind the queue to the exchange
             * @return
             */
            @Bean
            public Binding errorBind() {
                return BindingBuilder.bind(errQueue()).to(errMsgExchange()).with("error");
            }

            /**
             * Define the RepublishMessageRecoverer
             */
            @Bean
            public MessageRecoverer republishMessageRecoverer(RabbitTemplate rabbitTemplate) {
                return new RepublishMessageRecoverer(rabbitTemplate, "errorExchange", "error");
            }
        }
        

2. Kafka

Import kafka dependencies

<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>

Configuration file

spring:
  kafka:
    bootstrap-servers: 127.0.0.1:9092 # kafka cluster addresses
    producer: # producer configuration
      retries: 3 # with a value greater than 0, the client resends records whose send failed
      batch-size: 16384 # 16K
      buffer-memory: 33554432 # 32M
      acks: 1
      # serializers for the message key and value
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
    consumer:
      group-id: testGroup # consumer group
      enable-auto-commit: true # auto-commit offsets
      auto-offset-reset: earliest # start from the committed offset if one exists; otherwise consume from the beginning
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer

Producer:

kafkaTemplate.send("testTopic",  "key", msg);
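A sketch of checking the send result asynchronously (assuming Spring Kafka 2.x, where send() returns a ListenableFuture; in 3.x it returns a CompletableFuture instead):

kafkaTemplate.send("testTopic", "key", msg).addCallback(
        result -> System.out.println("sent to partition "
                + result.getRecordMetadata().partition()
                + " at offset " + result.getRecordMetadata().offset()),
        ex -> System.err.println("send failed: " + ex.getMessage()));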

Consumer:

package com.gdc.springboottest.config;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class KafkaListeners {

    // Kafka listener: topic "testTopic", consumer group "testGroup"
    @KafkaListener(topics = "testTopic", groupId = "testGroup")
    public void listenKafkaMsg(ConsumerRecord<String, String> record) {
        String value = record.value();
        System.out.println(value);
        System.out.println(record);
    }
}


Origin blog.csdn.net/succeedcow/article/details/125261663