Microservice architecture interview notes: Spring Cloud, message queues, and task scheduling (xxl-job)

Good article reference:  SpringCloud and SpringBoot Microservice Architecture | Detailed Interview Questions and Answers_Wbw Belief's Blog-CSDN Blog_Microservice Architecture Interview Questions

1. What is a microservice?

A traditional monolithic application is split into multiple services along business lines. Each service is independent of the others and can be deployed separately, and services call each other via RPC (e.g. Dubbo) or HTTP.

2. Advantages and disadvantages:

Advantages:

① Frequently used services can be scaled horizontally (by adding machines).

② Development is more convenient and code version conflicts are avoided.

③ Business boundaries are clear and services are decoupled.

Disadvantages:

① Operation and maintenance costs rise: a monolithic application needs only one machine maintained, whereas now many machines must be maintained.

Component: the Nacos registry

The basic flow of service registration: the provider, through its embedded Nacos client, sends an HTTP request that registers it on the Nacos server. (The server puts the received request into a BlockingQueue, and a single thread asynchronously consumes that queue and puts the instances into a Set, which completes the registration.) The consumer pulls the service list from the Nacos server and can then call the provider. If the same service has multiple providers (differing only in port), they are all registered on the Nacos server as well; after the client pulls the service list, Ribbon polls (round-robin) over them to choose which provider to call.

How does Nacos support highly concurrent reads and writes of the "service list" by consumers and providers?

Answer: a copy-on-write mechanism. A provider copies the "service list", updates the copy, and then swaps it in as the registry's "service list". Clients always read the registry's current "service list".

In plain terms: when we add an element to the container, we do not add it to the current container directly. We first copy the current container to create a new one, add the element to the new container, and only then point the original reference at the new container. The benefit is that the copy-on-write container can be read concurrently without locking, because the container being read never gains new elements.
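The JDK ships this idea as java.util.concurrent.CopyOnWriteArrayList; a minimal sketch of a service list built on it (illustrative, not Nacos's actual code):

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class ServiceListDemo {
    // Reads never lock: iteration works over an immutable snapshot of the backing array.
    private final List<String> serviceList = new CopyOnWriteArrayList<>();

    public void register(String instance) {
        // add() internally copies the backing array, appends to the copy,
        // then swaps the reference, so concurrent readers are unaffected.
        serviceList.add(instance);
    }

    public List<String> snapshot() {
        return serviceList; // safe to iterate while writers keep updating
    }
}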

How does the Nacos client get the latest data from the Nacos server in (near) real time?

  • Nacos does not push the server's latest configuration to the client. Instead, the client maintains a long-polling task that periodically pulls changed configuration, and then pushes the latest data to the holders of each Listener.

So what are long polling and short polling?

Short polling: the client keeps sending requests to the server, and the server responds immediately each time.

Long polling: the client sends a request to the server, the server parks (hangs) the request, and only responds once it detects a data change.
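A long-polling endpoint can be sketched with Spring MVC's DeferredResult; this illustrates the idea rather than Nacos's actual implementation (the path and 30s timeout are made up):

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.context.request.async.DeferredResult;

@RestController
public class ConfigPollController {

    private final Queue<DeferredResult<String>> waiting = new ConcurrentLinkedQueue<>();

    // The client calls this; the request is "parked" for up to 30s,
    // after which the timeout result "no-change" is returned.
    @GetMapping("/config/listen")
    public DeferredResult<String> listen() {
        DeferredResult<String> result = new DeferredResult<>(30_000L, "no-change");
        waiting.add(result);
        result.onCompletion(() -> waiting.remove(result));
        return result;
    }

    // Called when configuration actually changes: respond to every parked request.
    public void onConfigChanged(String newConfig) {
        DeferredResult<String> r;
        while ((r = waiting.poll()) != null) {
            r.setResult(newConfig);
        }
    }
}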

Further reading: the HTTP Connection header; and "Short polling, long polling, long connection and short connection in HTTP protocol" - Zhang Longhao - 博客园

If multiple service providers register at the same time, can the "service list" be overwritten concurrently?

Answer: No. Only one thread in the registry consumes the BlockingQueue, so registrations are applied one at a time and concurrent overwrites of the service list cannot happen.
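A minimal sketch of that single-writer pattern (illustrative class and method names, not Nacos source):

import java.util.Set;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

public class RegistryDemo {
    private final BlockingQueue<String> pending = new LinkedBlockingQueue<>();
    private final Set<String> serviceList = ConcurrentHashMap.newKeySet();

    public RegistryDemo() {
        // One dedicated consumer thread applies registrations serially,
        // so two providers can never overwrite each other's update.
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    serviceList.add(pending.take());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    // Called by the HTTP layer when a provider registers; returns immediately.
    public void register(String instance) {
        pending.offer(instance);
    }
}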

How does the Nacos client send heartbeats, and what does the server do when it receives them?

The client maintains a scheduled task that sends a heartbeat request to the server every 5s by default. On the server side, if (time this heartbeat arrives − time the previous heartbeat arrived) > the timeout (default 15s), the instance is marked unhealthy; if the gap exceeds the removal threshold (default 30s), the instance is removed from the service list.
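A rough sketch of the client-side beat loop using a ScheduledExecutorService (the printed endpoint is a placeholder for the real HTTP call):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class HeartbeatDemo {
    private static final long INTERVAL_SECONDS = 5; // the default beat interval

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Client side: re-send the beat every 5s; sendBeat() stands in for the HTTP call to the registry.
        scheduler.scheduleWithFixedDelay(HeartbeatDemo::sendBeat,
                0, INTERVAL_SECONDS, TimeUnit.SECONDS);
    }

    private static void sendBeat() {
        System.out.println("beat sent to registry at " + System.currentTimeMillis());
    }
}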

Service invocation

Spring Cloud services communicate over HTTP, and Spring Cloud offers two ways to invoke a service:

one is Ribbon + RestTemplate, the other is Feign.

Ribbon is a client-side load balancer; Feign integrates Ribbon by default.

Feign's principle

An implementation class is generated from the interface via a dynamic proxy, and method calls are delegated to that dynamic proxy implementation:

1. A method on the interface is called.

2. The dynamic proxy (Target) takes over the call.

3. Contract builds the MethodHandler list from the interface's annotations.

4. The MethodHandler for this request is executed.

5. The Encoder wraps the Request, the configured decorators run, and the call is logged.

6. The Client sends the request; the Response is decoded by the Decoder, the Response-side MethodHandler logic runs, and the final result is returned through the proxy class.

By default, Feign implements the feign.Client interface with the JDK's java.net.HttpURLConnection, creating a new connection for every request, which is why Feign's performance is poor out of the box. You can extend this interface to swap in a connection-pooled, high-performance HTTP client such as Apache HttpClient or OkHttp3.
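For instance, with the feign-okhttp module on the classpath, OkHttp can be plugged in through the Feign builder. A sketch using plain Feign (in Spring Cloud you would typically just add the dependency and set feign.okhttp.enabled=true; the interface and URL here are made up):

import feign.Feign;
import feign.Param;
import feign.RequestLine;
import feign.okhttp.OkHttpClient;

public class FeignOkHttpDemo {

    // Illustrative API interface; the path and host are assumptions for the example.
    interface UserApi {
        @RequestLine("GET /users/{id}")
        String findUser(@Param("id") long id);
    }

    public static void main(String[] args) {
        UserApi api = Feign.builder()
                .client(new OkHttpClient()) // pooled connections instead of a new HttpURLConnection per request
                .target(UserApi.class, "http://localhost:8080");
        System.out.println(api.findUser(1L));
    }
}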



 

Principles of service degradation, circuit breaking, and rate limiting

These concepts are introduced for one reason: service avalanche.

Components used: Sentinel and Hystrix, for circuit breaking and rate limiting.

Service avalanche: one service's failure cascades until the whole call chain fails.

Solution: service degradation. (Circuit breaking is one kind of degradation.)

Service degradation: service A calls service B; for some reason B does not respond, or responds slowly. To keep itself available, A stops calling the target service and returns immediately, quickly releasing resources; it resumes calling once the target service recovers. The point is self-protection: avoid being dragged down with B.

Degradation can be subdivided into:

        ① Circuit breaking: when the number of failed calls reaches a threshold, the breaker trips.

        ② Switch-based degradation.

        ③ Rate-limit-based degradation.

The difference between degradation and circuit breaking:

        Degradation is decided from the overall load; a circuit breaker is tripped by a specific service.

        Degradation usually starts from the periphery of the system; a breaker can trip on any service, at any time.

        Degradation keeps the important business running at the cost of some unimportant business.

Under high concurrency, the ways to protect the system are: caching, degradation, and rate limiting.

Service rate limiting. Solutions:

① Sliding-window counter: if more than a threshold of requests (say 10) arrive per unit time, further requests are rejected outright.

② Leaky bucket algorithm: water drips out of the bucket at a fixed rate; if the funnel overflows, requests are rejected or the service is degraded.

                        Disadvantage: a large instantaneous burst overflows the bucket (many requests are thrown away).

③ Token bucket algorithm: tokens are generated at a constant rate of one per 1/QPS and put into the bucket; when the bucket is full, new tokens are discarded. Every incoming request consumes a token: a request that gets a token is processed, a request that doesn't is rejected.

Benefit: the processing rate rises and falls with the configured QPS (requests per second).

This can be implemented with Guava's RateLimiter.
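A minimal sketch with Guava's RateLimiter (its create() rate is permits per second, i.e. the token-generation rate; tryAcquire() fails fast instead of blocking):

import com.google.common.util.concurrent.RateLimiter;

public class RateLimiterDemo {
    // Token bucket refilled at 10 permits per second (QPS = 10).
    private static final RateLimiter LIMITER = RateLimiter.create(10.0);

    public static String handleRequest() {
        // Non-blocking: reject immediately when no token is available.
        if (LIMITER.tryAcquire()) {
            return "processed";
        }
        return "rejected: rate limit exceeded";
    }

    public static void main(String[] args) {
        for (int i = 0; i < 20; i++) {
            System.out.println(i + " -> " + handleRequest());
        }
    }
}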

Message queues

My understanding: the essence is buffering.

Advantages:

Decoupling: messages are written to the queue, and any system that needs them subscribes to the queue, so system A needs no modification.

Asynchrony: running non-essential business logic synchronously is too time-consuming.

Peak shaving: system A pulls messages from the queue at whatever rate its database can handle; in production, a brief backlog at peak time is acceptable.

Main points of message queue interview questions - Mr.peter - 博客园

What are the disadvantages of using message queues?

System availability decreases and system complexity increases.

How to choose a message queue?

For small and medium software companies, RabbitMQ is recommended, because its management UI is very convenient to use.

Large software companies choose between RocketMQ and Kafka based on the specific use case.

Kafka is mostly used for log collection.

Must-read article : advanced-java/why-mq.md at main doocs/advanced-java (github.com) 

How to ensure the message queue is highly available

RabbitMQ has three modes: standalone, cluster, and mirrored.

Standalone mode is for getting started.

In cluster mode, a queue lives on only one node, so availability is not guaranteed. (Think of the cluster as one big computer.)

Only mirrored mode is highly available. (Every node keeps a full copy of the data; disadvantages: heavy network usage, and it cannot scale horizontally.)

Idea: the instances in the cluster synchronize information with each other.

1. RabbitMQ starts one instance per server. When you create a queue, the queue (metadata + actual data) lands on only one instance, but every instance in the cluster synchronizes the queue's metadata (metadata: a description of the real data, such as where it actually lives).
2. If a consumer happens to connect to another instance, that instance uses the synchronized metadata to find the instance holding the actual data and pulls the data from it for consumption.

The drawback is obvious: this is not truly distributed, just an ordinary cluster. When consuming, you either connect to a random instance and pull the data across the network, or stay pinned to the instance that owns the queue. The former adds cross-instance pull overhead; the latter makes that single instance the performance bottleneck.

Mirrored mode:

Both the metadata and the messages themselves exist on every instance: each time a message is written, it is synchronized to the queue on every node.

The advantage is that any node can go down and the remaining nodes keep working normally.

Disadvantage: high overhead.

Article reference: https://segmentfault.com/a/1190000023008259

How to avoid consuming a message twice? (How to make message consumption idempotent?)

Normally, after a consumer finishes consuming a message it sends an acknowledgment to the message queue; the queue then knows the message has been consumed and deletes it.

What causes duplicate consumption? Because of network failures and the like, the acknowledgment never reaches the queue, so the queue does not know the message was already consumed and delivers it again to another consumer.

How to solve it? Answer in terms of the business scenario; there are three cases:

(1) If the message drives a database insert, give the message a unique primary key. Then even duplicate consumption only causes a primary-key conflict, and no dirty data lands in the database.

(2) If the message drives a Redis SET, nothing needs solving: the result is the same no matter how many times you SET, so the operation is already idempotent.

(3) If neither applies, use the big hammer: keep a global consumption record. Using Redis as an example, assign each message a global id; once a message has been consumed, write <id, message> into Redis as a key-value pair. Before starting to consume, a consumer checks whether Redis already holds a consumption record for the message.
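A minimal sketch of case (3) with Spring Data Redis (assuming a StringRedisTemplate bean; the key prefix and TTL are illustrative):

import java.time.Duration;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Component;

@Component
public class IdempotentConsumer {

    private final StringRedisTemplate redis;

    public IdempotentConsumer(StringRedisTemplate redis) {
        this.redis = redis;
    }

    public void consume(String messageId, String payload) {
        // SETNX semantics: only the first consumer of this id wins; the TTL bounds memory use.
        Boolean first = redis.opsForValue()
                .setIfAbsent("mq:consumed:" + messageId, "1", Duration.ofDays(1));
        if (Boolean.FALSE.equals(first)) {
            return; // duplicate delivery, skip
        }
        // ... actual business logic here ...
    }
}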

How to ensure reliable delivery?

Reliable delivery has to be analyzed from three angles, whatever the MQ:

  • The producer loses data. Solution: a transaction mechanism: open a transaction and roll back on failure.
  • The message queue loses data. Solution: enable persistence so a message is written to disk first and delivered afterwards.
  • The consumer loses data. Solution: switch from automatic acknowledgment to manual acknowledgment.

How to keep messages in order

Use some algorithm to route all messages that must stay ordered into the same message queue (a partition in Kafka, a queue in RabbitMQ), then consume that queue with exactly one consumer.

Disadvantage: reduced throughput.
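A minimal sketch of such a routing rule (the queue count and key are illustrative):

public class OrderedRouting {
    // All messages with the same business key (e.g. an order id) map to the same
    // queue/partition index, so one consumer sees them in send order.
    static int pickQueue(String orderId, int queueCount) {
        return Math.abs(orderId.hashCode() % queueCount);
    }

    public static void main(String[] args) {
        int queueCount = 4; // illustrative number of partitions/queues
        System.out.println(pickQueue("order-1001", queueCount)); // always the same index for this order
    }
}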

RabbitMQ practice

RabbitMQ has 4 exchange types, each with its own matching rules:

1. Direct exchange: as the name implies, you send a message to a queue by specifying its routingKey directly (the message still passes through an exchange: RabbitMQ has a default Direct-type exchange called "AMQP default").
   Typical scenario: 1 producer and 1 consumer.

2. Fanout (broadcast) exchange: a message sent to the exchange is forwarded to every queue bound to it. Fanout exchanges forward messages the fastest.

3. Topic exchange: works like a pattern match; a queue whose binding key (on the named exchange) matches the message's routing key receives the producer's message.
   Scenario: many queues are bound to one exchange for easier management, but only some of them should receive a given message.

4. Headers exchange: delivers messages based on the message headers rather than the routing key. When binding the queue you specify the Arguments parameter; a message is delivered only when its headers match the Arguments given at binding time.

Article reference: RabbitMQ four exchange types - 掘金 (juejin.cn)

RabbitMQ has 6 working modes.

1. Installation:

 Official address: Installing on Windows — RabbitMQ

2. Code

Dependencies:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-amqp</artifactId>
</dependency>

application.properties configuration:

spring.rabbitmq.addresses=127.0.0.1
spring.rabbitmq.port=5678
spring.rabbitmq.username=guest
spring.rabbitmq.password=guest
#Acknowledge messages manually. If unset, messages are acknowledged automatically by default
spring.rabbitmq.listener.simple.acknowledge-mode=manual
#Virtual host setting. A virtual host is like a namespace: virtual hosts are isolated from each other.
#Use this feature when queues are shared by several departments, to make management and labelling easier
#spring.rabbitmq.virtual-host=my

Code:

@Slf4j
@Configuration
public class RabbitMQConfig {

    private static final String topicExchangeName = "topic-exchange";
    private static final String fanoutExchange = "fanout-exchange";
    private static final String headersExchange = "headers-exchange";

    private static final String queueName = "cord";

    // Declare the queue
    @Bean
    public Queue queue() {
        //Queue(String name, boolean durable, boolean exclusive, boolean autoDelete)
        return new Queue(queueName, false, false, false);
    }

    // Declare the Topic exchange
    @Bean
    TopicExchange topicExchange() {
        return new TopicExchange(topicExchangeName);
    }

    // Bind the queue to the Topic exchange and specify the routing key pattern
    @Bean
    Binding topicBinding(Queue queue, TopicExchange topicExchange) {
        /*
        Wildcard symbols and their rules:
        # : matches one or more words, or none at all; spans multiple levels
        * : matches exactly one word, at exactly one level
        For example: org.cord.# matches both org.cord.test and org.cord.test.test1
        * */
        return BindingBuilder.bind(queue).to(topicExchange).with("org.cord.#");
    }

    // Declare the Fanout exchange
    @Bean
    FanoutExchange fanoutExchange() {
        return new FanoutExchange(fanoutExchange);
    }

    // Bind the queue to the Fanout exchange
    @Bean
    Binding fanoutBinding(Queue queue, FanoutExchange fanoutExchange) {
        return BindingBuilder.bind(queue).to(fanoutExchange);
    }

    // Declare the Headers exchange
    @Bean
    HeadersExchange headersExchange() {
        return new HeadersExchange(headersExchange);
    }

    // Bind the queue to the Headers exchange
    @Bean
    Binding headersBinding(Queue queue, HeadersExchange headersExchange) {
        Map<String, Object> map = new HashMap<>();
        map.put("First","A");
        map.put("Fourth","D");
        // whereAny means a partial match is enough; whereAll requires every header to match
//        return BindingBuilder.bind(queue).to(headersExchange).whereAll(map).match();
        return BindingBuilder.bind(queue).to(headersExchange).whereAny(map).match();
    }
}

consumer:

@Component
@RabbitListener(queues = "cord")
public class Consumer {

    @RabbitHandler
    public void processMessage(Channel channel,  String receiveMsg, Message message) throws IOException {
        long deliveryTag = message.getMessageProperties().getDeliveryTag();
        System.out.format("Receiving Message: -----[%s]----- \n.", receiveMsg);

        // To acknowledge messages manually you must disable auto-ack:
        // spring.rabbitmq.listener.simple.acknowledge-mode=manual (auto is "auto", manual is "manual").
        // Otherwise: Shutdown Signal: channel error; protocol method: #method<channel.close>(reply-code=406, reply-text=PRECONDITION_FAILED - unknown delivery tag
        channel.basicAck(deliveryTag, false);
    }
}

Producer:

import org.springframework.amqp.core.AmqpAdmin;
import org.springframework.amqp.core.AmqpTemplate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

import java.util.Map;

@Component
public class Producer {

    @Autowired
    private AmqpTemplate template;

    @Autowired
    private AmqpAdmin admin;

    /**
     * @param routingKey the routing key
     * @param msg the message body
     */
    public void sendDirectMsg(String routingKey, String msg) {
        template.convertAndSend(routingKey, msg);
    }

    /** Work mode
     * @param routingKey the routing key
     * @param msg the message body
     * @param exchange the exchange
     */
    public void sendExchangeMsg(String exchange, String routingKey, String msg) {
        template.convertAndSend(exchange, routingKey, msg);
    }

    /**
     * @param map the message headers
     * @param exchange the exchange
     * @param msg the message body
     */
    public void sendHeadersMsg(String exchange, String msg, Map<String, Object> map) {
        template.convertAndSend(exchange, null, msg, message -> {
            message.getMessageProperties().getHeaders().putAll(map);
            return message;
        });
    }
}

Using the producer:

@SpringBootTest
public class RabbitmqTest {

    @Autowired
    private Producer producer;

//    @Autowired
//    private  RabbitTemplate myRabbitTemplate;

    /*
        Direct: send data straight to a queue by routingKey.
    *  Simple mode, no exchange needs to be specified: RabbitMQ delivers the message to the named
    *  queue through the default "AMQP default" exchange, which is a Direct-type exchange.
    *  Typical scenario: one producer writes to one queue, and one consumer consumes it.
    * */
    @Test
    public void sendDirectMsg() {
        producer.sendDirectMsg("cord", "hello word");
    }


    /*
    * Topic (pattern-matched routing key)
    * A topic exchange supports more complex routing rules: when sending, you specify a more
    * elaborate routing key, similar to pattern matching, and the routing key can vary.
    * As long as the message's routing key matches the binding key of an exchange-queue binding,
    * the message is delivered correctly to that queue.
    * */
    @Test
    public void sendtopicMsg() {
        producer.sendExchangeMsg("topic-exchange","org.cord.test", "hello world topic");
    }

    //Fanout
    @Test
    public void sendFanoutMsg() {
        producer.sendExchangeMsg("fanout-exchange", "abcdefg", String.valueOf(System.currentTimeMillis()));
    }

    //Headers
    @Test
    public void sendHeadersMsg() {
        Map<String, Object> map = new HashMap<>();
        map.put("First","A");
        producer.sendHeadersMsg("headers-exchange", "hello word", map);
    }
}

Extension: the code above uses RabbitMQ's default virtual host (or, if the project only ever uses one specific virtual host, you can simply set spring.rabbitmq.virtual-host=my). When several virtual hosts (namespaces) are used in real development, custom configuration is required.

 Configuration file:

my.spring.rabbitmq.host=192.168.11.111
my.spring.rabbitmq.port=5678
my.spring.rabbitmq.username=guest
my.spring.rabbitmq.password=guest
my.spring.rabbitmq.virtual-host=jarvis

The configuration class is:

@Slf4j
@Configuration
public class RabbitMQConfig {


    @Bean("myConnectionFactory")
    @Primary
    public CachingConnectionFactory defaultConnectionFactory(@Value("${my.spring.rabbitmq.host}") String host,
                                                             @Value("${my.spring.rabbitmq.port}") int port,
                                                             @Value("${my.spring.rabbitmq.username}") String username,
                                                             @Value("${my.spring.rabbitmq.password}") String password,
                                                             @Value("${my.spring.rabbitmq.virtual-host}") String virtualHost) {
        // Connection factory
        CachingConnectionFactory cachingConnectionFactory = new CachingConnectionFactory();
        cachingConnectionFactory.setHost(host);
        cachingConnectionFactory.setPort(port);
        cachingConnectionFactory.setUsername(username);
        cachingConnectionFactory.setPassword(password);
        cachingConnectionFactory.setVirtualHost(virtualHost);
        cachingConnectionFactory.setPublisherConfirmType(CachingConnectionFactory.ConfirmType.CORRELATED);
        cachingConnectionFactory.setPublisherReturns(true);
        return cachingConnectionFactory;
    }

    @Bean("myContainerFactory")
    @Primary
    public SimpleRabbitListenerContainerFactory defaultContainerFactory(SimpleRabbitListenerContainerFactoryConfigurer configurer,
                                                                        @Qualifier("myConnectionFactory") ConnectionFactory connectionFactory) {
        // Listener container factory
        SimpleRabbitListenerContainerFactory listenerContainerFactory = new SimpleRabbitListenerContainerFactory();
        listenerContainerFactory.setAcknowledgeMode(AcknowledgeMode.MANUAL);

        // Wire the connection factory into the container factory, so listeners consume from this connection's queues
        configurer.configure(listenerContainerFactory, connectionFactory);
        return listenerContainerFactory;
    }

    @Bean(name = "myRabbitTemplate")
    @Primary
    public RabbitTemplate defaultRabbitTemplate(@Qualifier("myConnectionFactory") ConnectionFactory connectionFactory) {
        RabbitTemplate rabbitTemplate = new RabbitTemplate();
        rabbitTemplate.setConnectionFactory(connectionFactory);
        // When mandatory is true, a message the exchange cannot route to any queue is returned to
        // the producer; when false, an unroutable message is silently dropped
        rabbitTemplate.setMandatory(true);
        // Publisher-confirm callback, invoked once the broker acknowledges the message
        rabbitTemplate.setConfirmCallback(new RabbitTemplate.ConfirmCallback() {
            /**
             *  The ConfirmCallback mechanism only confirms whether the message reached the exchange;
             *  it does not guarantee that the message was routed to the right queue.
             *  Requires: publisher-confirm-type: CORRELATED
             *  (on older Spring Boot versions set publisher-confirms: true instead).
             *
             *  The ack parameter of confirm() is the signal: true means the message arrived.
             */
            @Override
            public void confirm(CorrelationData correlationData, boolean ack, String cause) {
                log.info("ConfirmCallback result (true = sent successfully): {}, message id: {}, failure cause: {}", ack, correlationData, cause);
            }
        });


        return rabbitTemplate;
    }

    // For additional virtual hosts, duplicate the beans above; only the loaded parameters differ, e.g. @Value("${my.spring.rabbitmq.virtual-host}") String virtualHost
}

The producer then injects the named template, e.g. with @Qualifier("myRabbitTemplate"), instead of the default-injected AmqpTemplate; and the consumer specifies the matching container factory on its @RabbitListener instead of using the default one.
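A minimal sketch of that wiring, assuming the bean names from the configuration class above and the cord queue from earlier:

import java.io.IOException;

import com.rabbitmq.client.Channel;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.stereotype.Component;

@Component
class MultiVhostProducer {

    @Autowired
    @Qualifier("myRabbitTemplate") // the named template, instead of the default AmqpTemplate
    private RabbitTemplate rabbitTemplate;

    public void send(String exchange, String routingKey, String msg) {
        rabbitTemplate.convertAndSend(exchange, routingKey, msg);
    }
}

@Component
class MultiVhostConsumer {

    // containerFactory ties this listener to the connection factory of the "jarvis" virtual host
    @RabbitListener(queues = "cord", containerFactory = "myContainerFactory")
    public void onMessage(Message message, Channel channel) throws IOException {
        channel.basicAck(message.getMessageProperties().getDeliveryTag(), false);
    }
}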

 

Central idea: for however many virtual hosts you have, write that many sets of parameters in the configuration file, and build the corresponding beans in the configuration class one by one:

CachingConnectionFactory,
SimpleRabbitListenerContainerFactory,
RabbitTemplate

Learning articles:

For a super detailed introduction to RabbitMQ, just read this article! -Aliyun Developer Community (aliyun.com)

Why does a RabbitMQ queue that has a listening consumer, and messages waiting, still not get consumed?

Scenario: the queue has a large backlog; the console shows the queue has consumers, yet the consumers simply do not consume, and some messages sit in the unacked state.

Cause: the message content type is malformed. By default the receiving code can only parse content_type: application/json. If the first message is text/plain, the queue stays blocked even when subsequent messages are application/json.

Solutions: ① change the message's content_type to application/json;

                ② adjust the listener that receives the messages so it accepts the actual content type.

Use of RabbitMQ:

Concepts involved:

Exchange: the entry point that chooses where data goes; an exchange is bound to routing keys.

Routing key: a pattern matched against the key supplied by the sender; a routing key is bound to a specific message queue.

Message queue: where our data is stored.

Summary: the exchange is the coarse choice of entry point, and the routing key finds the specific bound message queue.

After calling the send function, you can view the sent message in the RabbitMQ console. Note the direction of the matching: the routing key bound to the message queue acts as the pattern that the producer's routing key is matched against; its purpose is to let the producer find the destination queue to push to.

Note: if you want to inspect data pushed to RabbitMQ, pick a queue with no consumers first, because if there is a consumer the message is consumed the moment you send it and you will not find it!

Task scheduling

Taro Spring Boot Timed Task Introduction | Taro Source Code —— Pure Source Code Analysis Blog (iocoder.cn)

| Characteristic | quartz | elastic-job-lite | xxl-job | LTS |
| --- | --- | --- | --- | --- |
| Dependencies | MySQL, JDK | JDK, ZooKeeper | MySQL, JDK | JDK, ZooKeeper, Maven |
| High availability | Multi-node deployment; nodes compete for a database lock so that only one node executes a task | Registration and discovery via ZooKeeper; servers can be added dynamically | Based on a competed database lock so only one node executes a task; supports horizontal scaling; scheduled tasks can be added, started and paused manually; has monitoring | Cluster deployment; servers can be added dynamically; scheduled tasks can be added, started and paused manually; has monitoring |
| Task sharding | × | √ | √ | √ |
| Management UI | × | √ | √ | √ |
| Difficulty | Simple | Simple | Simple | Slightly complicated |
| Advanced features | - | Elastic scaling, multiple job modes, failover, run-status collection, multi-threaded data processing, idempotence, fault tolerance, Spring namespace support | Elastic scaling, shard broadcast, failover, rolling real-time logs, GLUE (online code editing, no release needed), task progress monitoring, task dependencies, data encryption, email alerts, run reports, internationalization | Spring/Spring Boot support, business log recorder, SPI extension points, failover, node monitoring, diversified task execution results, FailStore fault tolerance, dynamic scaling |
| Release activity | No update for half a year | No update for 2 years | Recently updated | No update for 1 year |

Technology Selection Comparison of Spring Boot Timing Tasks - SegmentFault 思否

Comparison result: the XXL-job framework comes out ahead. Learning articles:

Practice:

Step 1:

package com.XXX.product.config;

import com.xxl.job.core.executor.impl.XxlJobSpringExecutor;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

/**
 * xxl-job config
 *
 * @author xuxueli 2017-04-28
 */
@Configuration
public class XxlJobConfig {
    private final   Logger logger = LoggerFactory.getLogger(XxlJobConfig.class);

    @Value("${xxl.job.admin.addresses}")
    private String adminAddresses;
    @Value("${xxl.job.accessToken:}")
    private String accessToken;
    @Value("${xxl.job.executor.appname}")
    private String appname;
    @Value("${xxl.job.executor.address:}")
    private String address;
    @Value("${xxl.job.executor.ip:}")
    private String ip;
    @Value("${xxl.job.executor.port}")
    private int port;
    @Value("${xxl.job.executor.logpath}")
    private String logPath;
    @Value("${xxl.job.executor.logretentiondays}")
    private int logRetentionDays;

    /**
     * Executor
     *
     * @return XxlJobSpringExecutor
     */
    @Bean
    public XxlJobSpringExecutor xxlJobExecutor() {
        logger.info(">>>>>>>>>>> xxl-job config init.");
        XxlJobSpringExecutor xxlJobSpringExecutor = new XxlJobSpringExecutor();
        xxlJobSpringExecutor.setAdminAddresses(adminAddresses);
        xxlJobSpringExecutor.setAppname(appname);
        xxlJobSpringExecutor.setAddress(address);
        xxlJobSpringExecutor.setIp(ip);
        xxlJobSpringExecutor.setPort(port);
        xxlJobSpringExecutor.setAccessToken(accessToken);
        xxlJobSpringExecutor.setLogPath(logPath);
        xxlJobSpringExecutor.setLogRetentionDays(logRetentionDays);
        logger.info(">>>>>>>>>>> xxl-job adminAddresses=" + adminAddresses + " appname=" + appname + " port=" + port);
        return xxlJobSpringExecutor;
    }

}

Step 2: Configuration (a few fields from the reference configuration are not set here, but that has no effect):

xxl.job.admin.addresses=https://xxl-job.dev.interfocus11.tech
xxl.job.executor.appname=XXX-product-service
xxl.job.executor.port=10009
xxl.job.executor.logpath=/var/log/java/basic-product-service/xxljob
xxl.job.executor.logretentiondays=30

Step 3: Usage: add the @XxlJob() annotation to the method to be scheduled, as sketched below.
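A minimal handler sketch (assuming xxl-job 2.3+, where XxlJobHelper replaces the older ReturnT-based API; the handler name "demoJobHandler" is illustrative and must match the JobHandler configured in the admin console):

import com.xxl.job.core.context.XxlJobHelper;
import com.xxl.job.core.handler.annotation.XxlJob;
import org.springframework.stereotype.Component;

@Component
public class ProductJobHandler {

    // "demoJobHandler" is the JobHandler name entered when configuring the job in the dispatch center.
    @XxlJob("demoJobHandler")
    public void demoJobHandler() {
        XxlJobHelper.log("xxl-job triggered, param = {}", XxlJobHelper.getJobParam());
        // ... business logic ...
        XxlJobHelper.handleSuccess();
    }
}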

Step 4: Configure the job in the dispatch (admin) center.

A comprehensive and detailed explanation of the XXL-JOB distributed scheduling framework, one article is enough! - Nuggets (juejin.cn)

The use of XXL-JOB (detailed tutorial)_fueen's blog-CSDN blog_xxl-job

The difference between Kafka and RocketMQ

1. Data reliability: Kafka supports asynchronous flushing and asynchronous replication. RocketMQ supports asynchronous flushing, synchronous flushing, synchronous replication, and asynchronous replication.

2. Strict message ordering: Kafka supports ordered messages, but once a broker goes down messages become unordered. RocketMQ supports strict ordering: in the ordered-message scenario, after a broker goes down sending fails, but order is never lost.

3. Retry on consumption failure: Kafka does not support retrying failed consumption. RocketMQ supports scheduled retries on failure, with the interval growing on each retry.

4. Scheduled (delayed) messages: Kafka does not support them; RocketMQ does.

5. Distributed transactional messages: Kafka does not support them. Alibaba Cloud ONS supports distributed scheduled messages, and the open-source RocketMQ also plans to support distributed transactional messages.

6. Message query: Kafka does not support message query. RocketMQ supports querying messages by Message Id, and also by message content (specify a Message Key, any string such as an order id, when sending).

7. Message backtracking: Kafka can in theory rewind by offset. RocketMQ supports rewinding by time, with millisecond precision, e.g. re-consuming messages starting from a specific hour, minute and second of the previous day.


Source: blog.csdn.net/u013372493/article/details/119892156