Distributed Scheduling XXL-JOB

1. Overview

1.1 What is task scheduling

For example:

  • An e-commerce platform needs to issue a batch of coupons at 10 am, 3 pm, and 8 pm every day
  • A banking system needs to send an SMS reminder three days before a credit card payment is due
  • A financial system needs to settle and summarize the previous day's financial data at 0:10 am every day

These are exactly the kinds of problems that task scheduling solves.

Task scheduling is the process of automatically executing a specific task at an agreed point in time.

1.2 Why Distributed Scheduling is Needed

Scheduling can also be implemented with the @Scheduled annotation that Spring provides.

Add @Scheduled to the method in the business class, and @EnableScheduling to the startup class:

@Scheduled(cron = "0/20 * * * * ?")  // fires every 20 seconds
public void doWork() {
    // doSomething
}
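
For completeness, a minimal sketch of the startup class that enables scheduling (the class name here is illustrative):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.scheduling.annotation.EnableScheduling;

@SpringBootApplication
@EnableScheduling   // turns on Spring's scheduled-task support for the whole application
public class ScheduleDemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(ScheduleDemoApplication.class, args);
    }
}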

Spring's annotation seems to handle task scheduling perfectly well, so why do we need a distributed solution?

There are mainly the following reasons:

  1. High availability: a single-machine scheduled task runs on only one machine; if that program or machine fails, the scheduling function becomes unavailable.
  2. Preventing duplicate execution: on a single machine scheduled tasks are unproblematic, but when the service is deployed as multiple instances, each instance carries the same scheduled task. Without coordination that ensures only one instance fires at a time, the task may run several times in parallel and produce confusing or incorrect results.
  3. Single-machine processing limits: a job that used to process 10,000 orders per minute may now need to process 100,000; a statistics run that used to take 1 hour may now be required in 10 minutes. Multi-threading on a single machine does improve throughput per unit of time, but the capacity of one machine (mainly CPU, memory, and disk) is finite, so there will always be workloads it cannot handle.

1.3 Introduction to XXL-JOB

XXL-JOB is a lightweight distributed task scheduling platform that originated at Dianping. Its core design goals are rapid development, easy learning, light weight, and easy extension.

Dianping has adopted XXL-JOB internally, where it has handled roughly one million schedule triggers with excellent performance.

At present many companies use xxl-job, including Dianping, JD.com, Uxin Used Car, 360 Finance (360), Lenovo Group (Lenovo), Yixin (NetEase), and others.

Official website: https://www.xuxueli.com/xxl-job/

System Architecture Diagram

(architecture diagram omitted)

Design philosophy

Scheduling is abstracted into a common "dispatch center" platform. The platform itself carries no business logic; the dispatch center is only responsible for initiating scheduling requests.

Tasks are abstracted into scattered JobHandlers that are managed by "executors"; an executor is responsible for receiving scheduling requests and executing the business logic of the corresponding JobHandler.

"Scheduling" and "tasks" are therefore decoupled from each other, which improves the overall stability and scalability of the system.
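
To make the decoupling concrete, here is a rough sketch (an illustrative simplification, not xxl-job's actual internals): the executor keeps a registry of named JobHandlers, and the dispatch center only needs a handler's name to trigger it.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative executor-side registry: handler names mapped to business logic.
class HandlerRegistry {

    private final Map<String, Runnable> handlers = new ConcurrentHashMap<>();

    void register(String name, Runnable handler) {
        handlers.put(name, handler);
    }

    // A trigger from the "dispatch center" carries only the handler name;
    // the executor looks the handler up and runs the business logic locally.
    void onTrigger(String name) {
        Runnable handler = handlers.get(name);
        if (handler != null) {
            handler.run();
        }
    }
}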

Composition

(component diagram omitted)

2. Quick Start

2.1 Download the source code

Source code download address:

https://github.com/xuxueli/xxl-job

https://gitee.com/xuxueli0323/xxl-job

2.2 Initialize the scheduling database

Download and unzip the project source code, then locate and execute the scheduling database initialization SQL script.

The script is located at:

/xxl-job/doc/db/tables_xxl_job.sql

2.3 Compile the source code

Unzip the source code, import it into your IDE as a Maven project, and build it with Maven. The source structure is as follows:

(screenshot omitted)

2.4 Configure and deploy the dispatch center

2.4.1 Dispatch center configuration

Modify the application.properties configuration file of the xxl-job-admin project and set the database username and password:

### web
server.port=8080
server.servlet.context-path=/xxl-job-admin

### actuator
management.server.servlet.context-path=/actuator
management.health.mail.enabled=false

### resources
spring.mvc.servlet.load-on-startup=0
spring.mvc.static-path-pattern=/static/**
spring.resources.static-locations=classpath:/static/

### freemarker
spring.freemarker.templateLoaderPath=classpath:/templates/
spring.freemarker.suffix=.ftl
spring.freemarker.charset=UTF-8
spring.freemarker.request-context-attribute=request
spring.freemarker.settings.number_format=0.##########

### mybatis
mybatis.mapper-locations=classpath:/mybatis-mapper/*Mapper.xml
#mybatis.type-aliases-package=com.xxl.job.admin.core.model

### xxl-job, datasource
spring.datasource.url=jdbc:mysql://192.168.202.200:3306/xxl_job?useUnicode=true&characterEncoding=UTF-8&autoReconnect=true&serverTimezone=Asia/Shanghai
spring.datasource.username=root
spring.datasource.password=WolfCode_2017
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver

### datasource-pool
spring.datasource.type=com.zaxxer.hikari.HikariDataSource
spring.datasource.hikari.minimum-idle=10
spring.datasource.hikari.maximum-pool-size=30
spring.datasource.hikari.auto-commit=true
spring.datasource.hikari.idle-timeout=30000
spring.datasource.hikari.pool-name=HikariCP
spring.datasource.hikari.max-lifetime=900000
spring.datasource.hikari.connection-timeout=10000
spring.datasource.hikari.connection-test-query=SELECT 1
spring.datasource.hikari.validation-timeout=1000

### xxl-job, email
spring.mail.host=smtp.qq.com
spring.mail.port=25
[email protected]
[email protected]
spring.mail.password=xxx
spring.mail.properties.mail.smtp.auth=true
spring.mail.properties.mail.smtp.starttls.enable=true
spring.mail.properties.mail.smtp.starttls.required=true
spring.mail.properties.mail.smtp.socketFactory.class=javax.net.ssl.SSLSocketFactory

### xxl-job, access token
xxl.job.accessToken=default_token

### xxl-job, i18n (default is zh_CN, and you can choose "zh_CN", "zh_TC" and "en")
xxl.job.i18n=zh_CN

## xxl-job, triggerpool max size
xxl.job.triggerpool.fast.max=200
xxl.job.triggerpool.slow.max=100

### xxl-job, log retention days
xxl.job.logretentiondays=30

2.4.2 Deploy the project

Simply run the XxlJobAdminApplication main class.

Dispatch center access address: http://localhost:8080/xxl-job-admin

The default login account is "admin/123456". After login, the running interface is as shown in the figure below.

(screenshot omitted)

So far the "dispatch center" project has been deployed successfully.

2.5 Configure and deploy the executor project

2.5.1 Add the Maven dependency

Create a Spring Boot project and add the following dependency:

<dependency>
    <groupId>com.xuxueli</groupId>
    <artifactId>xxl-job-core</artifactId>
    <version>2.3.1</version>
</dependency>

2.5.2 Executor configuration

Add the following configuration to the configuration file:

### Root address(es) of the dispatch center [optional]: if the dispatch center is deployed as a cluster, separate multiple addresses with commas. The executor uses this address for heartbeat registration and task result callbacks; leave it empty to disable automatic registration.
xxl.job.admin.addresses=http://127.0.0.1:8080/xxl-job-admin
### Access token for executor/dispatch-center communication [optional]: enabled when not empty.
xxl.job.accessToken=default_token
### Executor AppName [optional]: grouping key for executor heartbeat registration; leave it empty to disable automatic registration.
xxl.job.executor.appname=xxl-job-executor-sample
### Executor registration address [optional]: used as the registration address when set; when empty, the embedded server's "IP:PORT" is used. This is more flexible for containerized executors with dynamic IPs and mapped ports.
xxl.job.executor.address=
### Executor IP [optional]: empty by default, which means the IP is detected automatically. With multiple network interfaces you can set it manually; the IP is not bound to a host and is used only for communication, i.e. for executor registration and for the dispatch center to trigger tasks.
xxl.job.executor.ip=127.0.0.1
### Executor port [optional]: a value <= 0 means the port is assigned automatically; the default is 9999. When several executors run on the same machine, make sure each uses a different port.
xxl.job.executor.port=9999
### Disk path for executor log files [optional]: the executor needs read/write access to this path; leave it empty to use the default path.
xxl.job.executor.logpath=/data/applogs/xxl-job/jobhandler
### Retention days for executor log files [optional]: expired logs are cleaned up automatically when the value is >= 3; otherwise (e.g. -1) automatic cleanup is disabled.
xxl.job.executor.logretentiondays=30

2.5.3 Add the executor configuration class

Create an XxlJobConfig configuration class:

import com.xxl.job.core.executor.impl.XxlJobSpringExecutor;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class XxlJobConfig {

    @Value("${xxl.job.admin.addresses}")
    private String adminAddresses;
    @Value("${xxl.job.accessToken}")
    private String accessToken;
    @Value("${xxl.job.executor.appname}")
    private String appname;
    @Value("${xxl.job.executor.address}")
    private String address;
    @Value("${xxl.job.executor.ip}")
    private String ip;
    @Value("${xxl.job.executor.port}")
    private int port;
    @Value("${xxl.job.executor.logpath}")
    private String logPath;
    @Value("${xxl.job.executor.logretentiondays}")
    private int logRetentionDays;

    @Bean
    public XxlJobSpringExecutor xxlJobExecutor() {
        // Build the executor from the xxl.job.* properties; it registers itself with the dispatch center at startup
        XxlJobSpringExecutor xxlJobSpringExecutor = new XxlJobSpringExecutor();
        xxlJobSpringExecutor.setAdminAddresses(adminAddresses);
        xxlJobSpringExecutor.setAppname(appname);
        xxlJobSpringExecutor.setAddress(address);
        xxlJobSpringExecutor.setIp(ip);
        xxlJobSpringExecutor.setPort(port);
        xxlJobSpringExecutor.setAccessToken(accessToken);
        xxlJobSpringExecutor.setLogPath(logPath);
        xxlJobSpringExecutor.setLogRetentionDays(logRetentionDays);
        return xxlJobSpringExecutor;
    }
}

2.5.4 Add a task handler class

Create a task handler class, register it with the Spring container, and annotate the handler method with @XxlJob:

import com.xxl.job.core.handler.annotation.XxlJob;
import org.springframework.stereotype.Component;

import java.util.Date;

@Component
public class SimpleXxlJob {

    @XxlJob("demoJobHandler")
    public void demoJobHandler() throws Exception {
        System.out.println("Executing scheduled task, time: " + new Date());
    }
}
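
If the handler needs the task parameter configured in the dispatch center, or wants its output to appear in the dispatch center's scheduling log, it can use XxlJobHelper. A small sketch against the 2.3.x API; the handler name is illustrative:

import com.xxl.job.core.context.XxlJobHelper;
import com.xxl.job.core.handler.annotation.XxlJob;
import org.springframework.stereotype.Component;

@Component
public class ParamXxlJob {

    @XxlJob("paramJobHandler")
    public void paramJobHandler() {
        // Task parameter entered in the dispatch center's task form
        String param = XxlJobHelper.getJobParam();
        // Written to the executor's log file and viewable from the dispatch center's scheduling log
        XxlJobHelper.log("received job param: {}", param);
    }
}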

2.6 Run the HelloWorld program

2.6.1 Task Configuration & Trigger Execution

Log in to the dispatch center and add a new task under Task Management, configured as follows:

(screenshots omitted)

After adding, the interface is as follows:

(screenshot omitted)

Then start the scheduled task

(screenshot omitted)

2.6.2 View logs

You can see the task's execution results in the dispatch center's scheduling log.

(screenshot omitted)

The executor console also shows the task's execution output.

(screenshot omitted)

2.7 GLUE mode (Java)

In GLUE mode the task is maintained in the dispatch center as source code. It can be edited online through the Web IDE and is compiled and takes effect in real time, so no JobHandler needs to be registered in advance.

A task in "GLUE mode (Java)" is essentially a Java class that extends IJobHandler. It runs inside the executor project and can therefore use @Resource / @Autowired to inject the executor's other services.

Add a service class:

import org.springframework.stereotype.Service;

@Service
public class HelloService {

    public void methodA() {
        System.out.println("Executing methodA");
    }

    public void methodB() {
        System.out.println("Executing methodB");
    }
}

Add task configuration

(screenshot omitted)

Online code editing via GLUE IDE

(screenshot omitted)


Write the content as follows:

package com.xxl.job.service.handler;

import cn.wolfcode.xxljobdemo.service.HelloService;
import com.xxl.job.core.handler.IJobHandler;
import org.springframework.beans.factory.annotation.Autowired;

public class DemoGlueJobHandler extends IJobHandler {

    @Autowired
    private HelloService helloService;

    @Override
    public void execute() throws Exception {
        helloService.methodA();
    }
}

Start and execute the program

2.8 Executor Cluster

2.8.1 Setting up the cluster environment

Configure the Spring Boot project in IDEA so that multiple instances can be started in parallel.

(screenshot omitted)

To start two Spring Boot instances, each needs a different Tomcat port and executor port.

  • VM options for the instance on Tomcat port 8090:

    -Dserver.port=8090 -Dxxl.job.executor.port=9998

  • VM options for the instance on Tomcat port 8091:

    -Dserver.port=8091 -Dxxl.job.executor.port=9999
    

In Task Management, change the routing policy to ROUND (轮询, round-robin).

(screenshot omitted)

After restarting, the scheduled task is executed on the two instances in turn (round-robin):

  • The console log of port 8090 is as follows:

(screenshot omitted)

  • The console log of port 8091 is as follows:

(screenshot omitted)

2.8.2 Routing strategies

When executors are deployed as a cluster, a rich set of routing strategies is provided:

  1. FIRST (第一个): always select the first machine.

  2. LAST (最后一个): always select the last machine.

  3. ROUND (轮询): select online machines in turn (round-robin).

  4. RANDOM (随机): select an online machine at random.

  5. CONSISTENT_HASH (一致性HASH): each job is mapped to a fixed machine by consistent hashing, and jobs are spread evenly across the machines.

  6. LEAST_FREQUENTLY_USED (最不经常使用): prefer the machine that has been used least frequently.

  7. LEAST_RECENTLY_USED (最近最久未使用): prefer the machine that has gone unused for the longest time.

  8. FAILOVER (故障转移): run heartbeat checks against the machines in order and select the first one whose heartbeat succeeds as the target executor.

  9. BUSYOVER (忙碌转移): run idle checks against the machines in order and select the first one found idle as the target executor.

  10. SHARDING_BROADCAST (分片广播): broadcast the trigger so that every machine in the cluster runs the task once, with sharding parameters passed automatically; sharded jobs can be developed based on these parameters.

3. The Sharding Feature

3.1 Case Requirements

Requirement: on designated holidays, send a greeting SMS to every user of the platform.

3.1.1 Initialize data

Import the xxl_job_demo.sql script into the database.

3.1.2 Integrating Druid & MyBatis

Add dependencies:

<!-- MyBatis Spring Boot starter -->
<dependency>
    <groupId>org.mybatis.spring.boot</groupId>
    <artifactId>mybatis-spring-boot-starter</artifactId>
    <version>1.2.0</version>
</dependency>
<!-- MySQL driver -->
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
</dependency>
<!-- Lombok -->
<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <scope>provided</scope>
</dependency>
<!-- Druid connection pool -->
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>druid</artifactId>
    <version>1.1.10</version>
</dependency>

Add configuration:

spring.datasource.url=jdbc:mysql://localhost:3306/xxl_job_demo?serverTimezone=GMT%2B8&useUnicode=true&characterEncoding=UTF-8
spring.datasource.driverClassName=com.mysql.jdbc.Driver
spring.datasource.type=com.alibaba.druid.pool.DruidDataSource
spring.datasource.username=root
spring.datasource.password=WolfCode_2017

Add entity class

import lombok.Getter;
import lombok.Setter;

@Setter
@Getter
public class UserMobilePlan {
    private Long id;          // primary key
    private String username;  // username
    private String nickname;  // nickname
    private String phone;     // phone number
    private String info;      // remarks
}

Add the Mapper interface:

import java.util.List;
import org.apache.ibatis.annotations.Mapper;
import org.apache.ibatis.annotations.Select;

@Mapper
public interface UserMobilePlanMapper {

    @Select("select * from t_user_mobile_plan")
    List<UserMobilePlan> selectAll();
}

3.1.3 Implement the business logic

Task handler implementation:

@XxlJob("sendMsgHandler")
public void sendMsgHandler() throws Exception {
    List<UserMobilePlan> userMobilePlans = userMobilePlanMapper.selectAll();
    System.out.println("Task start time: " + new Date() + ", records to process: " + userMobilePlans.size());
    long startTime = System.currentTimeMillis();
    userMobilePlans.forEach(item -> {
        try {
            // simulate sending an SMS
            TimeUnit.MILLISECONDS.sleep(10);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    });
    System.out.println("Task end time: " + new Date());
    System.out.println("Task took " + (System.currentTimeMillis() - startTime) + " ms");
}

Task configuration information

(screenshot omitted)

3.2 The Concept of Sharding

For example, our case has 2000+ records. Without sharding, the task runs on a single machine and takes 20+ seconds to complete.

With sharding broadcast, one schedule trigger is broadcast to every executor in the cluster, each of which runs the task once, and the system automatically passes the sharding parameters; sharded jobs can then be developed based on those parameters.

How to obtain the sharding parameters:

// See the sample task "ShardingJobHandler" in the official sample executor for reference
int shardIndex = XxlJobHelper.getShardIndex();
int shardTotal = XxlJobHelper.getShardTotal();

With these two parameters, each instance can query and process only its own share of the data (by taking the record id modulo the shard total), which speeds up processing.

Previously the 2000+ records were processed by a single machine in 20+ seconds. After sharding, two machines split the 2000+ records, each handles roughly 1000+, and the task finishes in about 10+ seconds.
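
As a concrete illustration of the modulus idea (a standalone sketch with made-up IDs, independent of the Mapper shown in the next section):

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class ShardingSplitDemo {

    public static void main(String[] args) {
        List<Long> ids = Arrays.asList(1L, 2L, 3L, 4L, 5L, 6L);
        int shardTotal = 2; // two executor instances in the cluster
        for (int shardIndex = 0; shardIndex < shardTotal; shardIndex++) {
            final int index = shardIndex;
            // Each instance keeps only the rows where id % shardTotal == shardIndex,
            // which is what the mod(id, #{shardingTotal}) = #{shardingIndex} condition does in SQL.
            List<Long> mine = ids.stream()
                    .filter(id -> id % shardTotal == index)
                    .collect(Collectors.toList());
            System.out.println("shard " + index + " handles ids " + mine);
        }
    }
}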

3.3 Converting the Case to a Sharded Task

Add a query method to the Mapper:

@Mapper
public interface UserMobilePlanMapper {

    @Select("select * from t_user_mobile_plan where mod(id,#{shardingTotal})=#{shardingIndex}")
    List<UserMobilePlan> selectByMod(@Param("shardingIndex") Integer shardingIndex, @Param("shardingTotal") Integer shardingTotal);

    @Select("select * from t_user_mobile_plan")
    List<UserMobilePlan> selectAll();
}

Task handler method:

@XxlJob("sendMsgShardingHandler")
public void sendMsgShardingHandler() throws Exception {
    System.out.println("Task start time: " + new Date());
    int shardTotal = XxlJobHelper.getShardTotal();
    int shardIndex = XxlJobHelper.getShardIndex();
    List<UserMobilePlan> userMobilePlans = null;
    if (shardTotal == 1) {
        // no sharding: query all records
        userMobilePlans = userMobilePlanMapper.selectAll();
    } else {
        // sharded: each instance queries only the rows assigned to its shard index
        userMobilePlans = userMobilePlanMapper.selectByMod(shardIndex, shardTotal);
    }
    System.out.println("Records to process: " + userMobilePlans.size());
    long startTime = System.currentTimeMillis();
    userMobilePlans.forEach(item -> {
        try {
            // simulate sending an SMS
            TimeUnit.MILLISECONDS.sleep(10);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    });
    System.out.println("Task end time: " + new Date());
    System.out.println("Task took " + (System.currentTimeMillis() - startTime) + " ms");
}

Task settings

(screenshot omitted)


Original post: blog.csdn.net/qq_45525848/article/details/130816818