Java -- XXL-JOB Distributed Task Scheduling Platform

XXL-JOB is a distributed task scheduling platform. Its core design goals are rapid development, easy learning, lightweight, and easy extension. The source code is open and already runs on the production lines of many companies; it works out of the box.

The name "xxl" comes from the pinyin initials of Xu Xueli, the Dianping engineer who developed xxl-job.

Official website and documentation: Distributed task scheduling platform XXL-JOB

Source code repository: GitHub - xuxueli/xxl-job: A distributed task scheduling framework (Distributed task scheduling platform XXL-JOB); releases can be downloaded from the repository.

Central warehouse address

<!-- http://repo1.maven.org/maven2/com/xuxueli/xxl-job-core/ -->
<dependency>
    <groupId>com.xuxueli</groupId>
    <artifactId>xxl-job-core</artifactId>
    <version>${latest stable version}</version>
</dependency>

1. Overall Design

(1) Source directory introduction

  • /doc: documentation
  • /db: table-creation scripts for the "scheduling database"
  • /xxl-job-admin: scheduling center, project source code
  • /xxl-job-core: shared Jar dependency
  • /xxl-job-executor-samples: executor sample projects (you can develop on top of these projects, or adapt an existing project into an executor project)

(2) "Scheduling database" configuration

The XXL-JOB scheduling module is based on self-developed scheduling components and supports cluster deployment. The scheduling database table is described as follows:

  • xxl_job_lock: task scheduling lock table;
  • xxl_job_group: executor information table; maintains information about task executors;
  • xxl_job_info: scheduling extension information table; stores the extended information of XXL-JOB scheduling tasks, such as task group, task name, machine address, executor, execution parameters, alarm email, and so on;
  • xxl_job_log: scheduling log table; stores the history of XXL-JOB task scheduling, such as scheduling results, execution results, scheduling parameters, scheduling machine, executor, and so on;
  • xxl_job_log_report: scheduling log report table; stores report data about XXL-JOB task scheduling logs, used by the report page of the scheduling center;
  • xxl_job_logglue: task GLUE log table; stores the GLUE update history to support the GLUE version rollback feature;
  • xxl_job_registry: executor registry table; maintains the addresses of online executors and scheduling center machines;
  • xxl_job_user: system user table;

(3) Architecture design

1. Design thinking

Scheduling behavior is abstracted into a shared "scheduling center" platform; the platform itself carries no business logic and is only responsible for initiating scheduling requests.

Tasks are abstracted into scattered JobHandlers that are managed by "executors"; an executor is responsible for receiving scheduling requests and executing the business logic of the corresponding JobHandler.

In this way, "scheduling" and "task" are decoupled from each other, which improves the overall stability and scalability of the system.

2. System composition

  • Scheduling module (scheduling center):
    responsible for managing scheduling information and sending scheduling requests according to the scheduling configuration; it contains no business code. Decoupling the scheduling system from the tasks improves the availability and stability of the system, and the performance of the scheduling system is no longer limited by the task modules.
    It supports visual, simple, and dynamic management of scheduling information, including task creation, update, deletion, GLUE development, and task alarms, all of which take effect in real time; it also supports monitoring of scheduling results and execution logs, and supports executor failover.
  • Execution module (executor):
    responsible for receiving scheduling requests and executing task logic. The task module focuses solely on task execution, which makes development and maintenance simpler and more efficient.
    It receives execution requests, termination requests, and log requests from the "scheduling center".

3. Architecture diagram

4. Analysis of scheduling module

1. Shortcomings of Quartz

As the leader among open-source job schedulers, Quartz is the natural first choice for job scheduling. In a cluster environment, however, Quartz manages tasks through its API, and this approach has the following problems:

  • Problem 1: operating tasks by calling the API is not user-friendly;
  • Problem 2: the business QuartzJobBean has to be persisted into the underlying data tables, which is quite intrusive to the system;
  • Problem 3: the scheduling logic and the QuartzJobBean are coupled in the same project; as the number of scheduled tasks grows and the task logic becomes heavier, the performance of the scheduling system becomes severely limited by the business;
  • Problem 4: at the bottom layer, Quartz acquires DB locks "preemptively", and the node that wins the lock runs the task, which leads to very uneven node load; XXL-JOB instead distributes work "cooperatively" through executors, making full use of the cluster and keeping the load of each node balanced.

XXL-JOB addresses the above shortcomings of Quartz.

2. Quick start

2.1 Initialize the "scheduling database"

Please download the project source code and unzip it, get the "Scheduling Database Initialization SQL Script" and execute it.

The location of "Scheduling Database Initialization SQL Script" is:

  /xxl-job/doc/db/tables_xxl_job.sql

The scheduling center supports cluster deployment; in a cluster, every node must connect to the same MySQL instance.

If MySQL is deployed as master-slave, the scheduling center cluster nodes must be forced to use the master database.

2.2 Compile the source code

Unzip the source code, import the source code into the IDE according to the maven format, and compile it with maven. The source code structure is as follows:

  1. xxl-job-admin: scheduling center
  2. xxl-job-core: shared dependency
  3. xxl-job-executor-samples: executor sample projects (choose the appropriate executor version; it can be used directly, or you can refer to it and adapt an existing project into an executor)
     - xxl-job-executor-sample-springboot: Spring Boot version, which manages the executor through Spring Boot; this approach is recommended;
     - xxl-job-executor-sample-frameless: frameless version;

2.3 Configure and deploy the "scheduling center"

  1. Scheduling center project: xxl-job-admin
  2. Purpose: manages the scheduling tasks on the task scheduling platform in a unified way, triggers scheduling executions, and provides the task management console.

Step 1: Scheduling center configuration:

Scheduling center configuration file location:

  /xxl-job/xxl-job-admin/src/main/resources/application.properties

Scheduling center configuration description:

### Scheduling center JDBC connection: keep the address consistent with the scheduling database created in section 2.1
spring.datasource.url=jdbc:mysql://127.0.0.1:3306/xxl_job?useUnicode=true&characterEncoding=UTF-8&autoReconnect=true&serverTimezone=Asia/Shanghai
spring.datasource.username=root
spring.datasource.password=root_pwd
spring.datasource.driver-class-name=com.mysql.jdbc.Driver

### Alarm email
spring.mail.host=smtp.qq.com
spring.mail.port=25
[email protected]
spring.mail.password=xxx
spring.mail.properties.mail.smtp.auth=true
spring.mail.properties.mail.smtp.starttls.enable=true
spring.mail.properties.mail.smtp.starttls.required=true
spring.mail.properties.mail.smtp.socketFactory.class=javax.net.ssl.SSLSocketFactory

### Scheduling center communication TOKEN [optional]: enabled when non-empty;
xxl.job.accessToken=

### Scheduling center i18n configuration [required]: defaults to "zh_CN" (Simplified Chinese); valid values are "zh_CN" (Simplified Chinese), "zh_TC" (Traditional Chinese) and "en" (English);
xxl.job.i18n=zh_CN

### Maximum thread counts of the scheduling thread pools [required]
xxl.job.triggerpool.fast.max=200
xxl.job.triggerpool.slow.max=100

### Retention days of the scheduling center log tables [required]: expired logs are cleaned up automatically; takes effect only when the value is >= 7; otherwise (e.g. -1) automatic cleanup is disabled;
xxl.job.logretentiondays=30

Step 2: Deploy the project:

If the above configuration has been done correctly, the project can be compiled, packaged and deployed.

Scheduling center access address: http://localhost:8080/xxl-job-admin  (the executor will use this address as the callback address)

The default login account is "admin/123456". After login, the running interface is as shown in the figure below.

So far the "dispatch center" project has been deployed successfully.

Step 3: Scheduling center cluster (optional):

The scheduling center supports cluster deployment, which improves the disaster recovery capability and availability of the scheduling system.

When deploying a scheduling center cluster, there are several requirements and suggestions:

  • The DB configuration of all nodes must be consistent;
  • The clocks of the cluster machines must be kept in sync (this can be ignored when all nodes run on a single machine);
  • Suggestion: use nginx to load-balance the scheduling center cluster and assign it a domain name. Scheduling center access, executor callback configuration, and API calls should all go through this domain name.

Alternative: building the scheduling center from the Docker image:

  • Download the image

// Docker image: https://hub.docker.com/r/xuxueli/xxl-job-admin/ (specifying a version number is recommended)
docker pull xuxueli/xxl-job-admin

  • Create and run a container

docker run -p 8080:8080 -v /tmp:/data/applogs --name xxl-job-admin -d xuxueli/xxl-job-admin:{specified version}

/**
* To customize mysql and other settings, pass them via "-e PARAMS" in the format PARAMS="--key=value --key2=value2";
* see /xxl-job/xxl-job-admin/src/main/resources/application.properties for the available configuration items.
* To customize JVM memory and other settings, pass them via "-e JAVA_OPTS" in the format JAVA_OPTS="-Xmx512m";
*/
docker run -e PARAMS="--spring.datasource.url=jdbc:mysql://127.0.0.1:3306/xxl_job?useUnicode=true&characterEncoding=UTF-8&autoReconnect=true&serverTimezone=Asia/Shanghai" -p 8080:8080 -v /tmp:/data/applogs --name xxl-job-admin -d xuxueli/xxl-job-admin:{specified version}

2.4 Configure and deploy the "executor project"

  1. "Executor" project: xxl-job-executor-sample-springboot (several executor versions are provided; the springboot version is used as the example here; it can be used directly, or you can refer to it and adapt an existing project into an executor)
  2. Purpose: receives schedules from the "scheduling center" and executes them; the executor can be deployed on its own, or it can be integrated into an existing business project.

Step 1: Maven dependency

Confirm that the "xxl-job-core" Maven dependency is included in the pom file;

Step 2: Executor configuration

Executor configuration file location:

  /xxl-job/xxl-job-executor-samples/xxl-job-executor-sample-springboot/src/main/resources/application.properties

Executor configuration description:

### Scheduling center deployment root address [optional]: if the scheduling center is deployed as a cluster with multiple addresses, separate them with commas. The executor uses this address for "executor heartbeat registration" and "task result callback"; if empty, automatic registration is disabled;
xxl.job.admin.addresses=http://127.0.0.1:8080/xxl-job-admin

### Executor communication TOKEN [optional]: enabled when non-empty;
xxl.job.accessToken=

### Executor AppName [optional]: the grouping key for executor heartbeat registration; if empty, automatic registration is disabled
xxl.job.executor.appname=xxl-job-executor-sample
### Executor registration address [optional]: when set, this value is preferred as the registration address; if empty, the embedded server's "IP:PORT" is used instead. This gives more flexibility for containerized executors with dynamic IPs and dynamically mapped ports.
xxl.job.executor.address=
### Executor IP [optional]: empty by default, meaning the IP is detected automatically; with multiple NICs a specific IP can be set manually. The IP is not bound to the host and is used only for communication, namely for "executor registration" and for the "scheduling center to request and trigger tasks";
xxl.job.executor.ip=
### Executor port [optional]: detected automatically when <= 0; the default port is 9999. When deploying multiple executors on one machine, be sure to configure different ports;
xxl.job.executor.port=9999
### Disk path for executor run logs [optional]: the executor needs read/write permission on this path; if empty, the default path is used;
xxl.job.executor.logpath=/data/applogs/xxl-job/jobhandler
### Retention days of executor log files [optional]: expired logs are cleaned up automatically; takes effect only when the value is >= 3; otherwise (e.g. -1) automatic cleanup is disabled;
xxl.job.executor.logretentiondays=30

Step 3: Executor component configuration

Executor component configuration file location:

  /xxl-job/xxl-job-executor-samples/xxl-job-executor-sample-springboot/src/main/java/com/xxl/job/executor/core/config/XxlJobConfig.java

Executor component configuration description:

@Bean
public XxlJobSpringExecutor xxlJobExecutor() {
    logger.info(">>>>>>>>>>> xxl-job config init.");
    XxlJobSpringExecutor xxlJobSpringExecutor = new XxlJobSpringExecutor();
    xxlJobSpringExecutor.setAdminAddresses(adminAddresses);
    xxlJobSpringExecutor.setAppname(appname);
    xxlJobSpringExecutor.setIp(ip);
    xxlJobSpringExecutor.setPort(port);
    xxlJobSpringExecutor.setAccessToken(accessToken);
    xxlJobSpringExecutor.setLogPath(logPath);
    xxlJobSpringExecutor.setLogRetentionDays(logRetentionDays);

    return xxlJobSpringExecutor;
}
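For orientation, here is a hedged sketch of how the surrounding configuration class can feed the bean above from the properties in Step 2. The field names mirror the setters used above; the use of Spring's @Value injection follows the sample project, but treat this as an illustration rather than the exact source:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Configuration;

@Configuration
public class XxlJobConfig {
    private Logger logger = LoggerFactory.getLogger(XxlJobConfig.class);

    @Value("${xxl.job.admin.addresses}")
    private String adminAddresses;

    @Value("${xxl.job.accessToken}")
    private String accessToken;

    @Value("${xxl.job.executor.appname}")
    private String appname;

    @Value("${xxl.job.executor.ip}")
    private String ip;

    @Value("${xxl.job.executor.port}")
    private int port;

    @Value("${xxl.job.executor.logpath}")
    private String logPath;

    @Value("${xxl.job.executor.logretentiondays}")
    private int logRetentionDays;

    // ... the xxlJobExecutor() @Bean method shown above lives here and passes these fields to XxlJobSpringExecutor.
}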

Step 4: Deploy the executor project:

If the above configuration has been done correctly, the executor project can be compiled, packaged, and deployed. Several executor sample projects are provided; just choose one of them. Their deployment methods are as follows.

  1. xxl-job-executor-sample-springboot: build the project into an executable Spring Boot JAR and start it from the command line;
  2. xxl-job-executor-sample-frameless: build the project into a plain JAR and start it from the command line;

So far, the "executor" project has been deployed.

Step 5: Executor cluster (optional):

The executor supports cluster deployment, which improves the availability of the scheduling system and its task processing capacity.

When deploying executor clusters, there are several requirements and suggestions:

  • The executor callback address (xxl.job.admin.addresses) needs to be consistent; the executor performs operations such as executor automatic registration according to this configuration.
  • The AppName (xxl.job.executor.appname) in the same executor cluster needs to be consistent; the scheduling center dynamically discovers the online executor lists of different clusters according to this configuration.

2.5 Develop the first task "Hello World"

This example creates a new "GLUE mode (Java)" task. For more detailed task configuration, please refer to chapter "3. Task details".
(The execution code of a "GLUE mode (Java)" task is hosted in the scheduling center and maintained online, which is simpler and lighter than a "Bean mode" task that has to be developed and deployed inside the executor project.)

Prerequisite: Please confirm that the "Scheduling Center" and "Executor" projects have been successfully deployed and started;

Step 1: Create a new task:

Log in to the dispatch center and click the "New Task" button as shown in the figure below to create a new sample task. Then, refer to the parameter configuration of the task in the screenshot below, and click Save.

Step 2: "GLUE mode (Java)" task development:

Please click the "GLUE" button on the right side of the task to enter the "GLUE editor development interface", as shown in the figure below. The task in the "GLUE mode (Java)" running mode has initialized the sample task code by default, that is, printing Hello World.
(The task of "GLUE mode (Java)" operation mode is actually a piece of Java class code inherited from IJobHandler. It runs in the executor project and can use @Resource / @Autowire to inject other services in the executor. Details Please see Chapter 3)

Step 3: Trigger execution:

Please click the "Execute" button on the right side of the task to manually trigger a task execution (usually, the task scheduling is triggered by configuring a Cron expression).

Step 4: Check the log:

Please click the "Log" button on the right side of the task to go to the task log interface to view the task log.
In the task log interface, you can view the historical scheduling records of the task, as well as the task scheduling information, execution parameters and execution information of each scheduling. Click the "Execution Log" button on the right side of the running task to enter the log console to view the real-time execution log.

In the log console, you can view the log output of the task running on the executor side in real time, in rolling mode, and monitor the task's progress in real time;

3. Task details

Configuration property details:

Basic configuration:
- Executor: the executor the task is bound to. When the task is triggered, registered executors are discovered automatically, which enables automatic task discovery and also provides a convenient way to group tasks. Every task must be bound to an executor, configurable under "Executor Management";
- Task description: a description of the task, for easier task management;
- Owner: the person responsible for the task;
- Alarm email: the email address(es) notified when task scheduling fails; multiple addresses are supported, separated by commas;

Trigger configuration:
- Schedule type:
  None: this type never triggers scheduling on its own;
  CRON: triggers task scheduling according to a CRON expression;
  Fixed rate: triggers task scheduling at a fixed rate, i.e. periodically at a fixed interval;
  Fixed delay: triggers task scheduling with a fixed delay; the delay is counted from the end of the previous run, and the next schedule fires once the delay has elapsed;
- CRON: the Cron expression that triggers task execution;
- Fixed rate: the fixed-rate interval, in seconds;
- Fixed delay: the fixed-delay interval, in seconds;

Task configuration:
- Run mode:
  BEAN mode: the task is maintained as a JobHandler on the executor side; the "JobHandler" property is used to match it with a task in the executor;
  GLUE mode (Java): the task is maintained as source code in the scheduling center; it is actually a piece of Java code extending IJobHandler, maintained as "groovy" source. It runs inside the executor project and can use @Resource/@Autowired to inject other services of the executor;
  GLUE mode (Shell): the task is maintained as source code in the scheduling center; it is actually a "shell" script;
  GLUE mode (Python): the task is maintained as source code in the scheduling center; it is actually a "python" script;
  GLUE mode (PHP): the task is maintained as source code in the scheduling center; it is actually a "php" script;
  GLUE mode (NodeJS): the task is maintained as source code in the scheduling center; it is actually a "nodejs" script;
  GLUE mode (PowerShell): the task is maintained as source code in the scheduling center; it is actually a "PowerShell" script;
- JobHandler: effective when the run mode is "BEAN mode"; it corresponds to the custom value of the "@XxlJob" annotation on the JobHandler developed in the executor;
- Execution parameters: the parameters required for task execution;

Advanced configuration:
- Routing strategy: when the executor is deployed as a cluster, rich routing strategies are provided, including:
  FIRST: always select the first machine;
  LAST: always select the last machine;
  ROUND: select machines in round-robin order;
  RANDOM: randomly select an online machine;
  CONSISTENT_HASH: each task selects a fixed machine according to a hash algorithm, and all tasks are evenly hashed across the machines;
  LEAST_FREQUENTLY_USED: the least frequently used machine is preferred;
  LEAST_RECENTLY_USED: the least recently used machine is preferred;
  FAILOVER: heartbeat checks are performed in order; the first machine that passes the heartbeat check is selected as the target executor and the schedule is sent to it;
  BUSYOVER: idle checks are performed in order; the first machine that passes the idle check is selected as the target executor and the schedule is sent to it;
  SHARDING_BROADCAST: the trigger is broadcast so that every machine in the cluster executes the task once, and the system automatically passes sharding parameters; sharded tasks can be developed based on these parameters (see the sketch after this list);
- Child task: every task has a unique task ID (available from the task list); when this task finishes and succeeds, the task corresponding to the child task ID is actively triggered once.
- Misfire strategy (schedule expiration strategy):
  Ignore: when the schedule has expired, ignore the missed run and recompute the next trigger time from the current time;
  Fire once now: when the schedule has expired, execute once immediately and recompute the next trigger time from the current time;
- Blocking strategy: how to handle schedules that arrive faster than the executor can process them;
  Serial execution (default): after a scheduling request reaches the single-machine executor, it enters a FIFO queue and runs serially;
  Discard later schedules: if the executor already has a running schedule for this task, the new request is discarded and marked as failed;
  Override earlier schedules: if the executor already has a running schedule for this task, the running schedule is terminated and the queue is cleared, then the new schedule is run;
- Task timeout: a custom task timeout is supported; when a run exceeds the timeout, the task is actively interrupted;
- Failure retry count: a custom retry count is supported; when the task fails it is actively retried according to the configured count;
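To make SHARDING_BROADCAST concrete, here is a minimal, hedged sketch of a method-form sharded handler. It assumes the XxlJobHelper.getShardIndex()/getShardTotal() helpers of xxl-job-core 2.x; the handler name and the user-ID list are purely illustrative:

import com.xxl.job.core.context.XxlJobHelper;
import com.xxl.job.core.handler.annotation.XxlJob;
import org.springframework.stereotype.Component;
import java.util.Arrays;
import java.util.List;

@Component
public class ShardingJob {

    // Broadcast to the whole executor cluster; each node processes only its own slice of the data.
    @XxlJob("myShardingJobHandler")
    public void myShardingJobHandler() {
        int shardIndex = XxlJobHelper.getShardIndex();   // index of this executor node, starting at 0
        int shardTotal = XxlJobHelper.getShardTotal();   // total number of nodes in the cluster

        List<Long> userIds = Arrays.asList(1L, 2L, 3L, 4L, 5L);  // hypothetical data set
        for (Long userId : userIds) {
            // Simple modulo sharding: each node handles only the IDs that map to its shard index.
            if (userId % shardTotal == shardIndex) {
                XxlJobHelper.log("shard {} of {} processing user {}", shardIndex, shardTotal, userId);
            }
        }
    }
}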

3.1 BEAN mode (class form)

Bean mode tasks support class-based development, and each task corresponds to a Java class.

  • Advantages: it does not constrain the project environment and has good compatibility. Even frameless projects, such as projects started directly from a main method, are supported; see the sample project "xxl-job-executor-sample-frameless";
  • Disadvantages:
    • each task occupies its own Java class, which wastes classes;
    • tasks are not scanned and injected into the executor container automatically; they must be registered manually.

Step 1: In the executor project, develop the Job class:

  1. Develop a JobHandler class that extends "com.xxl.job.core.handler.IJobHandler" and implement its task method.
  2. Manually register it into the executor container, for example:

XxlJobExecutor.registJobHandler("demoJobHandler", new DemoJobHandler());
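For reference, a minimal, hedged sketch of such a class (the class name and log message are illustrative; the void execute() signature assumes xxl-job-core 2.3.x, while older versions used ReturnT<String> execute(String param)):

import com.xxl.job.core.context.XxlJobHelper;
import com.xxl.job.core.handler.IJobHandler;

public class DemoJobHandler extends IJobHandler {

    // Task body invoked by the executor for every trigger of this JobHandler.
    @Override
    public void execute() throws Exception {
        XxlJobHelper.log("XXL-JOB, class-form Hello World.");
    }
}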

Step 2: Scheduling center, create a new scheduling task

The subsequent steps are the same as in "3.2 BEAN mode (method form)"; refer to that section.

3.2 BEAN mode (method form)

Bean mode tasks also support method-based development, where each task corresponds to a method.

  • Advantages:
    • Each task only needs one method plus the "@XxlJob" annotation, which is more convenient and faster;
    • Tasks are scanned and injected into the executor container automatically.
  • Disadvantages: a Spring container environment is required;

For method-based tasks, the framework generates a JobHandler proxy under the hood; as with the class-based form, such tasks also live in the executor's task container as JobHandlers.

Step 1: In the executor project, develop the Job method:

1. Task development: develop the Job method inside a Spring Bean instance;
2. Annotation configuration: annotate the Job method with "@XxlJob(value="custom jobhandler name", init = "JobHandler init method", destroy = "JobHandler destroy method")"; the annotation's value corresponds to the JobHandler property of the task created in the scheduling center.
3. Execution log: use "XxlJobHelper.log" to print execution logs;
4. Task result: the default task result is "success" and does not need to be set explicitly; to set a different result, e.g. failure, use "XxlJobHelper.handleFail/handleSuccess" to set it yourself;

// See "com.xxl.job.executor.service.jobhandler.SampleXxlJob" in the sample executor, for example:
@XxlJob("demoJobHandler")
public void demoJobHandler() throws Exception {
    XxlJobHelper.log("XXL-JOB, Hello World.");
}

Step 2: Scheduling center, create a new scheduling task

Refer to the above "Detailed description of configuration properties" to configure parameters for the newly created task, select "BEAN mode" as the operating mode, and fill in the value defined in the task annotation " @XxlJob " for the JobHandler property;
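Building on the sample above, here is a hedged sketch of a method-form handler that reads the task's execution parameter and declares init/destroy hooks. The class, handler name, and parameter handling are illustrative; XxlJobHelper.getJobParam() and handleFail() are assumed from xxl-job-core 2.3.x:

import com.xxl.job.core.context.XxlJobHelper;
import com.xxl.job.core.handler.annotation.XxlJob;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;

@Component
public class ParamJob {
    private static final Logger logger = LoggerFactory.getLogger(ParamJob.class);

    // "paramJobHandler" is the value the task's JobHandler property must match in the scheduling center.
    @XxlJob(value = "paramJobHandler", init = "init", destroy = "destroy")
    public void paramJobHandler() throws Exception {
        String param = XxlJobHelper.getJobParam();   // the "execution parameters" configured on the task
        if (param == null || param.isEmpty()) {
            // Mark this run as failed so the scheduling center records a failure (and can alarm/retry).
            XxlJobHelper.handleFail("missing execution parameter");
            return;
        }
        XxlJobHelper.log("received param: {}", param);
        // The default result is "success"; no explicit call is needed on the happy path.
    }

    public void init() {
        logger.info("paramJobHandler init");
    }

    public void destroy() {
        logger.info("paramJobHandler destroy");
    }
}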

Natively built-in Bean mode tasks

For reference and quick trials, the sample executor natively provides several Bean mode task handlers that can be configured and used directly:

  • demoJobHandler: a simple sample task that simulates time-consuming logic inside the task; users can use it to try features such as the online rolling log;
  • shardingJobHandler: a sharding sample task that reads and processes the sharding parameters inside the task; refer to it to get familiar with sharded tasks;
  • httpJobHandler: a general HTTP task handler; the business side only needs to provide information such as the HTTP URL, with no restriction on language or platform. The input parameters of the sample task are as follows:
    url: http://www.xxx.com
    method: get or post
    data: post data
  • commandJobHandler: a general command-line task handler; the business side only needs to provide a command line, such as the "pwd" command (see the sketch after this list);
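For illustration only, here is a hedged sketch of what such a command-line handler could look like. This is not the actual SampleXxlJob source; the handler name, parameter handling, and exit-code check are assumptions:

import com.xxl.job.core.context.XxlJobHelper;
import com.xxl.job.core.handler.annotation.XxlJob;
import org.springframework.stereotype.Component;
import java.io.BufferedReader;
import java.io.InputStreamReader;

@Component
public class CommandJob {

    // Runs the command line passed as the task's execution parameter, e.g. "pwd".
    @XxlJob("myCommandJobHandler")
    public void myCommandJobHandler() throws Exception {
        String command = XxlJobHelper.getJobParam();
        if (command == null || command.trim().isEmpty()) {
            XxlJobHelper.handleFail("command is empty");
            return;
        }

        Process process = Runtime.getRuntime().exec(command);
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                XxlJobHelper.log(line);   // forward the command output to the XXL-JOB rolling log
            }
        }

        int exitCode = process.waitFor();
        if (exitCode != 0) {
            XxlJobHelper.handleFail("command exited with code " + exitCode);
        }
    }
}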

3.3 GLUE mode (Java)

The task is maintained as source code in the scheduling center and supports online updates through the Web IDE; changes are compiled and take effect in real time, so no JobHandler needs to be specified. The development process is as follows:

Step 1: In the scheduling center, create a new scheduling task:

Refer to the above "Detailed description of configuration properties" to configure parameters for the newly created task, and select "GLUE mode (Java)" as the operating mode;

Step 2: Develop task code:

Select the specified task, click the "GLUE" button on the right side of the task, and you will go to the Web IDE interface of the GLUE task, which supports the development of the task code (you can also copy and paste it into the editor after the development is completed in the IDE).

Version rollback (the last 30 versions are supported): in the Web IDE of the GLUE task, open the "Version Backtracking" drop-down in the upper right corner to list the GLUE update history; selecting a version displays its code, and saving rolls the GLUE code back to that historical version;
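For orientation, a hedged sketch of what the GLUE (Java) code edited in the Web IDE roughly looks like: a plain Java class extending IJobHandler, with no JobHandler registration needed on the executor side (the default template shipped with your version may differ):

package com.xxl.job.service.handler;

import com.xxl.job.core.context.XxlJobHelper;
import com.xxl.job.core.handler.IJobHandler;

public class DemoGlueJobHandler extends IJobHandler {

    @Override
    public void execute() throws Exception {
        // Output written via XxlJobHelper.log appears in the scheduling center's rolling log view.
        XxlJobHelper.log("XXL-JOB, Hello World (GLUE mode).");
    }
}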

3.4 GLUE Mode (Shell)

Step 1: In the scheduling center, create a new scheduling task

Refer to the above "Detailed Description of Configuration Properties" to configure the parameters of the newly created task, and select "GLUE Mode (Shell)" as the operating mode;

Step 2: Develop task code:

Select the specified task, click the "GLUE" button on the right side of the task, and you will go to the Web IDE interface of the GLUE task, which supports the development of the task code (you can also copy and paste it into the editor after the development is completed in the IDE).

The task of this mode is actually a "shell" script;

3.5 GLUE mode (Python)

Step 1: In the scheduling center, create a new scheduling task

Refer to the "Configuration Properties Detailed Description" above to configure the parameters of the newly created task, and select "GLUE Mode (Python)" as the operating mode;

Step 2: Develop task code:

Select the specified task, click the "GLUE" button on the right side of the task, and you will go to the Web IDE interface of the GLUE task, which supports the development of the task code (you can also copy and paste it into the editor after the development is completed in the IDE).

The task of this mode is actually a "python" script;

3.6 GLUE mode (NodeJS)

Step 1: In the scheduling center, create a new scheduling task

Refer to the "Configuration Properties Detailed Description" above to configure the parameters of the newly created task, and select "GLUE Mode (NodeJS)" as the operating mode;

Step 2: Develop task code:

Select the specified task, click the "GLUE" button on the right side of the task, and you will go to the Web IDE interface of the GLUE task, which supports the development of the task code (you can also copy and paste it into the editor after the development is completed in the IDE).

The task of this mode is actually a "nodeJS" script;

3.7 GLUE mode (PHP)

ditto

3.8 GLUE mode (PowerShell)

ditto

4. Operation Guide

4.1 Configuring the Executor

Click to enter the "Executor Management" interface, as shown in the figure below:

  1. 1、"调度中心OnLine:"右侧显示在线的"调度中心"列表, 任务执行结束后, 将会以failover的模式进行回调调度中心通知执行结果, 避免回调的单点风险;
  2. 2、"执行器列表" 中显示在线的执行器列表, 可通过"OnLine 机器"查看对应执行器的集群机器。

Click the button "+ Add Actuator" and the pop-up box is as shown in the figure below, you can add an actuator configuration:

Actuator property description

  1. AppName: 是每个执行器集群的唯一标示AppName, 执行器会周期性以AppName为对象进行自动注册。可通过该配置自动发现注册成功的执行器, 供任务调度时使用;
  2. 名称: 执行器的名称, 因为AppName限制字母数字等组成,可读性不强, 名称为了提高执行器的可读性;
  3. 排序: 执行器的排序, 系统中需要执行器的地方,如任务新增, 将会按照该排序读取可用的执行器列表;
  4. 注册方式:调度中心获取执行器地址的方式;
  5. 自动注册:执行器自动进行执行器注册,调度中心通过底层注册表可以动态发现执行器机器地址;
  6. 手动录入:人工手动录入执行器的地址信息,多地址逗号分隔,供调度中心使用;
  7. 机器地址:"注册方式"为"手动录入"时有效,支持人工维护执行器的地址信息;

4.2 Create a new task

Enter the task management interface, click the "Add Task" button, configure the task properties in the pop-up "Add Task" dialog, and save. For details of the properties, refer to chapter "3. Task details".

4.3 Editing tasks

Enter the task management interface and select the specified task. Click the "Edit" button on the right side of the task, update the task properties in the pop-up "Edit Task" dialog, and save.

4.4 Edit GLUE code

This operation is only for GLUE tasks.

Select the specified task and click the "GLUE" button on the right side of the task to go to the Web IDE interface of the GLUE task, which supports the development of the task code. Refer to chapter "3.3 GLUE Mode (Java)".

4.5 Start/stop tasks

Tasks can be "started" and "stopped".
It should be noted that the start/stop here is only for the subsequent scheduling trigger behavior of the task, and will not affect the scheduled tasks that have already been triggered. If you need to terminate the scheduled tasks that have been triggered, please refer to "4.9 Terminate Running Tasks"

4.6 Manually trigger a schedule

Click the "Execute" button to manually trigger a task scheduling without affecting the original scheduling rules.

4.7 View scheduling log

Click the "Log" button to view the task history scheduling log. On the history transfer log interface, you can view the scheduling results and execution results of each task scheduling, and click the "Execution Log" button to view the complete log of the executor.

  1. Schedule time: the time at which the "scheduling center" triggered this schedule and sent the execution signal to the "executor";
  2. Schedule result: the result of this trigger in the "scheduling center"; 200 means success, 500 or other values mean failure;
  3. Schedule remark: the log information of this trigger from the "scheduling center";
  4. Executor address: the machine address on which this task execution ran;
  5. Run mode: the run mode of the task when it was triggered; see chapter "3. Task details";
  6. Task parameters: the input parameters of this task execution;
  7. Execution time: the time at which the "executor" called back after this task execution finished;
  8. Execution result: the result of this task execution in the "executor"; 200 means success, 500 or other values mean failure;
  9. Execution remark: the log information of this task execution in the "executor";
  10. Operations:
      - "Execution Log" button: view the detailed execution log of this run; see "4.8 View execution log";
      - "Terminate Task" button: terminate the execution thread of this task on the executor corresponding to this schedule; any queued, not-yet-executed runs of the task are terminated as well;

4.8 View execution log

Click the "Execution Log" button on the right side of the execution log to jump to the execution log interface, where you can view the complete log printed in the business code, as shown in the figure below;

4.9 Terminate running tasks

This applies only to tasks that are currently executing.
On the task log page, click the "Terminate Task" button on the right; a termination request is sent to the executor running this task, the task is terminated, and the task's entire execution queue is cleared at the same time.

Task termination is implemented by "interrupting" the execution thread, which raises an "InterruptedException" in the running task. If this exception is caught and swallowed inside the JobHandler, the termination feature will not work.

Therefore, if task termination does not work as described above, "InterruptedException" must be handled specially inside the JobHandler (re-thrown upward). The correct pattern is as follows:

try {
    // do something
} catch (Exception e) {
    if (e instanceof InterruptedException) {
        throw e;
    }
    logger.warn("{}", e);
}

Likewise, when child threads are started inside the JobHandler, they must not catch and swallow "InterruptedException" either; it should be actively propagated upward.
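A hedged sketch of one way to honor that rule when a JobHandler starts a child thread (the handler name and thread structure are illustrative, not taken from the source):

import com.xxl.job.core.handler.annotation.XxlJob;
import org.springframework.stereotype.Component;

@Component
public class ChildThreadJob {

    @XxlJob("childThreadJobHandler")
    public void childThreadJobHandler() throws Exception {
        Thread worker = new Thread(() -> {
            try {
                // do a slice of work; blocking calls raise InterruptedException on termination
                Thread.sleep(5000L);
            } catch (InterruptedException e) {
                // Do not swallow the interruption: restore the flag and let the thread exit promptly.
                Thread.currentThread().interrupt();
            }
        });
        worker.start();

        try {
            worker.join();          // throws InterruptedException when the task is terminated
        } catch (InterruptedException e) {
            worker.interrupt();     // forward the termination signal to the child thread
            throw e;                // re-throw upward so XXL-JOB can finish terminating the task
        }
    }
}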

When the task is terminated, the "destroy()" method corresponding to the JobHandler will be executed, which can be used to handle some resource recovery logic.

4.10 Delete Execution Log

On the task log page, after selecting the executor and task, click the "Delete" button on the right; a "Log Cleanup" dialog appears, offering several log cleanup strategies. Select one and click "OK" to perform the cleanup;

4.11 Delete task

Click the Delete button to delete the corresponding task.

4.12 User Management

Enter the "User Management" interface to view and manage user information;

Currently users are divided into two roles:

  • Administrator: has full permissions; can manage user information online and assign permissions to users, at the granularity of executors;
  • Ordinary user: only has permissions for the executors assigned to them and for operating the related tasks;


Origin blog.csdn.net/MinggeQingchun/article/details/129883009