Microservices: xxl-job installation (Docker), usage and Spring Boot integration [detailed full version]


1. Introduction

XXL-JOB is a distributed task scheduling platform. Its core design goals are rapid development, easy learning, light weight, and easy extension. Its source code is open and it already runs on the production lines of many companies; it is ready to use out of the box.

1.1 The functions and advantages of xxl-job

1.1.1 xxl-job function

Unified management of scheduled tasks. Compared with Spring Boot's @Scheduled, the schedule expression can be modified at any time, and xxl-job handles more complex scheduling scenarios such as clustering, failover, and sharding.
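
To make the comparison concrete, here is a minimal sketch, assuming spring-boot-starter (with @EnableScheduling somewhere in the project) and xxl-job-core are on the classpath; the class name and handler name are illustrative, not from the original article:

import com.xxl.job.core.handler.annotation.XxlJob;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class ScheduleContrast {

    // Spring's built-in scheduling: the cron expression is fixed in code,
    // so changing it requires a rebuild/redeploy, and there is no built-in
    // clustering, failover or sharding.
    @Scheduled(cron = "0 0 2 * * ?")
    public void springScheduledJob() {
        System.out.println("runs at 02:00 every day, cron hard-coded");
    }

    // xxl-job: the method only declares a handler name; the cron expression,
    // routing, sharding and failover are configured in xxl-job-admin and can
    // be changed at runtime.
    @XxlJob("contrastHandler")
    public void xxlJobHandler() {
        System.out.println("schedule controlled by the scheduling center");
    }
}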

1.1.2 xxl-job advantages

1. Simple: tasks can be created, updated, and deleted (CRUD) through the web UI; the operation is simple and you can get started within one minute.
2. Dynamic: task status can be changed dynamically, tasks can be started/stopped, and a running task can be terminated, all taking effect immediately.
3. Scheduling center HA (centralized): scheduling uses a centralized design; the "scheduling center" is built on a self-developed scheduling component and supports cluster deployment, which guarantees HA of the scheduling center.
4. Executor HA (distributed): tasks are executed in a distributed fashion; the task "executors" support cluster deployment, which guarantees HA of task execution.
5. Registry: executors register themselves automatically and periodically, and the scheduling center automatically discovers registered executors and triggers tasks on them; manually entering executor addresses is also supported.
6. Elastic scaling: once a new executor machine goes online or offline, tasks are reassigned at the next scheduling cycle.
7. Trigger strategies: rich trigger strategies are provided, including cron triggers, fixed-interval triggers, fixed-delay triggers, API (event) triggers, manual triggers, and parent/child task triggers.
8. Misfire (schedule-expiration) strategy: a compensation strategy for schedules that the scheduling center missed, including "ignore" and "trigger one compensation run immediately".
9. Blocking strategy: the strategy used when schedules arrive faster than the executor can process them, including single-machine serial execution (default), discarding subsequent schedules, and overriding the previous schedule.
10. Task timeout control: a custom task timeout is supported; a task that runs past it is actively interrupted.
...and more.

1.2 Resource location and usage instructions

1.2.1 Documentation

=> Portal: xxl usage documentation

1.2.2 docker image location

=> Portal: docker image xxl-job 2.4.0

2. Install and configure xxl-job (two methods)

2.0 Common step: SQL script (for method 2.2 you can run this script without downloading the source)

In the source code: /xxl-job/doc/db/tables_xxl_job.sql

CREATE database if NOT EXISTS `xxl_job` default character set utf8mb4 collate utf8mb4_general_ci;
use `xxl_job`;

SET NAMES utf8mb4;
CREATE TABLE `xxl_job_info` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `job_group` int(11) NOT NULL COMMENT '执行器主键ID',
  `job_desc` varchar(255) NOT NULL,
  `add_time` datetime DEFAULT NULL,
  `update_time` datetime DEFAULT NULL,
  `author` varchar(64) DEFAULT NULL COMMENT '作者',
  `alarm_email` varchar(255) DEFAULT NULL COMMENT '报警邮件',
  `schedule_type` varchar(50) NOT NULL DEFAULT 'NONE' COMMENT '调度类型',
  `schedule_conf` varchar(128) DEFAULT NULL COMMENT '调度配置,值含义取决于调度类型',
  `misfire_strategy` varchar(50) NOT NULL DEFAULT 'DO_NOTHING' COMMENT '调度过期策略',
  `executor_route_strategy` varchar(50) DEFAULT NULL COMMENT '执行器路由策略',
  `executor_handler` varchar(255) DEFAULT NULL COMMENT '执行器任务handler',
  `executor_param` varchar(512) DEFAULT NULL COMMENT '执行器任务参数',
  `executor_block_strategy` varchar(50) DEFAULT NULL COMMENT '阻塞处理策略',
  `executor_timeout` int(11) NOT NULL DEFAULT '0' COMMENT '任务执行超时时间,单位秒',
  `executor_fail_retry_count` int(11) NOT NULL DEFAULT '0' COMMENT '失败重试次数',
  `glue_type` varchar(50) NOT NULL COMMENT 'GLUE类型',
  `glue_source` mediumtext COMMENT 'GLUE源代码',
  `glue_remark` varchar(128) DEFAULT NULL COMMENT 'GLUE备注',
  `glue_updatetime` datetime DEFAULT NULL COMMENT 'GLUE更新时间',
  `child_jobid` varchar(255) DEFAULT NULL COMMENT '子任务ID,多个逗号分隔',
  `trigger_status` tinyint(4) NOT NULL DEFAULT '0' COMMENT '调度状态:0-停止,1-运行',
  `trigger_last_time` bigint(13) NOT NULL DEFAULT '0' COMMENT '上次调度时间',
  `trigger_next_time` bigint(13) NOT NULL DEFAULT '0' COMMENT '下次调度时间',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

CREATE TABLE `xxl_job_log` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `job_group` int(11) NOT NULL COMMENT '执行器主键ID',
  `job_id` int(11) NOT NULL COMMENT '任务,主键ID',
  `executor_address` varchar(255) DEFAULT NULL COMMENT '执行器地址,本次执行的地址',
  `executor_handler` varchar(255) DEFAULT NULL COMMENT '执行器任务handler',
  `executor_param` varchar(512) DEFAULT NULL COMMENT '执行器任务参数',
  `executor_sharding_param` varchar(20) DEFAULT NULL COMMENT '执行器任务分片参数,格式如 1/2',
  `executor_fail_retry_count` int(11) NOT NULL DEFAULT '0' COMMENT '失败重试次数',
  `trigger_time` datetime DEFAULT NULL COMMENT '调度-时间',
  `trigger_code` int(11) NOT NULL COMMENT '调度-结果',
  `trigger_msg` text COMMENT '调度-日志',
  `handle_time` datetime DEFAULT NULL COMMENT '执行-时间',
  `handle_code` int(11) NOT NULL COMMENT '执行-状态',
  `handle_msg` text COMMENT '执行-日志',
  `alarm_status` tinyint(4) NOT NULL DEFAULT '0' COMMENT '告警状态:0-默认、1-无需告警、2-告警成功、3-告警失败',
  PRIMARY KEY (`id`),
  KEY `I_trigger_time` (`trigger_time`),
  KEY `I_handle_code` (`handle_code`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

CREATE TABLE `xxl_job_log_report` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `trigger_day` datetime DEFAULT NULL COMMENT '调度-时间',
  `running_count` int(11) NOT NULL DEFAULT '0' COMMENT '运行中-日志数量',
  `suc_count` int(11) NOT NULL DEFAULT '0' COMMENT '执行成功-日志数量',
  `fail_count` int(11) NOT NULL DEFAULT '0' COMMENT '执行失败-日志数量',
  `update_time` datetime DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `i_trigger_day` (`trigger_day`) USING BTREE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

CREATE TABLE `xxl_job_logglue` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `job_id` int(11) NOT NULL COMMENT '任务,主键ID',
  `glue_type` varchar(50) DEFAULT NULL COMMENT 'GLUE类型',
  `glue_source` mediumtext COMMENT 'GLUE源代码',
  `glue_remark` varchar(128) NOT NULL COMMENT 'GLUE备注',
  `add_time` datetime DEFAULT NULL,
  `update_time` datetime DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

CREATE TABLE `xxl_job_registry` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `registry_group` varchar(50) NOT NULL,
  `registry_key` varchar(255) NOT NULL,
  `registry_value` varchar(255) NOT NULL,
  `update_time` datetime DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `i_g_k_v` (`registry_group`,`registry_key`,`registry_value`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

CREATE TABLE `xxl_job_group` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `app_name` varchar(64) NOT NULL COMMENT '执行器AppName',
  `title` varchar(12) NOT NULL COMMENT '执行器名称',
  `address_type` tinyint(4) NOT NULL DEFAULT '0' COMMENT '执行器地址类型:0=自动注册、1=手动录入',
  `address_list` text COMMENT '执行器地址列表,多地址逗号分隔',
  `update_time` datetime DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

CREATE TABLE `xxl_job_user` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `username` varchar(50) NOT NULL COMMENT '账号',
  `password` varchar(50) NOT NULL COMMENT '密码',
  `role` tinyint(4) NOT NULL COMMENT '角色:0-普通用户、1-管理员',
  `permission` varchar(255) DEFAULT NULL COMMENT '权限:执行器ID列表,多个逗号分割',
  PRIMARY KEY (`id`),
  UNIQUE KEY `i_username` (`username`) USING BTREE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

CREATE TABLE `xxl_job_lock` (
  `lock_name` varchar(50) NOT NULL COMMENT '锁名称',
  PRIMARY KEY (`lock_name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

INSERT INTO `xxl_job_group`(`id`, `app_name`, `title`, `address_type`, `address_list`, `update_time`) VALUES (1, 'xxl-job-executor-sample', '示例执行器', 0, NULL, '2018-11-03 22:21:31' );
INSERT INTO `xxl_job_info`(`id`, `job_group`, `job_desc`, `add_time`, `update_time`, `author`, `alarm_email`, `schedule_type`, `schedule_conf`, `misfire_strategy`, `executor_route_strategy`, `executor_handler`, `executor_param`, `executor_block_strategy`, `executor_timeout`, `executor_fail_retry_count`, `glue_type`, `glue_source`, `glue_remark`, `glue_updatetime`, `child_jobid`) VALUES (1, 1, '测试任务1', '2018-11-03 22:21:31', '2018-11-03 22:21:31', 'XXL', '', 'CRON', '0 0 0 * * ? *', 'DO_NOTHING', 'FIRST', 'demoJobHandler', '', 'SERIAL_EXECUTION', 0, 0, 'BEAN', '', 'GLUE代码初始化', '2018-11-03 22:21:31', '');
INSERT INTO `xxl_job_user`(`id`, `username`, `password`, `role`, `permission`) VALUES (1, 'admin', 'e10adc3949ba59abbe56e057f20f883e', 1, NULL);
INSERT INTO `xxl_job_lock` ( `lock_name`) VALUES ( 'schedule_lock');

commit;

2.1 Method 1: Build from source

2.1.1 Source code download location

=> Portal: GitHub repository
=> Portal: Gitee repository

2.1.2 Locate the admin module in IDEA

xxl-job-admin: the key module, the task scheduling center itself.
xxl-job-executor-samples: sample executors for testing; quite simple and can be ignored.

2.1.3 Locate application.properties

Mainly modify the MySQL connection parameters.

### 服务部署的端口,
server.port=8080
server.servlet.context-path=/xxl-job-admin

###调度中心JDBC链接:链接地址和之前所创建的调度数据库的地址一致
spring.datasource.url=jdbc:mysql://127.0.0.1:3306/xxl_job?useUnicode=true&characterEncoding=UTF-8&autoReconnect=true&serverTimezone=Asia/Shanghai
spring.datasource.username=root
spring.datasource.password=root_pwd
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver

## 调度线程池最大线程配置【必填】
xxl.job.triggerpool.fast.max=200
xxl.job.triggerpool.slow.max=100

### 调度中心日志表数据保存天数 [必填]:过期日志自动清理;限制大于等于7时生效,否则, 如-1,关闭自动清理功能;
xxl.job.logretentiondays=30

2.1.4 Package with Maven to generate the jar

mvn clean compile package install

2.2 Method 2: Run with the official Docker image (*)

2.2.1 Pull the Docker image

docker pull xuxueli/xxl-job-admin:2.4.0

2.2.2 Create and run the Docker container

-> (1) Run the command

docker run -di -e PARAMS="--spring.datasource.url=jdbc:mysql://192.168.1.29:3306/xxl_job?useUnicode=true&characterEncoding=UTF-8 --spring.datasource.username=root --spring.datasource.password=pzy123 --xxl.job.accessToken=pingzhuyan.test" \
-p 9001:8080 \
-v /usr/local/src/docker/xxl-job:/data/applogs \
--name xxl-job \
--privileged=true \
xuxueli/xxl-job-admin:2.4.0

-> (2) Parameter explanation

--privileged=true: root inside the container gets real root privileges on the host
-v mounts the host directory for the logs; the trailing xxl-job directory in the host path can be omitted
-p maps ports: the host port is on the left, the container port on the right
--xxl.job.accessToken=pingzhuyan.test: this accessToken is needed later on the executor side (passing it here overrides the value in the configuration file)

2.3 View startup results

2.3.1 Direct access address

http://192.168.1.29:9001/xxl-job-admin

2.3.2 In case of an access exception

Open the firewall port on the server (systemctl / firewall-cmd operation).

2.3.3 Check whether the service startup is normal

If you see a database error, correct the database connection information.
If you see a file permission error, grant the required (root) permissions.

2.3.4 Related Docker operations

Docker operations column ===> Portal
Docker-related operations are documented in the column above; take a look if you are interested. Commonly used commands:

# 查看日志
docker container logs xxl-job
# 查看所有容器
docker ps -a 
# 删除容器
docker container rm -f xxl-job
# 删除images镜像
docker rmi -f 镜像id
# 进入容器bash操作
docker exec -it xxl-job bash
# 重启服务
docker restart xxl-job

3. Spring Boot integration with xxl-job (new version 2.4.0)

The handler syntax changed little between versions 2.3.0 and 2.4.0. Add the xxl-job-core dependency (version 2.4.0) to the Spring Boot project first, then add the configuration below.

3.1 Configuration file

3.1.1 yml configuration code

xxl:
  job:
    admin:
      # 调度中心服务部署的地址
      addresses: http://192.168.1.29:9001/xxl-job-admin
    # 执行器通讯TOKEN,要和调度中心服务部署配置的accessToken一致,要不然无法连接注册
    accessToken: pingzhuyan.test
    executor:
      # 执行器AppName
      appname: pzy-beta1
      # 执行器注册 [选填] 
      address:
      ip:
      #执行器端口号: 小于等于0则自动获取 默认端口为9999,单机部署多个执行器时,注意要配置不同执行器端口;
      port: 0
      # 执行器运行日志文件存储磁盘路径 [选填] 需要对该路径拥有读写权限;为空则使用默认路径;
      logpath: D:/usr/local/src/xxl-job
      # 执行器日志文件保存天数 [选填] 过期日志自动清理, 限制值大于等于3时生效  否则, 如-1, 关闭自动清理功能
      logretentiondays: 15 

3.1.2 properties configuration file

### 调度中心的地址 ,就是 xxl-job-admin 这个服务的地址
xxl.job.admin.addresses=http://192.168.1.29:9001/xxl-job-admin
 
### 要和xxl-job-admin 中的accessToken统一 (可以没有)
xxl.job.accessToken=pingzhuyan.test
 
### 执行器名称,可自定义
xxl.job.executor.appname=pzy-beta1
### 会将该地址注册到调度中心,调度中心会用该地址调度任务, 可为空默认就是 ip:port , 端口不可以和业务端口重复
xxl.job.executor.address=
### 可为空,默认获取本机ip
xxl.job.executor.ip=
xxl.job.executor.port=0
### 运行日志所保存的路径
xxl.job.executor.logpath=D:/usr/local/src/xxl-job
### 日志存放时间
xxl.job.executor.logretentiondays=15

3.2 Explanation of some parameters in the configuration

3.2.1 Scheduling center address

The address where the scheduling center (xxl-job-admin) service is deployed.
PS: do not deploy it on the public network and tunnel back in for intranet testing; that is troublesome.

3.2.2 Where accessToken comes from

A parameter was passed when the Docker container was created: --xxl.job.accessToken=pingzhuyan.test. It is the executor communication token and must match the accessToken configured on the scheduling center, otherwise the executor cannot connect and register, as shown below:
[screenshot: accessToken configuration]

3.2.3 Where the executor AppName comes from

In the scheduling center, add a new executor and copy its AppName, as shown in the figure:
[screenshot: executor AppName]

3.2.4 Registration address, IP and port

  • address: if set, this value is preferred as the registration address; if empty, the embedded server's "IP:PORT" is used instead. This gives more flexible support for executors running in containers with dynamic IPs and dynamically mapped ports.
  • port: required; 0 means the port is assigned automatically (the default is 9999). If the port is fixed, starting several executors on the same machine (for a cluster) will fail.

3.2.5 logpath: execution log storage location [optional]

On Windows the path looks as shown in the figure; a log file is written for every execution.
[screenshot: executor log directory]

3.2.6 logretentiondays: log retention days [optional]

Expired logs are cleaned automatically; the setting takes effect when the value is at least 3. Otherwise (for example -1) automatic cleaning is disabled.

3.3 Spring Boot code integration

3.3.1 The config class

The XxlJobSpringExecutor part changes little between versions; it follows the usual bean configuration style.
The commented-out line at the bottom, XxlJobExecutor.registJobHandler("pzyBetaHandler", new TaskDispatch());, registers an entire class as a scheduled-task handler, the same style as GLUE(Java) mode; in that mode @Autowired cannot be used and collaborators must be created with new (a sketch of such a handler follows the config class below).

import com.xxl.job.core.executor.impl.XxlJobSpringExecutor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

/**
 * @author pzy
 * @version 0.1.0
 */
@Configuration
@Slf4j
public class XxlJobConfig {

    @Value("${xxl.job.admin.addresses}")
    private String adminAddresses;

    @Value("${xxl.job.accessToken}")
    private String accessToken;

    @Value("${xxl.job.executor.appname}")
    private String appname;

    @Value("${xxl.job.executor.address}")
    private String address;

    @Value("${xxl.job.executor.ip}")
    private String ip;

    @Value("${xxl.job.executor.port}")
    private int port;

    @Value("${xxl.job.executor.logpath}")
    private String logPath;

    @Value("${xxl.job.executor.logretentiondays}")
    private int logRetentionDays;

    @Bean
    public XxlJobSpringExecutor xxlJobExecutor() {
        log.info("===> pzy xxl-job Bean执行开始");
        XxlJobSpringExecutor xxlJobSpringExecutor = new XxlJobSpringExecutor();
        xxlJobSpringExecutor.setAdminAddresses(adminAddresses);
        xxlJobSpringExecutor.setAppname(appname);
        xxlJobSpringExecutor.setAddress(address);
        xxlJobSpringExecutor.setIp(ip);
        xxlJobSpringExecutor.setPort(port);
        xxlJobSpringExecutor.setAccessToken(accessToken);
        xxlJobSpringExecutor.setLogPath(logPath);
        xxlJobSpringExecutor.setLogRetentionDays(logRetentionDays);
        log.info("===> pzy xxl-job Bean执行成功");

//        XxlJobExecutor.registJobHandler("pzyBetaHandler", new TaskDispatch());

        return xxlJobSpringExecutor;
    }
}
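
For the manual-registration mode mentioned above, the handler is a plain class extending IJobHandler. Below is a minimal sketch of what the TaskDispatch class referenced in the commented-out line could look like, assuming the 2.4.0 API where execute() takes no arguments (the class itself is hypothetical, taken only from that comment):

import com.xxl.job.core.context.XxlJobHelper;
import com.xxl.job.core.handler.IJobHandler;

// Hypothetical handler for the commented-out registJobHandler(...) call above.
// It is registered manually rather than scanned as a Spring bean, so any
// collaborators must be created with new instead of injected with @Autowired.
public class TaskDispatch extends IJobHandler {

    @Override
    public void execute() throws Exception {
        XxlJobHelper.log("TaskDispatch triggered by the scheduling center");
        // report the result explicitly (optional; a normal return counts as success)
        XxlJobHelper.handleSuccess();
    }
}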

3.3.2 Test code (bean mode)

import com.xxl.job.core.handler.annotation.XxlJob;
import org.springframework.stereotype.Component;
 
/**
 * @author pzy
 * @version 0.1.0
 */
@Component
public class TestTask {
 
    @XxlJob("pzyBetaHandler")
    public void pzyBetaHandler() throws Exception {
        System.out.println("hello---->xxl-job");
        // default success
    }
}
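
In the 2.3.0+/2.4.0 style the handler method takes no parameters; the job parameter configured in the scheduling center is read through XxlJobHelper instead. A small sketch of reading the parameter and reporting failure (class and handler names here are examples, not from the original article):

import com.xxl.job.core.context.XxlJobHelper;
import com.xxl.job.core.handler.annotation.XxlJob;
import org.springframework.stereotype.Component;

@Component
public class ParamTask {

    @XxlJob("paramHandler")
    public void paramHandler() {
        // the "任务参数" field configured for the job in xxl-job-admin
        String param = XxlJobHelper.getJobParam();
        XxlJobHelper.log("received param: {}", param);

        if (param == null || param.isEmpty()) {
            // mark this execution as failed in the scheduling center
            XxlJobHelper.handleFail("param is empty");
            return;
        }
        // returning normally is reported as success by default
    }
}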

3.3.3 Official sample code (new version 2.4.0)

Most results you find when searching online still show the old-version style; see section 4.2.1 of this article for details.

import com.xxl.job.core.biz.model.ReturnT;
import com.xxl.job.core.context.XxlJobHelper;
import com.xxl.job.core.handler.annotation.XxlJob;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;

import java.io.BufferedInputStream;
import java.io.BufferedReader;
import java.io.DataOutputStream;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Arrays;
import java.util.concurrent.TimeUnit;

/**
 * 一个开发示例 已经更新为2.4.0新版了
 *
 * XxlJob开发示例(Bean模式)
 * <p>
 * 开发步骤:
 * 1、在Spring Bean实例中,开发Job方法,方式格式要求为 "public ReturnT<String> execute(String param)"
 * 2、为Job方法添加注解 "@XxlJob(value="自定义jobhandler名称", init = "JobHandler初始化方法", destroy = "JobHandler销毁方法")",注解value值对应的是调度中心新建任务的JobHandler属性的值。
 * 3、执行日志:需要通过 "XxlJobHelper.log" 打印执行日志;
 *
 * @author xuxueli 2019-12-11 21:52:51
 */
@Component
public class SampleXxlJob {
 private static Logger logger = LoggerFactory.getLogger(SampleXxlJob.class);

 /**
  * 1、简单任务示例(Bean模式)
  */
 @XxlJob("demoJobHandler")
 public ReturnT<String> demoJobHandler(String param) throws Exception {
  XxlJobHelper.log("XXL-JOB, Hello World.");

  for (int i = 0; i < 5; i++) {
   XxlJobHelper.log("beat at:" + i);
   TimeUnit.SECONDS.sleep(2);
  }
  return ReturnT.SUCCESS;
 }

 /**
  * 2、分片广播任务
  */
 @XxlJob("shardingJobHandler")
 public ReturnT<String> shardingJobHandler(String param) throws Exception {

  // 分片参数
//  ShardingUtil.ShardingVO shardingVO = ShardingUtil.getShardingVo(); 2.3.0 更新了
  XxlJobHelper.log("分片参数:当前分片序号 = {}, 总分片数 = {}", XxlJobHelper.getShardIndex(), XxlJobHelper.getShardTotal());

  // 业务逻辑
  for (int i = 0; i < XxlJobHelper.getShardTotal(); i++) {
   if (i == XxlJobHelper.getShardIndex()) {
    XxlJobHelper.log("第 {} 片, 命中分片开始处理", i);
   } else {
    XxlJobHelper.log("第 {} 片, 忽略", i);
   }
  }

  return ReturnT.SUCCESS;
 }

 /**
  * 3、命令行任务
  */
 @XxlJob("commandJobHandler")
 public ReturnT<String> commandJobHandler(String param) throws Exception {
  String command = param;
  int exitValue = -1;

  BufferedReader bufferedReader = null;
  try {
   // command process
   Process process = Runtime.getRuntime().exec(command);
   BufferedInputStream bufferedInputStream = new BufferedInputStream(process.getInputStream());
   bufferedReader = new BufferedReader(new InputStreamReader(bufferedInputStream));

   // command log
   String line;
   while ((line = bufferedReader.readLine()) != null) {
    XxlJobHelper.log(line);
   }

   // command exit
   process.waitFor();
   exitValue = process.exitValue();
  } catch (Exception e) {
   XxlJobHelper.log(e);
  } finally {
   if (bufferedReader != null) {
    bufferedReader.close();
   }
  }

  if (exitValue == 0) {
   return ReturnT.SUCCESS;
  } else {
   return new ReturnT<String>(ReturnT.FAIL.getCode(), "command exit value(" + exitValue + ") is failed");
  }
 }

 /**
  * 4、跨平台Http任务
  * 参数示例:
  * "url: http://www.baidu.com\n" +
  * "method: get\n" +
  * "data: content\n";
  */
 @XxlJob("httpJobHandler")
 public ReturnT<String> httpJobHandler(String param) throws Exception {

  // param parse
  if (param == null || param.trim().length() == 0) {
   XxlJobHelper.log("param[" + param + "] invalid.");
   return ReturnT.FAIL;
  }
  String[] httpParams = param.split("\n");
  String url = null;
  String method = null;
  String data = null;
  for (String httpParam : httpParams) {
   if (httpParam.startsWith("url:")) {
    url = httpParam.substring(httpParam.indexOf("url:") + 4).trim();
   }
   if (httpParam.startsWith("method:")) {
    method = httpParam.substring(httpParam.indexOf("method:") + 7).trim().toUpperCase();
   }
   if (httpParam.startsWith("data:")) {
    data = httpParam.substring(httpParam.indexOf("data:") + 5).trim();
   }
  }

  // param valid
  if (url == null || url.trim().length() == 0) {
   XxlJobHelper.log("url[" + url + "] invalid.");
   return ReturnT.FAIL;
  }
  if (method == null || !Arrays.asList("GET", "POST").contains(method)) {
   XxlJobHelper.log("method[" + method + "] invalid.");
   return ReturnT.FAIL;
  }

  // request
  HttpURLConnection connection = null;
  BufferedReader bufferedReader = null;
  try {
   // connection
   URL realUrl = new URL(url);
   connection = (HttpURLConnection) realUrl.openConnection();

   // connection setting
   connection.setRequestMethod(method);
   connection.setDoOutput(true);
   connection.setDoInput(true);
   connection.setUseCaches(false);
   connection.setReadTimeout(5 * 1000);
   connection.setConnectTimeout(3 * 1000);
   connection.setRequestProperty("connection", "Keep-Alive");
   connection.setRequestProperty("Content-Type", "application/json;charset=UTF-8");
   connection.setRequestProperty("Accept-Charset", "application/json;charset=UTF-8");

   // do connection
   connection.connect();

   // data
   if (data != null && data.trim().length() > 0) {
    DataOutputStream dataOutputStream = new DataOutputStream(connection.getOutputStream());
    dataOutputStream.write(data.getBytes("UTF-8"));
    dataOutputStream.flush();
    dataOutputStream.close();
   }

   // valid StatusCode
   int statusCode = connection.getResponseCode();
   if (statusCode != 200) {
    throw new RuntimeException("Http Request StatusCode(" + statusCode + ") Invalid.");
   }

   // result
   bufferedReader = new BufferedReader(new InputStreamReader(connection.getInputStream(), "UTF-8"));
   StringBuilder result = new StringBuilder();
   String line;
   while ((line = bufferedReader.readLine()) != null) {
    result.append(line);
   }
   String responseMsg = result.toString();

   XxlJobHelper.log(responseMsg);
   return ReturnT.SUCCESS;
  } catch (Exception e) {
   XxlJobHelper.log(e);
   return ReturnT.FAIL;
  } finally {
   try {
    if (bufferedReader != null) {
     bufferedReader.close();
    }
    if (connection != null) {
     connection.disconnect();
    }
   } catch (Exception e2) {
    XxlJobHelper.log(e2);
   }
  }

 }

 /**
  * 5、生命周期任务示例:任务初始化与销毁时,支持自定义相关逻辑;
  */
 @XxlJob(value = "demoJobHandler2", init = "init", destroy = "destroy")
 public ReturnT<String> demoJobHandler2(String param) throws Exception {
  XxlJobHelper.log("XXL-JOB, Hello World.");
  return ReturnT.SUCCESS;
 }

 public void init() {
  logger.info("init");
 }

 public void destroy() {
  logger.info("destory");
 }
}

4. Summary and Notes

4.1 Summary of this article

  • Find the documentation and source code
  • Create the database (run the SQL script)
  • Pull the Docker image
  • Run the container
  • Open http://192.168.1.100:9001/xxl-job-admin
  • Log in with admin / 123456
  • Create an executor
  • Create a task in Task Management (select BEAN run mode) and remember the JobHandler name
  • Add the xxl-job-core dependency (latest version, 2.4.0) to the Spring Boot project
  • Add the yml/properties configuration
  • Write the config class (it mainly just reads the parameters above)
  • Write the test code: @XxlJob("the JobHandler name you remembered")
  • Start the application and trigger a test run in xxl-job-admin
  • If everything works, test the cluster behavior; that is the end

4.2 Precautions

4.2.1 The handler syntax changed in version 2.3.0+

  • Version 2.3.0 changed the handler syntax; if you are on an older version, compare with the old style (a side-by-side sketch follows).
    [screenshot: old-style handler]
  • In the 2.3.1+ test code, the @XxlJob handler may be shown greyed out by the IDE; ignore that.
    [screenshot: @XxlJob test code]
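
A side-by-side sketch of the change, putting the old pre-2.3.0 signature next to the 2.3.0+/2.4.0 style used in this article (class and handler names are illustrative):

import com.xxl.job.core.biz.model.ReturnT;
import com.xxl.job.core.context.XxlJobHelper;
import com.xxl.job.core.handler.annotation.XxlJob;
import org.springframework.stereotype.Component;

@Component
public class StyleCompareTask {

    // Old style, seen in many older tutorials: the job parameter and the
    // result are part of the method signature.
    @XxlJob("oldStyleHandler")
    public ReturnT<String> oldStyleHandler(String param) throws Exception {
        XxlJobHelper.log("param = " + param);
        return ReturnT.SUCCESS;
    }

    // New style (2.3.0+ / 2.4.0): no parameter, no return value; the
    // parameter, logs and result all go through XxlJobHelper.
    @XxlJob("newStyleHandler")
    public void newStyleHandler() throws Exception {
        XxlJobHelper.log("param = {}", XxlJobHelper.getJobParam());
    }
}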

That is all for xxl-job. It is relatively simple to use, easy to manage, and easy to deploy as a cluster.

Author: pingzhuyan

Origin blog.csdn.net/pingzhuyan/article/details/132562472