[Java] Beginner Vert.x (4)

6. Redis operation + JOOQ storage

Following on from the previous chapter, this chapter covers how to read and write Redis in this project, and finally how to move the data into MySQL through JOOQ.
Using Redis in a Vert.x project is relatively simple: a single RedisUtil class is enough to hold the client configuration, as shown below:

public class RedisUtil {

    // maximum connection pool size
    private static final int MAX_POOL_SIZE = YamlUtil.getIntegerValue("redis.max-pool-size");
    // maximum number of requests waiting for a pooled connection
    private static final int MAX_POOL_WAITING = YamlUtil.getIntegerValue("redis.max-pool-waiting");
    // pool recycle timeout
    private static final int POOL_RECYCLE_TIMEOUT = YamlUtil.getIntegerValue("redis.pool-recycle-timeout");
    // maximum number of waiting handlers
    private static final int MAX_WAITING_HANDLERS = YamlUtil.getIntegerValue("redis.max-waiting-handlers");
    // connection string
    private static final String CONNECTION_STRING = YamlUtil.getStringValue("redis.connection-string");

    private RedisUtil() {
    }

    private static class SingletonInstance {
        private static final RedisUtil INSTANCE = new RedisUtil();
    }

    public static RedisUtil getInstance() {
        return SingletonInstance.INSTANCE;
    }

    /**
     * @MethodName: getConfiguration
     * @Description: Redis configuration
     * @author yuanzhenhui
     * @return RedisOptions
     * @date 2023-04-13 04:32:48
     */
    public RedisOptions getConfiguration() {
        RedisOptions options = new RedisOptions();
        options.setMaxPoolSize(MAX_POOL_SIZE);
        options.setMaxPoolWaiting(MAX_POOL_WAITING);
        options.setConnectionString(CONNECTION_STRING);
        options.setPoolRecycleTimeout(POOL_RECYCLE_TIMEOUT);
        options.setMaxWaitingHandlers(MAX_WAITING_HANDLERS);
        return options;
    }
}

The RedisUtil class only organizes the configuration information; its actual use is shown below:

public class SysUserBuriedRouter extends AbstractVerticle implements RouterSet {

    private static final Logger LOGGER = LogManager.getLogger(SysUserBuriedRouter.class);
    // HH (24-hour clock) rather than hh, so afternoon timestamps are not recorded as morning ones
    private static final SimpleDateFormat SIMPLE_DATE_FORMAT = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");

    @PropLoader(key = "server.context")
    private static String context;
    private static RedisAPI redis;

    @Override
    public void start() {
        YamlUtil.propLoadSetter(this);

        // -----------------------------------
        // create the Redis client connection
        // -----------------------------------
        Redis.createClient(vertx, RedisUtil.getInstance().getConfiguration()).connect(onConnect -> {
            if (onConnect.succeeded()) {
                redis = RedisAPI.api(onConnect.result());
            }
        });
    }

    /**
     * @MethodName: sendBuriedPointInfo
     * @Description: RESTful handler
     * @author yuanzhenhui
     * @param ctx
     *            void
     * @date 2023-04-13 05:02:52
     */
    public void sendBuriedPointInfo(RoutingContext ctx) {
        String jsonStr = ctx.getBodyAsString();
        if (!StringUtil.isNullOrEmpty(jsonStr)) {
            SysUserBuried puav = ReflectUtil.convertJson2Pojo(jsonStr, SysUserBuried.class);

            String uuid = UUID.randomUUID().toString();
            puav.setId(uuid);
            puav.setIp(IPUtil.getIpAddr(ctx));
            puav.setAccessDate(SIMPLE_DATE_FORMAT.format(new Date()));

            redis.setnx(uuid, puav.toJson().encode(), setnxResp -> {
                if (setnxResp.failed()) {
                    LOGGER.error("func[SysUserBuriedRouter.sendBuriedPointInfo] Exception [{} - {}]",
                        new Object[] {setnxResp.cause(), setnxResp.result()});
                }
            });
        }
        HttpServerResponse hsr =
            ctx.response().putHeader(CommonConstants.HTTP_CONTENT_TYPE, CommonConstants.HTTP_APPLICATION_JSON);
        hsr.end(Json.encode(new RespMsg(1, "Message received")));
    }

    /**
     * @MethodName: router4Restful
     * @Description: route registration
     * @author yuanzhenhui
     * @param router
     * @see io.kida.components.routers.RouterSet#router4Restful(io.vertx.ext.web.Router)
     * @date 2023-04-13 05:03:12
     */
    @Override
    public void router4Restful(Router router) {
        router.post(CommonConstants.HTTP_SLASH + context + CommonConstants.HTTP_SLASH + "sendBuriedPointInfo")
            .handler(this::sendBuriedPointInfo);
    }
}

Still taking SysUserBuriedRouter as the example: since Vert.x ships a Redis API component, the Redis client can be initialized directly in the start method, as shown below:

@Override
public void start() {
    YamlUtil.propLoadSetter(this);

    // -----------------------------------
    // create the Redis client connection
    // -----------------------------------
    Redis.createClient(vertx, RedisUtil.getInstance().getConfiguration()).connect(onConnect -> {
        if (onConnect.succeeded()) {
            redis = RedisAPI.api(onConnect.result());
        }
    });
}

Call the Redis.createClient method, passing the vertx instance as the first parameter and the Redis configuration obtained from the singleton as the second, then call connect and handle its onConnect callback. When onConnect succeeds, pass the result of onConnect to the RedisAPI.api method to complete the creation of the Redis client.
Since the Redis client provided by Vert.x works asynchronously, every Redis method involves a callback, as shown below:

public void sendBuriedPointInfo(RoutingContext ctx) {
    String jsonStr = ctx.getBodyAsString();
    if (!StringUtil.isNullOrEmpty(jsonStr)) {
        SysUserBuried puav = ReflectUtil.convertJson2Pojo(jsonStr, SysUserBuried.class);

        String uuid = UUID.randomUUID().toString();
        puav.setId(uuid);
        puav.setIp(IPUtil.getIpAddr(ctx));
        puav.setAccessDate(SIMPLE_DATE_FORMAT.format(new Date()));

        redis.setnx(uuid, puav.toJson().encode(), setnxResp -> {
            if (setnxResp.failed()) {
                LOGGER.error("func[SysUserBuriedRouter.sendBuriedPointInfo] Exception [{} - {}]",
                    new Object[] {setnxResp.cause(), setnxResp.result()});
            }
        });
    }
    HttpServerResponse hsr =
        ctx.response().putHeader(CommonConstants.HTTP_CONTENT_TYPE, CommonConstants.HTTP_APPLICATION_JSON);
    hsr.end(Json.encode(new RespMsg(1, "Message received")));
}

It is not hard to see that calling the Redis client is almost identical to using the CLI, so I won't go into detail here.
Because this is just a demo, the handling is deliberately simple. In the sendBuriedPointInfo method, the incoming JSON string is first converted to an object, here using the self-encapsulated ReflectUtil.convertJson2Pojo method (other libraries such as fastjson would work just as well). The converted object is then filled in with an id, the caller's IP, and the access date, serialized back to JSON, and stored in Redis.
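Since every record is keyed by a freshly generated UUID, setnx (SET if Not eXists) will practically always succeed; its guard only matters if the same key were ever written twice. A minimal stdlib-only sketch of the same put-if-absent contract (the map stands in for the Redis keyspace; all names here are hypothetical):

```java
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class SetnxSketch {

    // stand-in for the Redis keyspace
    static final ConcurrentMap<String, String> STORE = new ConcurrentHashMap<>();

    // mirrors SETNX: returns true only when the key was absent and the value was stored
    static boolean setnx(String key, String value) {
        return STORE.putIfAbsent(key, value) == null;
    }

    public static void main(String[] args) {
        String uuid = UUID.randomUUID().toString();
        boolean first = setnx(uuid, "{\"ip\":\"127.0.0.1\"}");
        boolean second = setnx(uuid, "{\"ip\":\"10.0.0.1\"}");
        System.out.println(first + " " + second); // prints "true false": the first write wins
    }
}
```

With UUID keys the second branch never fires in practice, which is why the handler only needs to log the failure case.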

The above covers writing data into Redis. So how do we extract the data from Redis and move it into MySQL through JOOQ? Before getting to the business code, let's look at the JOOQ configuration.

<project xmlns="http://maven.apache.org/POM/4.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>io.kida</groupId>
  <artifactId>buried-vtx-demo</artifactId>
  <version>1.0.0</version>
  <name>buried-vtx-demo</name>
  <description>Vert.x event-tracking demo</description>

  <properties>
    ...
    <jooq.version>3.13.2</jooq.version>
    <jooq.rx.version>4.2.0</jooq.rx.version>
    <yaml.version>1.1.3</yaml.version>
    <hikari.version>3.4.5</hikari.version>
    ...
  </properties>

  <dependencies>
    ...

    <!-- HikariCP database connection pool -->
    <dependency>
      <groupId>com.zaxxer</groupId>
      <artifactId>HikariCP</artifactId>
      <version>${hikari.version}</version>
    </dependency>

    <!-- jooq dependencies -->
    <dependency>
      <groupId>org.jooq</groupId>
      <artifactId>jooq</artifactId>
      <version>${jooq.version}</version>
    </dependency>
    <dependency>
      <groupId>io.github.jklingsporn</groupId>
      <artifactId>vertx-jooq-completablefuture-jdbc</artifactId>
      <version>${jooq.rx.version}</version>
    </dependency>
  </dependencies>

  <build>
    ...
      <!-- YAML-reading Maven plugin: lets the POM read configuration from a YAML file -->
      <plugin>
        <groupId>it.ozimov</groupId>
        <artifactId>yaml-properties-maven-plugin</artifactId>
        <version>${yaml.version}</version>
        <executions>
          <execution>
            <phase>initialize</phase>
            <goals>
              <goal>read-project-properties</goal>
            </goals>
            <configuration>
              <files>
                <!-- path of the YAML configuration file -->
                <file>src/main/resources/configs/master/application-datasource.yml</file>
              </files>
            </configuration>
          </execution>
        </executions>
      </plugin>

      <!-- jooq-codegen Maven plugin -->
      <plugin>
        <groupId>org.jooq</groupId>
        <artifactId>jooq-codegen-maven</artifactId>
        <version>${jooq.version}</version>
        <executions>
          <execution>
            <goals>
              <!-- run the generate goal -->
              <goal>generate</goal>
            </goals>
          </execution>
        </executions>

        <!-- dependency required by the generator -->
        <dependencies>
          <dependency>
            <groupId>io.github.jklingsporn</groupId>
            <artifactId>vertx-jooq-generate</artifactId>
            <version>${jooq.rx.version}</version>
          </dependency>
        </dependencies>

        <!-- reverse-engineering configuration -->
        <configuration>
          <jdbc>
            <driver>${datasource.driver-class-name}</driver>
            <url>${datasource.url-head}//${datasource.host}:${datasource.port}/${datasource.database-name}?useUnicode=${datasource.use-unicode}%26characterEncoding=${datasource.character-encoding}%26useSSL=${datasource.ssl-enable}%26serverTimezone=${datasource.server-timezone}</url>
            <user>${datasource.username}</user>
            <password>${datasource.password}</password>
          </jdbc>
          <generator>
            <name>io.github.jklingsporn.vertx.jooq.generate.completablefuture.CompletableFutureJDBCVertxGenerator</name>
            <database>
              <name>${datasource.jooq-name}</name>
              <includes>.*</includes>
              <excludes>flyway_schema_history</excludes>
              <inputSchema>${datasource.database-name}</inputSchema>
            </database>
            <generate>
              <pojos>true</pojos>
              <javaTimeTypes>true</javaTimeTypes>
              <daos>true</daos>
              <fluentSetters>true</fluentSetters>
            </generate>
            <target>
              <packageName>${datasource.package-path}</packageName>
              <directory>src/main/java</directory>
            </target>
            <strategy>
              <name>io.github.jklingsporn.vertx.jooq.generate.VertxGeneratorStrategy</name>
            </strategy>
          </generator>
        </configuration>
      </plugin>
      ...
    </plugins>
  </build>
</project>

From the pom.xml above we can see that JOOQ talks to MySQL through the HikariCP connection pool, and that the jooq-codegen-maven plugin is used to reverse-engineer the database into entity classes. When you build with mvn clean package you will see output like the following:

[INFO] Scanning for projects...
[INFO] 
...
[INFO] ----------------------------------------------------------
[INFO]   Thank you for using jOOQ and jOOQ's code generator
[INFO]                          
[INFO] Database parameters      
[INFO] ----------------------------------------------------------
[INFO]   dialect                : MYSQL
[INFO]   URL                    : jdbc:mysql://127.0.0.1:3506/tools?useUnicode=true%26characterEncoding=utf8%26useSSL=false%26serverTimezone=UTC
[INFO]   target dir             : /Users/yuanzhenhui/Documents/code_space/github/buried-vtx-demo/src/main/java
[INFO]   target package         : io.kida.model
[INFO]   includes               : [.*]
[INFO]   excludes               : [flyway_schema_history]
[INFO]   includeExcludeColumns  : false
[INFO] ----------------------------------------------------------
[INFO]                          
[INFO] JavaGenerator parameters 
[INFO] ----------------------------------------------------------
[INFO]   annotations (generated): false
[INFO]   annotations (JPA: any) : false
[INFO]   annotations (JPA: version): 
[INFO]   annotations (validation): false
[INFO]   comments               : true
[INFO]   comments on attributes : true
[INFO]   comments on catalogs   : true
[INFO]   comments on columns    : true
[INFO]   comments on keys       : true
[INFO]   comments on links      : true
[INFO]   comments on packages   : true
[INFO]   comments on parameters : true
[INFO]   comments on queues     : true
[INFO]   comments on routines   : true
[INFO]   comments on schemas    : true
[INFO]   comments on sequences  : true
[INFO]   comments on tables     : true
[INFO]   comments on udts       : true
[INFO]   sources                : true
[INFO]   sources on views       : true
[INFO]   daos                   : true
[INFO]   deprecated code        : true
[INFO]   global references (any): true
[INFO]   global references (catalogs): true
[INFO]   global references (keys): true
[INFO]   global references (links): true
[INFO]   global references (queues): true
[INFO]   global references (routines): true
[INFO]   global references (schemas): true
[INFO]   global references (sequences): true
[INFO]   global references (tables): true
[INFO]   global references (udts): true
[INFO]   indexes                : true
[INFO]   instance fields        : true
[INFO]   interfaces             : false
[INFO]   interfaces (immutable) : false
[INFO]   javadoc                : true
[INFO]   keys                   : true
[INFO]   links                  : true
[INFO]   pojos                  : true
[INFO]   pojos (immutable)      : false
[INFO]   queues                 : true
[INFO]   records                : true
[INFO]   routines               : true
[INFO]   sequences              : true
[INFO]   sequenceFlags          : true
[INFO]   table-valued functions : true
[INFO]   tables                 : true
[INFO]   udts                   : true
[INFO]   relations              : true
[INFO] ----------------------------------------------------------
[INFO]                          
[INFO] Generation remarks       
[INFO] ----------------------------------------------------------
[INFO]                          
[INFO] ----------------------------------------------------------
[INFO] Generating catalogs      : Total: 1
[INFO] 

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@  @@        @@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@        @@@@@@@@@@
@@@@@@@@@@@@@@@@  @@  @@    @@@@@@@@@@
@@@@@@@@@@  @@@@  @@  @@    @@@@@@@@@@
@@@@@@@@@@        @@        @@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@        @@        @@@@@@@@@@
@@@@@@@@@@    @@  @@  @@@@  @@@@@@@@@@
@@@@@@@@@@    @@  @@  @@@@  @@@@@@@@@@
@@@@@@@@@@        @@  @  @  @@@@@@@@@@
@@@@@@@@@@        @@        @@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@  @@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@  Thank you for using jOOQ 3.13.2

[INFO] ARRAYs fetched           : 0 (0 included, 0 excluded)
[INFO] Enums fetched            : 0 (0 included, 0 excluded)
[INFO] Packages fetched         : 0 (0 included, 0 excluded)
[INFO] Routines fetched         : 0 (0 included, 0 excluded)
[INFO] Sequences fetched        : 0 (0 included, 0 excluded)
[INFO] Tables fetched           : 15 (15 included, 0 excluded)
[INFO] No schema version is applied for catalog . Regenerating.
[INFO]                          
[INFO] Generating catalog       : DefaultCatalog.java
[INFO] ==========================================================
[INFO] Generating schemata      : Total: 1
[INFO] No schema version is applied for schema tools. Regenerating.
[INFO] Generating schema        : Tools.java
[INFO] ----------------------------------------------------------
[INFO] UDTs fetched             : 0 (0 included, 0 excluded)
[INFO] Generating tables        
[INFO] Synthetic primary keys   : 0 (0 included, 0 excluded)
[INFO] Overriding primary keys  : 19 (0 included, 19 excluded)
[INFO] Generating table         : CmmItembank.java [input=cmm_itembank, output=cmm_itembank, pk=KEY_cmm_itembank_PRIMARY]
[INFO] Embeddables fetched      : 0 (0 included, 0 excluded)
[INFO] Indexes fetched          : 16 (16 included, 0 excluded)
...
[INFO] Generating table         : SysUserBuried.java [input=sys_user_buried, output=sys_user_buried, pk=KEY_sys_user_buried_PRIMARY]
...
[INFO] Tables generated         : Total: 10.023s
[INFO] Generating table POJOs   
...
[INFO] Generating POJO          : SysUserBuried.java
...
[INFO] Table POJOs generated    : Total: 10.169s, +146.329ms
[INFO] Generating DAOs          
...
[INFO] Generating DAO           : SysUserBuriedDao.java
...
[INFO] Table DAOs generated     : Total: 10.215s, +45.81ms
[INFO] Generating table references
[INFO] Table refs generated     : Total: 10.22s, +4.468ms
[INFO] Generating Keys          
[INFO] Keys generated           : Total: 10.231s, +11.205ms
[INFO] Generating Indexes       
[INFO] Indexes generated        : Total: 10.235s, +4.645ms
[INFO] Generating table records 
...
[INFO] Generating record        : SysUserBuriedRecord.java
...
[INFO] Table records generated  : Total: 10.356s, +120.296ms
[INFO] Domains fetched          : 0 (0 included, 0 excluded)
[INFO] Generation finished: tools: Total: 10.36s, +4.609ms
[INFO]                          
[INFO] Removing excess files    
...

After the Maven build finishes, the automatically generated files are laid out as follows:

.
|-- DefaultCatalog.java
|-- Indexes.java
|-- Keys.java
|-- Tables.java
|-- Tools.java
`-- tables
    ...
    |-- SysUserBuried.java
    |-- daos
    |   ...
    |   `-- SysUserBuriedDao.java
    |-- pojos
    |   ...
    |   `-- SysUserBuried.java
    `-- records
        ...
        `-- SysUserBuriedRecord.java

Then we can configure the HikariCP data source and add it to the JOOQ configuration, as shown below:

public class JOOQUtil {
    private static final String HOST = YamlUtil.getStringValue("datasource.host");
    private static final String DATABASE = YamlUtil.getStringValue("datasource.database-name");
    private static final String USE_UNICODE = YamlUtil.getStringValue("datasource.use-unicode");
    private static final String CHARACTER_ENCODING = YamlUtil.getStringValue("datasource.character-encoding");
    private static final String SERVER_TIMEZONE = YamlUtil.getStringValue("datasource.server-timezone");
    private static final String URL_HEAD = YamlUtil.getStringValue("datasource.url-head");
    private static final String DRIVER_CLASS_NAME = YamlUtil.getStringValue("datasource.driver-class-name");
    private static final String PASSWORD = YamlUtil.getStringValue("datasource.password");
    private static final String USERNAME = YamlUtil.getStringValue("datasource.username");
    private static final String CONNECTION_TEST_QUERY =
        YamlUtil.getStringValue("datasource.hikari.connection-test-query");
    private static final String POOL_NAME = YamlUtil.getStringValue("datasource.hikari.pool-name");

    private static final int MINIMUM_IDLE = YamlUtil.getIntegerValue("datasource.hikari.minimum-idle");
    private static final int PORT = YamlUtil.getIntegerValue("datasource.port");
    private static final int MAXIMUM_POOL_SIZE = YamlUtil.getIntegerValue("datasource.hikari.maximum-pool-size");
    private static final int IDLE_TIMEOUT = YamlUtil.getIntegerValue("datasource.hikari.idle-timeout");
    private static final int CONNECTION_TIMEOUT = YamlUtil.getIntegerValue("datasource.hikari.connection-timeout");
    private static final int PREP_STMT_CACHE_SIZE = YamlUtil.getIntegerValue("datasource.hikari.prep-stmt-cache-size");
    private static final int PREP_STMT_CACHE_SQL_LIMIT =
        YamlUtil.getIntegerValue("datasource.hikari.prep-stmt-cache-sql-limit");

    private static final boolean SSL = YamlUtil.getBooleanValue("datasource.ssl-enable");
    private static final boolean IS_AUTO_COMMIT = YamlUtil.getBooleanValue("datasource.hikari.is-auto-commit");
    private static final boolean ALLOW_POOL_SUSPENSION =
        YamlUtil.getBooleanValue("datasource.hikari.allow-pool-suspension");
    private static final boolean CACHE_PREP_STMTS = YamlUtil.getBooleanValue("datasource.hikari.cache-prep-stmts");
    private static final boolean USE_SERVER_PREP_STMTS =
        YamlUtil.getBooleanValue("datasource.hikari.use-server-prep-stmts");
    private static final boolean USE_LOCAL_SESSION_STATE =
        YamlUtil.getBooleanValue("datasource.hikari.use-local-session-state");
    private static final boolean REWRITE_BATCHED_STATEMENTS =
        YamlUtil.getBooleanValue("datasource.hikari.rewrite-batched-statements");
    private static final boolean CACHE_RESULT_SET_METADATA =
        YamlUtil.getBooleanValue("datasource.hikari.cache-result-set-metadata");
    private static final boolean CACHE_SERVER_CONFIGURATION =
        YamlUtil.getBooleanValue("datasource.hikari.cache-server-configuration");
    private static final boolean ELIDE_SET_AUTO_COMMITS =
        YamlUtil.getBooleanValue("datasource.hikari.elide-set-auto-commits");
    private static final boolean MAINTAIN_TIME_STATS =
        YamlUtil.getBooleanValue("datasource.hikari.maintain-time-stats");
    private static final boolean ALLOW_PUBLIC_KEY_RETRIEVAL =
        YamlUtil.getBooleanValue("datasource.hikari.allow-public-key-retrieval");

    private static final String URL =
        URL_HEAD + "//" + HOST + ":" + PORT + CommonConstants.HTTP_SLASH + DATABASE + "?useUnicode=" + USE_UNICODE
            + "&characterEncoding=" + CHARACTER_ENCODING + "&useSSL=" + SSL + "&serverTimezone=" + SERVER_TIMEZONE;

    private JOOQUtil() {
    }

    /**
     * @MethodName: getConfiguration
     * @Description: obtain the configuration instance and set the database dialect for JOOQ
     * @author yuanzhenhui
     * @return Configuration
     * @date 2023-04-13 04:31:36
     */
    public Configuration getConfiguration() {
        Configuration configuration = new DefaultConfiguration();
        configuration.set(SQLDialect.MYSQL);
        configuration.set(getHikariCPDataProvider());
        return configuration;
    }

    private static class SingletonInstance {
        private static final JOOQUtil INSTANCE = new JOOQUtil();
        private static final ConnectionProvider provider = new DataSourceConnectionProvider(getHikariCPDataSource());
    }

    public static JOOQUtil getInstance() {
        return SingletonInstance.INSTANCE;
    }

    public static final ConnectionProvider getHikariCPDataProvider() {
        return SingletonInstance.provider;
    }

    /**
     * @MethodName: getHikariCPDataSource
     * @Description: connection pool configuration
     * @author yuanzhenhui
     * @return DataSource
     * @date 2023-04-13 04:31:48
     */
    private static DataSource getHikariCPDataSource() {
        HikariConfig hikariConfig = new HikariConfig();
        hikariConfig.setJdbcUrl(URL);
        hikariConfig.setDriverClassName(DRIVER_CLASS_NAME);
        hikariConfig.setUsername(USERNAME);
        hikariConfig.setPassword(PASSWORD);
        hikariConfig.setAutoCommit(IS_AUTO_COMMIT);
        hikariConfig.setAllowPoolSuspension(ALLOW_POOL_SUSPENSION);
        hikariConfig.setConnectionTestQuery(CONNECTION_TEST_QUERY);
        hikariConfig.setPoolName(POOL_NAME);
        hikariConfig.setMinimumIdle(MINIMUM_IDLE);
        hikariConfig.setMaximumPoolSize(MAXIMUM_POOL_SIZE);
        hikariConfig.setIdleTimeout(IDLE_TIMEOUT);
        hikariConfig.setConnectionTimeout(CONNECTION_TIMEOUT);
        hikariConfig.addDataSourceProperty("cachePrepStmts", CACHE_PREP_STMTS);
        hikariConfig.addDataSourceProperty("prepStmtCacheSize", PREP_STMT_CACHE_SIZE);
        hikariConfig.addDataSourceProperty("prepStmtCacheSqlLimit", PREP_STMT_CACHE_SQL_LIMIT);
        hikariConfig.addDataSourceProperty("useServerPrepStmts", USE_SERVER_PREP_STMTS);
        hikariConfig.addDataSourceProperty("useLocalSessionState", USE_LOCAL_SESSION_STATE);
        hikariConfig.addDataSourceProperty("useSSL", SSL);
        hikariConfig.addDataSourceProperty("serverTimezone", SERVER_TIMEZONE);
        hikariConfig.addDataSourceProperty("rewriteBatchedStatements", REWRITE_BATCHED_STATEMENTS);
        hikariConfig.addDataSourceProperty("cacheResultSetMetadata", CACHE_RESULT_SET_METADATA);
        hikariConfig.addDataSourceProperty("cacheServerConfiguration", CACHE_SERVER_CONFIGURATION);
        hikariConfig.addDataSourceProperty("elideSetAutoCommits", ELIDE_SET_AUTO_COMMITS);
        hikariConfig.addDataSourceProperty("maintainTimeStats", MAINTAIN_TIME_STATS);
        hikariConfig.addDataSourceProperty("allowPublicKeyRetrieval", ALLOW_PUBLIC_KEY_RETRIEVAL);
        return new HikariDataSource(hikariConfig);
    }
}
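The URL constant in JOOQUtil concatenates the YAML values into a standard MySQL JDBC URL. A small self-contained sketch with hypothetical sample values shows the shape of the result:

```java
public class JdbcUrlSketch {

    // mirrors the string concatenation in JOOQUtil; all values passed in are hypothetical samples
    static String buildUrl(String urlHead, String host, int port, String database,
        String useUnicode, String encoding, boolean ssl, String timezone) {
        return urlHead + "//" + host + ":" + port + "/" + database
            + "?useUnicode=" + useUnicode
            + "&characterEncoding=" + encoding
            + "&useSSL=" + ssl
            + "&serverTimezone=" + timezone;
    }

    public static void main(String[] args) {
        // prints "jdbc:mysql://127.0.0.1:3306/tools?useUnicode=true&characterEncoding=utf8&useSSL=false&serverTimezone=UTC"
        System.out.println(buildUrl("jdbc:mysql:", "127.0.0.1", 3306, "tools", "true", "utf8", false, "UTC"));
    }
}
```

The same parameters appear URL-style (%26 for the separator) in the jooq-codegen configuration of the pom.xml, so the generator and the runtime pool point at the same database.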

At this point all the JOOQ configuration is complete. How is it used? Much like Redis; the following example illustrates:

public class SysUserBuriedService extends AbstractVerticle {

    private static final Logger LOGGER = LogManager.getLogger(SysUserBuriedService.class);

    // number of rows handled per batch
    @PropLoader(key = "redis.batch-num")
    private static int batchNum;

    private static SysUserBuriedDao subDao;
    private static RedisAPI redis;

    private List<String> paramList;
    private String blockNum = "0";

    @Override
    public void start() {
        YamlUtil.propLoadSetter(this);
        subDao = new SysUserBuriedDao(JOOQUtil.getInstance().getConfiguration(), vertx);

        // -----------------------------------
        // create the Redis client connection
        // -----------------------------------
        Redis.createClient(vertx, RedisUtil.getInstance().getConfiguration()).connect(onConnect -> {
            if (onConnect.succeeded()) {
                redis = RedisAPI.api(onConnect.result());
            }
        });
    }

    /**
     * @MethodName: cron2SaveRedisAccess
     * @Description: periodically persist the Redis data
     * @author yuanzhenhui void
     * @date 2023-04-13 05:04:00
     */
    public void cron2SaveRedisAccess() {
        LOGGER.debug("func SysUserBuriedService.cron2SaveRedisAccess has begun!! ");
        paramList = new ArrayList<>();
        paramList.add(blockNum);
        paramList.add("COUNT");
        paramList.add(String.valueOf(batchNum));
        redis.scan(paramList, this::redisScanToGet);
    }

    /**
     * @MethodName: redisScanToGet
     * @Description: scan Redis and fetch the matching data
     * @author yuanzhenhui
     * @param scanResp
     *            void
     * @date 2023-04-13 05:04:16
     */
    private void redisScanToGet(AsyncResult<Response> scanResp) {
        if (scanResp.succeeded()) {

            // -----------------------------------
            // read the cursor for the next batch
            // -----------------------------------
            blockNum = scanResp.result().get(0).toString();

            // -----------------------------------
            // read the set of matching keys
            // -----------------------------------
            Response keyArrResp = scanResp.result().get(1);
            if (null != keyArrResp) {
                paramList = keyArrResp.stream().map(Response::toString).collect(Collectors.toList());

                // -----------------------------------
                // fetch all keys at once with mget
                // -----------------------------------
                redis.mget(paramList, this::databaseToInsert);
            }
        } else {
            LOGGER.error("func[SysUserBuriedService.redisScanToGet] Exception [{} - {}]",
                new Object[] {scanResp.cause(), scanResp.result()});
        }
    }

    /**
     * @MethodName: databaseToInsert
     * @Description: insert the data into the database
     * @author yuanzhenhui
     * @param mgetResp
     *            void
     * @date 2022-08-10 06:21:09
     */
    private void databaseToInsert(AsyncResult<Response> mgetResp) {

        // -----------------------------------
        // once the asynchronous fetch succeeds
        // -----------------------------------
        if (mgetResp.succeeded()) {
            Response mgetArrResp = mgetResp.result();
            if (null != mgetArrResp) {
                List<SysUserBuried> puaList = mgetArrResp.stream()
                    .map(mget -> new SysUserBuried(new JsonObject(mget.toString()))).collect(Collectors.toList());

                // ---------------------------------------------
                // batch-insert the collected rows into the database
                // ---------------------------------------------
                if (null != puaList && !puaList.isEmpty()) {
                    subDao.insert(puaList, true).whenCompleteAsync((opt, e) -> {
                        if (null == e) {

                            // ---------------------------------------------
                            // remove the persisted keys from Redis via unlink
                            // ---------------------------------------------
                            redis.unlink(paramList, this::redisUnlinkDelete);
                        } else {
                            LOGGER.error(
                                "func[SysUserBuriedService.databaseToInsert mysql inserted] Exception [{} - {}] stackTrace[{}] ",
                                new Object[] {e.getCause(), e.getMessage(), Arrays.deepToString(e.getStackTrace())});
                        }
                    });
                }
            }
        }
    }

    /**
     * @MethodName: redisUnlinkDelete
     * @Description: delete the processed keys from Redis via unlink
     * @author yuanzhenhui
     * @param unlinkResp
     *            void
     * @date 2022-08-10 06:19:07
     */
    private void redisUnlinkDelete(AsyncResult<Response> unlinkResp) {
        if (unlinkResp.failed()) {
            LOGGER.error("func[SysUserBuriedService.redisUnlinkDelete redis unlink key] Exception [{} - {}]",
                new Object[] {unlinkResp.cause(), unlinkResp.result()});
        } else {
            LOGGER.debug("func SysUserBuriedService.redisUnlinkDelete has ended!! ");
        }
    }
}

As the listing above shows, to use JOOQ we instantiate the generated DAO in the start method:

@Override
public void start() {
    ...
    subDao = new SysUserBuriedDao(JOOQUtil.getInstance().getConfiguration(), vertx);
    ...
}
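The completablefuture flavor of vertx-jooq means the generated DAO methods hand back a CompletableFuture, which is why the batch insert later in this class is consumed with whenCompleteAsync((result, throwable) -> ...). A stdlib-only sketch of that completion contract, with plain CompletableFutures standing in for the DAO calls (the method names here are hypothetical):

```java
import java.util.concurrent.CompletableFuture;

public class WhenCompleteSketch {

    // classify a completed future the way databaseToInsert does:
    // a null throwable means success, a non-null one means failure
    static String outcome(CompletableFuture<Integer> insert) {
        StringBuilder result = new StringBuilder();
        // whenComplete runs synchronously here because the future is already done
        insert.whenComplete((rows, e) -> {
            if (e == null) {
                result.append("inserted ").append(rows);          // success branch: safe to unlink the Redis keys
            } else {
                result.append("failed: ").append(e.getMessage()); // failure branch: log and keep the keys
            }
        });
        return result.toString();
    }

    public static void main(String[] args) {
        System.out.println(outcome(CompletableFuture.completedFuture(3))); // prints "inserted 3"
    }
}
```

In the real service the *Async* variant is used, so the success and failure branches run on another thread rather than the event loop.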

Its usage is very similar to Redis, so I won't go into detail here. Next, the cron2SaveRedisAccess method is invoked by a timer. Remember the timer mentioned in the first chapter?

public static void main(String[] args) {
    BootstrapConfig.setupAndDeploy(vtx -> {
        LOGGER.info(" --------------- timer started --------------- ");
        SysUserBuriedService userAccess = new SysUserBuriedService();
        vtx.setPeriodic(CommonConstants.CRON, id -> userAccess.cron2SaveRedisAccess());
    });
}

The timer simply executes the cron2SaveRedisAccess method. Inside it, the Redis client is called with a parameter list to obtain the next scanned batch of keys, as shown below:

public void cron2SaveRedisAccess() {
    LOGGER.debug("func SysUserBuriedService.cron2SaveRedisAccess has begun!! ");
    paramList = new ArrayList<>();
    paramList.add(blockNum);
    paramList.add("COUNT");
    paramList.add(String.valueOf(batchNum));
    redis.scan(paramList, this::redisScanToGet);
}
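The parameter list mirrors the CLI form of the command, SCAN &lt;cursor&gt; COUNT &lt;count&gt;: it is simply the command's arguments in order. A tiny sketch (the batch size of 500 is a hypothetical sample):

```java
import java.util.ArrayList;
import java.util.List;

public class ScanArgsSketch {

    // builds the argument list exactly as cron2SaveRedisAccess does
    static List<String> scanArgs(String cursor, int batchNum) {
        List<String> args = new ArrayList<>();
        args.add(cursor);                   // where to resume scanning ("0" starts a fresh pass)
        args.add("COUNT");                  // option name, exactly as typed in the CLI
        args.add(String.valueOf(batchNum)); // batch size hint
        return args;
    }

    public static void main(String[] args) {
        System.out.println("SCAN " + String.join(" ", scanArgs("0", 500))); // prints "SCAN 0 COUNT 500"
    }
}
```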

The scan method also takes a callback. For readability it is split out into a separate method, redisScanToGet, shown below:

private void redisScanToGet(AsyncResult<Response> scanResp) {
    if (scanResp.succeeded()) {

        // ---------------
        // Get the cursor
        // ---------------
        blockNum = scanResp.result().get(0).toString();

        // -------------------
        // Get the set of keys
        // -------------------
        Response keyArrResp = scanResp.result().get(1);
        if (null != keyArrResp) {
            paramList = keyArrResp.stream().map(Response::toString).collect(Collectors.toList());

            // -------------------------------------
            // Fetch multiple keys at once with mget
            // -------------------------------------
            redis.mget(paramList, this::databaseToInsert);
        }
    } else {
        LOGGER.error("func[SysUserBuriedService.redisScanToGet] Exception [{} - {}]",
            new Object[] {scanResp.cause(), scanResp.result()});
    }
}

The method branches on the callback result. If the callback succeeded, it first reads the cursor from scanResp.result(), then reads the key set as a Response. If that Response is not empty, it is converted into a List of strings, which is then passed to Redis's mget method to fetch the values in a second round trip. mget likewise takes a callback, which is again split out into its own method, databaseToInsert, shown below:
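One thing worth noting: SCAN is cursor-based, so a single call only returns one batch of keys; a complete traversal has to feed the returned cursor back into SCAN until the server returns "0" again. A synchronous sketch of that contract, with a fake scanOnce standing in for the real Redis call:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class ScanCursorDemo {

    // Fake SCAN pages: each reply is [nextCursor, key, key, ...]
    private static final Map<String, List<String>> PAGES = Map.of(
        "0", List.of("5", "k1", "k2"),
        "5", List.of("0", "k3"));

    static List<String> scanOnce(String cursor) {
        return PAGES.get(cursor);
    }

    // Keeps scanning until the server returns cursor "0" again
    static List<String> scanAll() {
        List<String> keys = new ArrayList<>();
        String cursor = "0";
        do {
            List<String> reply = scanOnce(cursor);
            cursor = reply.get(0);                       // element 0: next cursor
            keys.addAll(reply.subList(1, reply.size())); // the rest: keys
        } while (!"0".equals(cursor));
        return keys;
    }

    public static void main(String[] args) {
        System.out.println(scanAll()); // [k1, k2, k3]
    }
}
```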

private void databaseToInsert(AsyncResult<Response> mgetResp) {

    // -----------------------------------
    // After the async fetch has succeeded
    // -----------------------------------
    if (mgetResp.succeeded()) {
        Response mgetArrResp = mgetResp.result();
        if (null != mgetArrResp) {
            List<SysUserBuried> puaList = mgetArrResp.stream()
                .map(mget -> new SysUserBuried(new JsonObject(mget.toString()))).collect(Collectors.toList());

            // -------------------------------------------------
            // Batch-insert the collected list into the database
            // -------------------------------------------------
            if (null != puaList && !puaList.isEmpty()) {
                subDao.insert(puaList, true).whenCompleteAsync((opt, e) -> {
                    if (null == e) {

                        // -----------------------------------------------
                        // Use unlink to clear the processed keys in Redis
                        // -----------------------------------------------
                        redis.unlink(paramList, this::redisUnlinkDelete);
                    } else {
                        LOGGER.error(
                            "func[SysUserBuriedService.databaseToInsert mysql inserted] Exception [{} - {}] stackTrace[{}] ",
                            new Object[] {e.getCause(), e.getMessage(), Arrays.deepToString(e.getStackTrace())});
                    }
                });
            }
        }
    }
}

Similarly, after obtaining the mget result, the method first checks for success. On success it traverses the Response collection, rearranges it into a collection of SysUserBuried objects, and then saves that collection to the database. Because the JOOQ DAO was already injected in the start method, it can be used directly here:

subDao.insert(puaList, true).whenCompleteAsync((opt, e)

to perform the batch insert. The vertx-jooq binding used here exposes CompletableFuture-based asynchronous processing, as declared by this dependency:

...
<dependency>
  <groupId>io.github.jklingsporn</groupId>
  <artifactId>vertx-jooq-completablefuture-jdbc</artifactId>
  <version>${jooq.rx.version}</version>
</dependency>
...

So the result can be handled via whenCompleteAsync. Once the insert succeeds, the processed keys are removed from Redis with unlink. The redisUnlinkDelete method shown at the top is just a callback that logs the outcome; it performs no substantive work.
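The whenCompleteAsync callback receives both the result and the throwable, exactly one of which is non-null, which is why the code above branches on null == e. A standalone sketch of that pattern with a plain CompletableFuture (the class and method names are illustrative):

```java
import java.util.concurrent.CompletableFuture;

public class WhenCompleteDemo {

    // Mirrors subDao.insert(...).whenCompleteAsync((opt, e) -> ...):
    // on success e is null; on failure the result is null and e holds the cause.
    static String handle(CompletableFuture<Integer> insert) {
        StringBuilder log = new StringBuilder();
        insert.whenCompleteAsync((rows, e) -> {
            if (e == null) {
                log.append("inserted=").append(rows);         // success path
            } else {
                log.append("failed=").append(e.getMessage()); // failure path
            }
        }).join(); // wait for the callback stage to finish
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(handle(CompletableFuture.completedFuture(42))); // inserted=42
    }
}
```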

At this point, all of the JOOQ database operations and Redis operations have been demonstrated, and this simple demo forms a closed data loop. More Vert.x content will be summarized later.

Origin blog.csdn.net/kida_yuan/article/details/131785323