Using Seata in a microservice project with Nacos as the configuration center (a detailed walkthrough)

1. Environment

Nacos 2.2.1
RuoYi microservice version 3.6.2
Seata 1.6.1

2. Download and install seata

If you have already downloaded it, you can skip this step. The version downloaded here is Seata 1.6.1.

1. Open the official Seata website

address:

https://seata.io/zh-cn/index.html

2. Go to the download page


3. Open the download link

download link:

https://github.com/seata/seata

Downloading from GitHub can be slow; you can instead use the network disk link provided by RuoYi:

https://pan.baidu.com/s/1E9J52g6uW_VFWY34fHL6zA Extraction code: vneh


3. Switch the configuration center to Nacos

Seata is not like Redis, where you simply double-click to start it and you are done. If you want to use Nacos as the configuration center and have it take effect in the microservice project, some configuration is needed first.

The downloaded archive is "seata-server-1.6.1.zip"; after extracting it you get a folder named "seata".

1. Configure the yml file

Open the "conf" directory under the "seata" folder.

Open the "application.yml" file, and also open "application.example.yml" for reference.


Since we are switching the configuration center to Nacos, you only need to look at the "seata.config.nacos" and "seata.registry.nacos" sections of "application.example.yml".


The final configuration, in full:

server:
  port: 7091

spring:
  application:
    name: seata-server

logging:
  config: classpath:logback-spring.xml
  file:
    path: ${user.home}/logs/seata
  extend:
    logstash-appender:
      destination: 127.0.0.1:4560
    kafka-appender:
      bootstrap-servers: 127.0.0.1:9092
      topic: logback_to_logstash

console:
  user:
    username: seata
    password: seata

seata:
  config:
    # support: nacos, consul, apollo, zk, etcd3
    type: nacos
    nacos:
      server-addr: 127.0.0.1:8848  # Nacos address
      group: SEATA_GROUP           # group of the configuration
      username: nacos              # Nacos username
      password: nacos              # Nacos password
      # this is the default value
      data-id: seata.properties    # data id of the configuration, i.e. the file name with its suffix
  registry:
    # support: nacos, eureka, redis, zk, consul, etcd3, sofa
    type: nacos
    nacos:
      application: seata-server    # service name registered in Nacos after Seata starts
      server-addr: 127.0.0.1:8848  # Nacos address
      group: SEATA_GROUP           # group of the registration
      cluster: default             # each microservice refers to this parameter when connecting to Seata
      username: nacos              # Nacos username
      password: nacos              # Nacos password
  store:
    # support: file, db, redis
    mode: file
#  server:
#    service-port: 8091  # If not configured, the default is '${server.port} + 1000'
  security:
    secretKey: SeataSecretKey0c382ef121d778043159209298fd40bf3850a017
    tokenValidityInMilliseconds: 1800000
    ignore:
      urls: /,/**/*.css,/**/*.js,/**/*.html,/**/*.map,/**/*.svg,/**/*.png,/**/*.ico,/console-fe/public/**,/api/v1/auth/login
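One detail worth noting from the commented-out "server.service-port" above: the port that microservice clients later connect to is not "server.port" (7091, the console), but defaults to "server.port" plus 1000. A trivial check:

```python
# The commented-out service-port above defaults to server.port + 1000.
console_port = 7091                      # server.port in application.yml
service_port = console_port + 1000       # the port clients connect to
print(service_port)                      # 8091
```

This is why 127.0.0.1:8091 appears later as the Seata server address in the client yml.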

2. Add the Seata configuration in Nacos

(1) Create a new configuration

Then fill in the following data.
Note:

the Data ID is the value of "seata.config.nacos.data-id" in "application.yml" ("seata.properties");
the Group is the value of "seata.config.nacos.group" in "application.yml" ("SEATA_GROUP");
the configuration content comes from the "config.txt" file in the "script/config-center" directory of the Seata download.

In "config.txt", find the configuration related to MySQL; this part alone is enough.

This is the specific configuration content:

store.db.datasource=druid
store.db.dbType=mysql
store.db.driverClassName=com.mysql.jdbc.Driver
store.db.url=jdbc:mysql://127.0.0.1:3306/seata?useUnicode=true&rewriteBatchedStatements=true
store.db.user=username
store.db.password=password
store.db.minConn=5
store.db.maxConn=30
store.db.globalTable=global_table
store.db.branchTable=branch_table
store.db.distributedLockTable=distributed_lock
store.db.queryLimit=100
store.db.lockTable=lock_table
store.db.maxWait=5000

Remember to change the database, username and password to your own.

I am integrating this into the microservice version of RuoYi, so I add a little extra. Mine looks like this:

# the following two lines are added on top of the original configuration
service.vgroupMapping.ruoyi-system-group=default
store.mode=db
store.db.datasource=druid
store.db.dbType=mysql
store.db.driverClassName=com.mysql.cj.jdbc.Driver
store.db.url=jdbc:mysql://127.0.0.1:3306/cj-seata?useUnicode=true
store.db.user=root
store.db.password=123456
store.db.minConn=5
store.db.maxConn=30
store.db.globalTable=global_table
store.db.branchTable=branch_table
store.db.queryLimit=100
store.db.lockTable=lock_table
store.db.maxWait=5000

(2) The Seata database

This is the database referenced in the configuration above. The corresponding SQL scripts are provided in the Seata download.

The SQL scripts are in the "script/server/db" directory of the Seata download.

First open MySQL and create a new database. This is mine:

CREATE DATABASE `cj-seata`;

Then open the SQL file and execute the statements in it. The official documentation also notes that Seata's AT mode requires an "undo_log" table; it is not in this SQL file, but it is provided on the official website, so add it as well.

Finally, apart from creating the database, the complete SQL to execute is as follows:

-- -------------------------------- The script used when storeMode is 'db' --------------------------------
-- the table to store GlobalSession data
CREATE TABLE IF NOT EXISTS `global_table`
(
    `xid`                       VARCHAR(128) NOT NULL,
    `transaction_id`            BIGINT,
    `status`                    TINYINT      NOT NULL,
    `application_id`            VARCHAR(32),
    `transaction_service_group` VARCHAR(32),
    `transaction_name`          VARCHAR(128),
    `timeout`                   INT,
    `begin_time`                BIGINT,
    `application_data`          VARCHAR(2000),
    `gmt_create`                DATETIME,
    `gmt_modified`              DATETIME,
    PRIMARY KEY (`xid`),
    KEY `idx_status_gmt_modified` (`status` , `gmt_modified`),
    KEY `idx_transaction_id` (`transaction_id`)
) ENGINE = InnoDB
  DEFAULT CHARSET = utf8mb4;

-- the table to store BranchSession data
CREATE TABLE IF NOT EXISTS `branch_table`
(
    `branch_id`         BIGINT       NOT NULL,
    `xid`               VARCHAR(128) NOT NULL,
    `transaction_id`    BIGINT,
    `resource_group_id` VARCHAR(32),
    `resource_id`       VARCHAR(256),
    `branch_type`       VARCHAR(8),
    `status`            TINYINT,
    `client_id`         VARCHAR(64),
    `application_data`  VARCHAR(2000),
    `gmt_create`        DATETIME(6),
    `gmt_modified`      DATETIME(6),
    PRIMARY KEY (`branch_id`),
    KEY `idx_xid` (`xid`)
) ENGINE = InnoDB
  DEFAULT CHARSET = utf8mb4;

-- the table to store lock data
CREATE TABLE IF NOT EXISTS `lock_table`
(
    `row_key`        VARCHAR(128) NOT NULL,
    `xid`            VARCHAR(128),
    `transaction_id` BIGINT,
    `branch_id`      BIGINT       NOT NULL,
    `resource_id`    VARCHAR(256),
    `table_name`     VARCHAR(32),
    `pk`             VARCHAR(36),
    `status`         TINYINT      NOT NULL DEFAULT '0' COMMENT '0:locked ,1:rollbacking',
    `gmt_create`     DATETIME,
    `gmt_modified`   DATETIME,
    PRIMARY KEY (`row_key`),
    KEY `idx_status` (`status`),
    KEY `idx_branch_id` (`branch_id`),
    KEY `idx_xid` (`xid`)
) ENGINE = InnoDB
  DEFAULT CHARSET = utf8mb4;

CREATE TABLE IF NOT EXISTS `distributed_lock`
(
    `lock_key`       CHAR(20) NOT NULL,
    `lock_value`     VARCHAR(20) NOT NULL,
    `expire`         BIGINT,
    primary key (`lock_key`)
) ENGINE = InnoDB
  DEFAULT CHARSET = utf8mb4;

INSERT INTO `distributed_lock` (lock_key, lock_value, expire) VALUES ('AsyncCommitting', ' ', 0);
INSERT INTO `distributed_lock` (lock_key, lock_value, expire) VALUES ('RetryCommitting', ' ', 0);
INSERT INTO `distributed_lock` (lock_key, lock_value, expire) VALUES ('RetryRollbacking', ' ', 0);
INSERT INTO `distributed_lock` (lock_key, lock_value, expire) VALUES ('TxTimeoutCheck', ' ', 0);

-- note: since 0.3.0+, a unique index ux_undo_log was added here
CREATE TABLE `undo_log` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `branch_id` bigint(20) NOT NULL,
  `xid` varchar(100) NOT NULL,
  `context` varchar(128) NOT NULL,
  `rollback_info` longblob NOT NULL,
  `log_status` int(11) NOT NULL,
  `log_created` datetime NOT NULL,
  `log_modified` datetime NOT NULL,
  `ext` varchar(100) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `ux_undo_log` (`xid`,`branch_id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;


Note that the "undo_log" table must also be added to the business database. Don't forget it!


3. Start Seata

Enter the "bin" directory and double-click "seata-server.bat"; a console window will pop up.

Then you can see the corresponding service registered in Nacos.


4. Use Seata in the microservices, taking the microservice version of RuoYi as an example

1. Check the RuoYi documentation

The address is as follows:

http://doc.ruoyi.vip/ruoyi-cloud/cloud/seata.html#%E5%9F%BA%E6%9C%AC%E4%BB%8B%E7%BB%8D

Here we take the system module "ruoyi-system" as an example.

First add seata-related dependencies in the pom file:

<!-- distributed transaction -->
<dependency>
    <groupId>com.ruoyi</groupId>
    <artifactId>ruoyi-common-seata</artifactId>
</dependency>

Add the Seata-related configuration to the corresponding yml file; note that the following is added on top of the original yml:

# spring configuration
spring:
  datasource:
    dynamic:
      # enable the seata proxy
      seata: true

# seata configuration
seata:
  enabled: true
  # Seata application id, defaults to ${spring.application.name}
  application-id: ${spring.application.name}
  # Seata transaction group, used for the TC cluster name
  tx-service-group: ${spring.application.name}-group
  # disable automatic data source proxying
  enable-auto-data-source-proxy: false
  # service settings
  service:
    # mapping from virtual transaction group to cluster
    vgroup-mapping:
      ruoyi-system-group: default
    # mapping from cluster to Seata server addresses
    grouplist:
      default: 127.0.0.1:8091
  config:
    type: file
  registry:
    type: file

My complete "ruoyi-system-dev.yml" looks like this:

# spring configuration
spring:
  redis:
    host: localhost
    port: 6379
    password:
  datasource:
    druid:
      stat-view-servlet:
        enabled: true
        loginUsername: admin
        loginPassword: 123456
    dynamic:
      druid:
        initial-size: 5
        min-idle: 5
        maxActive: 20
        maxWait: 60000
        timeBetweenEvictionRunsMillis: 60000
        minEvictableIdleTimeMillis: 300000
        validationQuery: SELECT 1 FROM DUAL
        testWhileIdle: true
        testOnBorrow: false
        testOnReturn: false
        poolPreparedStatements: true
        maxPoolPreparedStatementPerConnectionSize: 20
        filters: stat,slf4j
        connectionProperties: druid.stat.mergeSql\=true;druid.stat.slowSqlMillis\=5000
      datasource:
          # primary data source
          master:
            driver-class-name: com.mysql.cj.jdbc.Driver
            url: jdbc:mysql://localhost:3306/cj-cloud?useUnicode=true&characterEncoding=utf8&zeroDateTimeBehavior=convertToNull&useSSL=true&serverTimezone=GMT%2B8
            username: root
            password: 123456
          # secondary data source
          # slave:
            # username: 
            # password: 
            # url: 
            # driver-class-name: 
      # enable the seata proxy
      seata: true

# seata configuration
seata:
  enabled: true
  # Seata application id, defaults to ${spring.application.name}
  application-id: ${spring.application.name}
  # Seata transaction group, used for the TC cluster name
  tx-service-group: ${spring.application.name}-group
  # disable automatic data source proxying
  enable-auto-data-source-proxy: false
  # service settings
  service:
    # mapping from virtual transaction group to cluster
    vgroup-mapping:
      ruoyi-system-group: default
    # mapping from cluster to Seata server addresses
    # grouplist:  # only used when the registry type is file
      # default: 127.0.0.1:8091
  config:
    type: nacos
    nacos:
      serverAddr: 127.0.0.1:8848
      group: SEATA_GROUP
      namespace:
  registry:
    type: nacos
    nacos:
      server-addr: 127.0.0.1:8848
      namespace:
      # optional
      username: nacos
      # optional
      password: nacos
      # optional
      application: seata-server
      # the default value differs from the config section's SEATA_GROUP
      group: SEATA_GROUP
      # optional, default value
      # cluster: default

# mybatis configuration
mybatis:
  # package to scan for type aliases
  typeAliasesPackage: com.ruoyi.system
  # mapper scanning: find all mapper.xml mapping files
  mapperLocations: classpath:mapper/**/*.xml
  configuration:
    log-impl: org.apache.ibatis.logging.stdout.StdOutImpl

# swagger configuration
swagger:
  title: System module API documentation
  license: Powered By ruoyi
  licenseUrl: https://ruoyi.vip

2. Restart the system module

Startup reports an error:

can not get cluster name in registry config ‘service.vgroupMapping.default_tx_group’, please make sure registry config correct

This is because "service.vgroupMapping.default_tx_group" is keyed by the transaction group name, which must be consistent with the value of "seata.tx-service-group" in the client configuration (that is, the yml file above).

Where:

1. "tx-service-group" is a custom name that can be chosen freely; here the name provided by RuoYi is used.
2. Do not use an underscore "_" in the transaction group name; use "-" instead, because underscores cause the service lookup to fail in newer versions of Seata.

Solution:

Add a configuration with Data ID "service.vgroupMapping.default_tx_group" in Nacos.

Following the earlier configuration, the Group is "SEATA_GROUP". For now, fill in the configuration content with an arbitrary value; here I wrote "test".

Restarting again still reports an error, this time:

no available service found in cluster ‘default’, please make sure registry config correct and keep your seata server running

This is caused by the inconsistency between the seata client and server configurations.

Solution:

Modify the configuration content of "service.vgroupMapping.default_tx_group" to match Seata's yml file, then restart.

We already know that "seata.registry.nacos.cluster" in Seata's yml file is set to "default".

Modify the configuration content of "service.vgroupMapping.default_tx_group" to "default"
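To see why "default" is the right content, here is a simplified sketch of how the client resolves its transaction group (an illustration only, not Seata's real code; the names come from this setup): it reads the Nacos config key "service.vgroupMapping.&lt;tx-service-group&gt;" to get a cluster name, then looks up seata-server instances registered under that cluster. The two error messages above correspond to the two failure points.

```python
# Simplified illustration (not Seata's real code) of transaction-group resolution:
# config key -> cluster name -> registered server instances.
# The dictionaries stand in for the Nacos config center and registry in this setup.

nacos_config = {
    # the Data ID created in the Nacos console; its content is the cluster name
    "service.vgroupMapping.ruoyi-system-group": "default",
}

nacos_registry = {
    # service name -> {cluster name -> instances}
    "seata-server": {"default": ["127.0.0.1:8091"]},
}

def resolve_tc(tx_service_group):
    cluster = nacos_config.get("service.vgroupMapping." + tx_service_group)
    if cluster is None:
        # corresponds to the first error in this walkthrough
        raise RuntimeError("can not get cluster name in registry config "
                           "'service.vgroupMapping.%s'" % tx_service_group)
    instances = nacos_registry["seata-server"].get(cluster, [])
    if not instances:
        # corresponds to the second error in this walkthrough
        raise RuntimeError("no available service found in cluster '%s'" % cluster)
    return instances

print(resolve_tc("ruoyi-system-group"))  # ['127.0.0.1:8091']
```

Writing "test" as the content made the first lookup succeed but the second fail, because no server is registered in a cluster named "test"; writing "default" makes both succeed.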


Restart again: success!

Looking at the Seata server again, you can see that the service has successfully registered with it.

5. Some pitfalls encountered while using Seata (optional reading)

My business is this: when deleting doctor data, the corresponding account and associated role are deleted at the same time. Deleting the doctor data is the main business; deleting the doctor's account and role are the called services.

Many people have probably hit this pitfall: when another service is called, the main business rolls back normally but the called service does not. Here, when deleting the doctor data throws an error, the doctor-data changes are rolled back normally, but the account and role deletions fail to roll back.

This is my main business: the doctor-deletion logic in the project service.

An exception is thrown at the end of the method to test the rollback.

The logic for deleting accounts and roles is provided by RuoYi and is called during the test.

The problem:

When deleting the doctor data fails, the doctor data is rolled back, but the account and role deletions are not, and no new rows appear in the "global_table" or "undo_log" tables of the seata database.

Attempt one:

Posts online suggest the XID of the main business and the sub-business may be inconsistent.

Checking the console, the XID printed when deleting the doctor data and the XIDs printed when deleting the account and role are all the same.

Checking the Seata console, it prints that the rollback succeeded. So apparently mine is not an XID problem.

Attempt two:

Another blogger's post suggested the problem was the auto-increment primary key on the "undo_log" table, and that removing it fixed the issue.

So I removed the auto-increment primary key from the "undo_log" table in both the business database and the seata database, and tested again.

The "undo_log" table still had no data, but the "global_table" table had a new row whose XID matched the doctor-deletion XID, and the sub-business was still not rolled back.

But that blogger said in the comments that it worked for them, so the problem must be on my side.

I compared the Seata configuration in the yml file from top to bottom and found that I had missed one setting.

After adding it, I restarted Seata and the microservices and tested again, getting this error:

Field 'id' doesn't have a default value\n; java.sql.SQLException: Field 'id' doesn't have a default value; nested exception is java.sql.SQLException: java.sql.SQLException: Field 'id' doesn't have a default value",

Right, the auto-increment primary key of the "undo_log" table had been removed earlier, so I went back and restored it. Testing again:

This time both the main business and the called services rolled back. Checking the database, not only the doctor data but also the corresponding account and role were restored.

This also proves that Seata did write data to the "undo_log" table during transaction processing. But then why can't I see any data in "undo_log" after execution?

This appears in the console:

xid 192.168.1.12:8091:6278425795117363335 branch 6278425795117363338, undo_log deleted with GlobalFinished

This tells us that undo_log records are deleted once the global rollback finishes. The undo_log table exists only to roll transactions back; after the program has run it is normally empty. So if you want to see its data, set a breakpoint between the database operation and the point where the exception is thrown.
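The lifecycle can be mimicked with a toy model (pure illustration, not Seata internals): each branch writes its before-image into undo_log inside the local transaction; on global rollback the before-image is replayed and the undo_log row is deleted, which is exactly why the table ends up empty.

```python
# Toy model of the undo_log lifecycle in Seata AT mode (illustration only).
undo_log = []  # stands in for the undo_log table

def branch_execute(xid, branch_id, before_image):
    # each branch records a before-image inside its local transaction
    undo_log.append({"xid": xid, "branch_id": branch_id, "rollback_info": before_image})

def global_rollback(xid):
    # replay the before-images, then delete the rows ("undo_log deleted with GlobalFinished")
    restored = [row["rollback_info"] for row in undo_log if row["xid"] == xid]
    undo_log[:] = [row for row in undo_log if row["xid"] != xid]
    return restored

branch_execute("xid-1", 1, {"doctor": "row before delete"})
branch_execute("xid-1", 2, {"account": "row before delete"})
assert len(global_rollback("xid-1")) == 2
assert undo_log == []  # empty again, which is why the table looks empty afterwards
```

If you inspect the table between `branch_execute` and `global_rollback` (the breakpoint mentioned above), the rows are there; afterwards they are gone.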

It finally works. Honestly, Seata is really not friendly to beginners.


Origin blog.csdn.net/studio_1/article/details/131432247