"Microservices in Practice" Chapter 29 Distributed Transaction Framework Seata AT Mode

Series Article Directory

Chapter 30 Distributed Transaction Framework Seata TCC Mode
Chapter 29 Distributed Transaction Framework Seata AT Mode




Foreword

This chapter introduces Seata, a distributed transaction framework used in microservice projects.
Official website: http://seata.io/zh-cn/
springcloud-nacos-seata: https://github.com/seata/seata-samples/tree/master/springcloud-nacos-seata

1. Concept

Seata is an open source distributed transaction solution dedicated to providing high-performance, easy-to-use distributed transaction services. Seata provides the AT, TCC, SAGA and XA transaction modes to deliver a one-stop distributed transaction solution.

  • TC (Transaction Coordinator):
    maintains the status of global and branch transactions, and drives global transaction commit or rollback.
  • TM (Transaction Manager):
    defines the scope of a global transaction: begins the global transaction, and commits or rolls back the global transaction.
  • RM (Resource Manager):
    manages the resources of branch transactions, talks to the TC to register branch transactions and report their status, and drives branch transactions to commit or roll back.

2. AT mode

2.1. Application prerequisites

  • Based on a relational database that supports native ACID transactions.
  • Java application, accessing the database through JDBC.
  • File operations and relational database operations

2.2. The concept of two-phase commit

  • Phase 1: business data and the rollback log are committed in the same local transaction, after which the local lock and the connection are released.
  • Phase 2:
    Commit is asynchronous and completes very quickly.
    Rollback performs reverse compensation based on the phase-one rollback log.

2.3. Write isolation

  • Before the phase-one local transaction is committed, the branch must first acquire the global lock.
  • If the global lock cannot be acquired, the local transaction must not be committed.
  • Attempts to acquire the global lock are bounded (by the client settings client.rm.lock.retryInterval and client.rm.lock.retryTimes shown later in config.txt); if the bound is exceeded, the branch gives up, rolls back its local transaction, and releases the local lock.

Two global transactions, tx1 and tx2, each update field m of table a; the initial value of m is 1000.
tx1 starts first: it begins a local transaction, acquires the local lock, and updates m = 1000 - 100 = 900. Before committing the local transaction, tx1 first acquires the global lock on the record; the local commit then releases the local lock. tx2 starts later: it begins its own local transaction, acquires the local lock, and updates m = 900 - 100 = 800. Before committing its local transaction, tx2 tries to acquire the global lock on the record; because tx1 holds that global lock until its global commit, tx2 must retry and wait for it.

When tx1 commits globally in phase two, it releases the global lock; tx2 can then acquire the global lock and commit its local transaction.
If tx1 rolls back globally in phase two, it must re-acquire the local lock on the data and perform the reverse-compensating update to roll back its branch.

At this point, if tx2 is still waiting for the global lock on that data while holding the local lock, tx1's branch rollback will fail. The branch rollback is retried until tx2's wait for the global lock times out, at which point tx2 gives up the global lock, rolls back its local transaction, and releases the local lock; tx1's branch rollback then finally succeeds.
Because the global lock is held by tx1 throughout, until tx1 ends, no dirty write can occur.

2.4. Read isolation

On top of a local database isolation level of Read Committed or higher, the default global isolation level of Seata (AT mode) is Read Uncommitted.

If an application requires global Read Committed in certain scenarios, Seata currently provides it by proxying SELECT FOR UPDATE statements.
Executing a SELECT FOR UPDATE statement applies for the global lock. If the global lock is held by another transaction, the local lock is released (the local execution of the SELECT FOR UPDATE statement is rolled back) and the statement is retried. During this process the query is blocked until the global lock is acquired, i.e. until the data being read has been globally committed, and only then does it return.

For overall performance, Seata's current solution does not proxy all SELECT statements, only SELECT FOR UPDATE statements.
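For example, in the tx1/tx2 scenario above, a read that must see only globally committed data would be written with FOR UPDATE so that Seata's proxy applies for the global lock (a sketch that assumes table a has a primary key column id):

select m from a where id = 1 for update;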

2.5. A worked example of the AT mode process

Business table: product

Field   Type           Key
id      bigint(20)     PRI
name    varchar(100)
since   varchar(100)

Business logic of AT branch transaction:

update product set name = 'GTS' where name = 'TXC';

2.5.1. Phase one

Process:

Parse the SQL: obtain the SQL type (UPDATE), the table (product), the condition (where name = 'TXC') and other related information.
Query the before image: generate a query statement from the parsed condition information and locate the data.

select id, name, since from product where name = 'TXC';

The before image obtained:

id name since
1 TXC 2014

Execute the business SQL: update the name of this record to 'GTS'.
Query the after image: locate the data by primary key, according to the before image.

select id, name, since from product where id = 1;

The after image obtained:

id name since
1 GTS 2014

Insert the rollback log: combine the before-image and after-image data and the business SQL information into a rollback log record, and insert it into the UNDO_LOG table.

{
    "branchId": 641789253,
    "undoItems": [{
        "afterImage": {
            "rows": [{
                "fields": [{
                    "name": "id",
                    "type": 4,
                    "value": 1
                }, {
                    "name": "name",
                    "type": 12,
                    "value": "GTS"
                }, {
                    "name": "since",
                    "type": 12,
                    "value": "2014"
                }]
            }],
            "tableName": "product"
        },
        "beforeImage": {
            "rows": [{
                "fields": [{
                    "name": "id",
                    "type": 4,
                    "value": 1
                }, {
                    "name": "name",
                    "type": 12,
                    "value": "TXC"
                }, {
                    "name": "since",
                    "type": 12,
                    "value": "2014"
                }]
            }],
            "tableName": "product"
        },
        "sqlType": "UPDATE"
    }],
    "xid": "xid:xxx"
}

Before committing, register the branch with the TC and apply for the global lock on the record in the product table whose primary key value is 1.

Commit the local transaction: the business data update and the UNDO LOG generated in the previous steps are committed together.

Report the result of the local transaction commit to the TC.
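
While the global transaction is still open, the phase-one rollback log can be inspected directly in the business database (a quick check against the undo_log table created in section 3.2.1.2 below):

select xid, branch_id, log_status, log_created from undo_log;

The record is deleted asynchronously after a phase-two commit, or consumed and removed during a phase-two rollback.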

2.5.2. Phase two: rollback

After receiving a branch rollback request from the TC, the RM starts a local transaction and performs the following operations:

Find the corresponding UNDO LOG record through XID and Branch ID.

Data validation: compare the after image in the UNDO LOG with the current data. If they differ, the data has been modified by something outside the current global transaction; this case is handled according to the configured policy (described in detail elsewhere in the documentation).

Generate and execute the rollback statement from the before image and the business SQL information in the UNDO LOG:

update product set name = 'TXC' where id = 1;

Commit the local transaction and report its result (that is, the result of the branch rollback) to the TC.

2.5.3. Phase two: commit

After receiving a branch commit request from the TC, the RM puts the request into an asynchronous task queue and immediately returns a successful commit result to the TC.

The asynchronous task then deletes the corresponding UNDO LOG records in batches.

3. Integrating Spring Cloud with Seata


3.1. Seata configuration

Download address: https://github.com/seata/seata/releases/download/v1.4.2/seata-server-1.4.2.zip
Unzip it to the D: drive.

3.1.1. Modify the configuration file

3.1.1.1. Change conf/file.conf to db mode

## transaction log store, only used in seata-server
store {
  ## store mode: file、db、redis
  mode = "db"
  ## rsa decryption public key
  publicKey = ""
  ## file store property
  file {
    ## store location dir
    dir = "sessionStore"
    # branch session size , if exceeded first try compress lockkey, still exceeded throws exceptions
    maxBranchSessionSize = 16384
    # globe session size , if exceeded throws exceptions
    maxGlobalSessionSize = 512
    # file buffer size , if exceeded allocate new buffer
    fileWriteBufferCacheSize = 16384
    # when recover batch read size
    sessionReloadReadSize = 100
    # async, sync
    flushDiskMode = async
  }

  ## database store property
  db {
    ## the implement of javax.sql.DataSource, such as DruidDataSource(druid)/BasicDataSource(dbcp)/HikariDataSource(hikari) etc.
    datasource = "druid"
    ## mysql/oracle/postgresql/h2/oceanbase etc.
    dbType = "mysql"
    driverClassName = "com.mysql.jdbc.Driver"
    ## if using mysql to store the data, recommend add rewriteBatchedStatements=true in jdbc connection param
    url = "jdbc:mysql://127.0.0.1:3306/seata?rewriteBatchedStatements=true"
    user = "root"
    password = "root"
    minConn = 5
    maxConn = 100
    globalTable = "global_table"
    branchTable = "branch_table"
    lockTable = "lock_table"
    queryLimit = 100
    maxWait = 5000
  }

  ## redis store property
  redis {
    ## redis mode: single、sentinel
    mode = "single"
    ## single mode property
    single {
      host = "127.0.0.1"
      port = "6379"
    }
    ## sentinel mode property
    sentinel {
      masterName = ""
      ## such as "10.28.235.65:26379,10.28.235.65:26380,10.28.235.65:26381"
      sentinelHosts = ""
    }
    password = ""
    database = "0"
    minConn = 1
    maxConn = 10
    maxTotal = 100
    queryLimit = 100
  }
}

3.1.1.2. conf/registry.conf

registry {
  # file 、nacos 、eureka、redis、zk、consul、etcd3、sofa
  type = "nacos"

  nacos {
    application = "seata-server"
    serverAddr = "127.0.0.1:8848"
    group = "SEATA_GROUP"
    namespace = "1ff3782d-b62d-402f-8bc4-ebcf40254d0a"
    cluster = "default"
    username = "nacos"
    password = "nacos"
  }
  eureka {
    serviceUrl = "http://localhost:8761/eureka"
    application = "default"
    weight = "1"
  }
  redis {
    serverAddr = "localhost:6379"
    db = 0
    password = ""
    cluster = "default"
    timeout = 0
  }
  zk {
    cluster = "default"
    serverAddr = "127.0.0.1:2181"
    sessionTimeout = 6000
    connectTimeout = 2000
    username = ""
    password = ""
  }
  consul {
    cluster = "default"
    serverAddr = "127.0.0.1:8500"
    aclToken = ""
  }
  etcd3 {
    cluster = "default"
    serverAddr = "http://localhost:2379"
  }
  sofa {
    serverAddr = "127.0.0.1:9603"
    application = "default"
    region = "DEFAULT_ZONE"
    datacenter = "DefaultDataCenter"
    cluster = "default"
    group = "SEATA_GROUP"
    addressWaitTime = "3000"
  }
  file {
    name = "file.conf"
  }
}

config {
  # file、nacos 、apollo、zk、consul、etcd3
  type = "file"

  nacos {
    serverAddr = "127.0.0.1:8848"
    namespace = ""
    group = "SEATA_GROUP"
    username = "nacos"
    password = "nacos"
    dataId = "seataServer.properties"
  }
  consul {
    serverAddr = "127.0.0.1:8500"
    aclToken = ""
  }
  apollo {
    appId = "seata-server"
    ## apolloConfigService will cover apolloMeta
    apolloMeta = "http://192.168.1.204:8801"
    apolloConfigService = "http://192.168.1.204:8080"
    namespace = "application"
    apolloAccesskeySecret = ""
    cluster = "seata"
  }
  zk {
    serverAddr = "127.0.0.1:2181"
    sessionTimeout = 6000
    connectTimeout = 2000
    username = ""
    password = ""
    nodePath = "/seata/seata.properties"
  }
  etcd3 {
    serverAddr = "http://localhost:2379"
  }
  file {
    name = "file.conf"
  }
}

Note: set the Nacos namespace to your own namespace ID.

3.1.1.3. Download config.txt

Download https://github.com/seata/seata/tree/develop/script/config-center/config.txt and save it to the Seata root directory as config.txt:

#For details about configuration items, see https://seata.io/zh-cn/docs/user/configurations.html
#Transport configuration, for client and server
transport.type=TCP
transport.server=NIO
transport.heartbeat=true
transport.enableTmClientBatchSendRequest=false
transport.enableRmClientBatchSendRequest=true
transport.enableTcServerBatchSendResponse=false
transport.rpcRmRequestTimeout=30000
transport.rpcTmRequestTimeout=30000
transport.rpcTcRequestTimeout=30000
transport.threadFactory.bossThreadPrefix=NettyBoss
transport.threadFactory.workerThreadPrefix=NettyServerNIOWorker
transport.threadFactory.serverExecutorThreadPrefix=NettyServerBizHandler
transport.threadFactory.shareBossWorker=false
transport.threadFactory.clientSelectorThreadPrefix=NettyClientSelector
transport.threadFactory.clientSelectorThreadSize=1
transport.threadFactory.clientWorkerThreadPrefix=NettyClientWorkerThread
transport.threadFactory.bossThreadSize=1
transport.threadFactory.workerThreadSize=default
transport.shutdown.wait=3
transport.serialization=seata
transport.compressor=none
#Transaction routing rules configuration, only for the client
service.vgroupMapping.default_tx_group=default
#If you use a registry, you can ignore it
service.default.grouplist=127.0.0.1:8091
service.enableDegrade=false
service.disableGlobalTransaction=false
#Transaction rule configuration, only for the client
client.rm.asyncCommitBufferLimit=10000
client.rm.lock.retryInterval=10
client.rm.lock.retryTimes=30
client.rm.lock.retryPolicyBranchRollbackOnConflict=true
client.rm.reportRetryCount=5
client.rm.tableMetaCheckEnable=true
client.rm.tableMetaCheckerInterval=60000
client.rm.sqlParserType=druid
client.rm.reportSuccessEnable=false
client.rm.sagaBranchRegisterEnable=false
client.rm.sagaJsonParser=fastjson
client.rm.tccActionInterceptorOrder=-2147482648
client.tm.commitRetryCount=5
client.tm.rollbackRetryCount=5
client.tm.defaultGlobalTransactionTimeout=60000
client.tm.degradeCheck=false
client.tm.degradeCheckAllowTimes=10
client.tm.degradeCheckPeriod=2000
client.tm.interceptorOrder=-2147482648
client.undo.dataValidation=true
client.undo.logSerialization=jackson
client.undo.onlyCareUpdateColumns=true
server.undo.logSaveDays=7
server.undo.logDeletePeriod=86400000
client.undo.logTable=undo_log
client.undo.compress.enable=true
client.undo.compress.type=zip
client.undo.compress.threshold=64k
#For TCC transaction mode
tcc.fence.logTableName=tcc_fence_log
tcc.fence.cleanPeriod=1h
#Log rule configuration, for client and server
log.exceptionRate=100
#Transaction storage configuration, only for the server. The file, db, and redis configuration values are optional.
store.mode=db
store.lock.mode=db
store.session.mode=db
#Used for password encryption
store.publicKey=
#If store.mode,store.lock.mode,store.session.mode are not equal to file, you can remove the configuration block.
store.file.dir=file_store/data
store.file.maxBranchSessionSize=16384
store.file.maxGlobalSessionSize=512
store.file.fileWriteBufferCacheSize=16384
store.file.flushDiskMode=async
store.file.sessionReloadReadSize=100
#These configurations are required if the store mode is db. If store.mode,store.lock.mode,store.session.mode are not equal to db, you can remove the configuration block.
store.db.datasource=druid
store.db.dbType=mysql
store.db.driverClassName=com.mysql.jdbc.Driver
store.db.url=jdbc:mysql://127.0.0.1:3306/seata?useUnicode=true&rewriteBatchedStatements=true
store.db.user=root
store.db.password=root
store.db.minConn=5
store.db.maxConn=30
store.db.globalTable=global_table
store.db.branchTable=branch_table
store.db.distributedLockTable=distributed_lock
store.db.queryLimit=100
store.db.lockTable=lock_table
store.db.maxWait=5000
#These configurations are required if the store mode is redis. If store.mode,store.lock.mode,store.session.mode are not equal to redis, you can remove the configuration block.
store.redis.mode=single
store.redis.single.host=127.0.0.1
store.redis.single.port=6379
store.redis.sentinel.masterName=
store.redis.sentinel.sentinelHosts=
store.redis.maxConn=10
store.redis.minConn=1
store.redis.maxTotal=100
store.redis.database=0
store.redis.password=
store.redis.queryLimit=100
#Transaction rule configuration, only for the server
server.recovery.committingRetryPeriod=1000
server.recovery.asynCommittingRetryPeriod=1000
server.recovery.rollbackingRetryPeriod=1000
server.recovery.timeoutRetryPeriod=1000
server.maxCommitRetryTimeout=-1
server.maxRollbackRetryTimeout=-1
server.rollbackRetryTimeoutUnlockEnable=false
server.distributedLockExpireTime=10000
server.xaerNotaRetryTimeout=60000
server.session.branchAsyncQueueSize=5000
server.session.enableBranchAsyncRemove=false
server.enableParallelRequestHandle=false
#Metrics configuration, only for the server
metrics.enabled=false
metrics.registryType=compact
metrics.exporterList=prometheus
metrics.exporterPrometheusPort=9898

3.1.1.4. Download nacos-config.sh to conf

Then execute the following command to push the Seata configuration to Nacos:

sh nacos-config.sh -h localhost -p 8848 -g SEATA_GROUP -t 1ff3782d-b62d-402f-8bc4-ebcf40254d0a -u nacos -w nacos

3.1.2. Import database

Create the database: create database seata;
Then execute the following script:
https://github.com/seata/seata/blob/2.x/script/server/db/mysql.sql
global_table: the global transaction table; whenever a global transaction is initiated, its global transaction ID is recorded here.
branch_table: the branch transaction table; records the ID of each branch transaction, which database it operates on, and so on.
lock_table: the global lock table.
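
Later, while a global transaction is in flight, it can be observed in these tables, for example (column names as in the official script; shown here only as an illustrative query):

select xid, status, application_id from global_table;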

3.1.3. Start the Seata server

D:\seata\seata-server-1.4.2\bin\seata-server.bat
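
On Linux or macOS the server is started with bin/seata-server.sh instead; a typical invocation (flags assumed from the standard 1.4.2 distribution) is:

sh bin/seata-server.sh -h 127.0.0.1 -p 8091

Once started, the server registers itself with Nacos (application seata-server, group SEATA_GROUP), as configured in registry.conf above.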

3.2. Use Cases

The use case is the business logic of a user purchasing goods, implemented by three microservices:

  • Storage service: deducts the stock count for a given commodity.
  • Order service: creates an order according to the purchase request.
  • Account service: debits the balance from the user account.
Solution: wrap the calls to the three services in a single Seata global transaction, with business-service acting as the transaction initiator (TM).

3.2.1. Project configuration

3.2.1.1. Create four services

order-service (corresponding to the order database)
account-service (corresponding to the account database)
storage-service (corresponding to the storage database)
business-service

3.2.1.2. Add a rollback table to each business database

CREATE TABLE `undo_log` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `branch_id` bigint(20) NOT NULL,
  `xid` varchar(100) NOT NULL,
  `context` varchar(128) NOT NULL,
  `rollback_info` longblob NOT NULL,
  `log_status` int(11) NOT NULL,
  `log_created` datetime NOT NULL,
  `log_modified` datetime NOT NULL,
  `ext` varchar(100) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `ux_undo_log` (`xid`,`branch_id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

3.2.1.3. Execute the business table scripts

USE storage;
DROP TABLE IF EXISTS `storage_tbl`;
CREATE TABLE `storage_tbl` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `commodity_code` varchar(255) DEFAULT NULL,
  `count` int(11) DEFAULT 0,
  PRIMARY KEY (`id`),
  UNIQUE KEY (`commodity_code`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

USE `order`;
DROP TABLE IF EXISTS `order_tbl`;
CREATE TABLE `order_tbl` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `user_id` varchar(255) DEFAULT NULL,
  `commodity_code` varchar(255) DEFAULT NULL,
  `count` int(11) DEFAULT 0,
  `money` int(11) DEFAULT 0,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

USE account;
DROP TABLE IF EXISTS `account_tbl`;
CREATE TABLE `account_tbl` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `user_id` varchar(255) DEFAULT NULL,
  `money` int(11) DEFAULT 0,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
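
To run the case end to end, each business database also needs some seed data. The rows below are only an illustrative assumption (the commodity code and user ID are made up):

INSERT INTO storage_tbl (commodity_code, count) VALUES ('C00321', 100);
INSERT INTO account_tbl (user_id, money) VALUES ('U100001', 10000);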

3.2.2. Add dependencies to common-service

<!-- Seata distributed transactions -->
<dependency>
    <groupId>io.seata</groupId>
    <artifactId>seata-spring-boot-starter</artifactId>
    <version>1.4.2</version>
</dependency>
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-seata</artifactId>
    <version>2021.0.4.0</version>
</dependency>

3.2.3. Add configuration

seata:
  enabled: true
  enable-auto-data-source-proxy: false
  application-id: vforumc-user
  tx-service-group: default_tx_group
  service:
    vgroup-mapping:
      default_tx_group: default
    disable-global-transaction: false
  registry:
    type: nacos
    nacos:
      application: seata-server
      server-addr: 127.0.0.1:8848
      namespace: 1ff3782d-b62d-402f-8bc4-ebcf40254d0a
      group: SEATA_GROUP
      username: nacos
      password: nacos
  config:
    nacos:
      server-addr: 127.0.0.1:8848
      namespace: 1ff3782d-b62d-402f-8bc4-ebcf40254d0a
      group: SEATA_GROUP
      username: nacos
      password: nacos

3.2.4. Define the data source proxy

package com.xxxx.store.account.config;

import lombok.Data;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Configuration;

@Data
@Configuration
public class DataSourceConfig {
    @Value("${spring.datasource.url}")
    private String url;

    @Value("${spring.datasource.username}")
    private String username;

    @Value("${spring.datasource.password}")
    private String password;

    @Value("${spring.datasource.driver-class-name}")
    private String driveClassName;
}
package com.xxxx.store.account.config;

import com.zaxxer.hikari.HikariDataSource;
import io.seata.rm.datasource.DataSourceProxy;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

import javax.annotation.Resource;
import javax.sql.DataSource;

/**
 * Data source proxy configuration
 */
@Configuration
public class DataSourceProxyConfig {
    @Resource
    private DataSourceConfig dataSourceConfig;

    @Bean("dataSource")
    public DataSource druidDataSource() {
        HikariDataSource hikariDataSource = new HikariDataSource();
        hikariDataSource.setUsername(dataSourceConfig.getUsername());
        hikariDataSource.setPassword(dataSourceConfig.getPassword());
        hikariDataSource.setJdbcUrl(dataSourceConfig.getUrl());
        hikariDataSource.setDriverClassName(dataSourceConfig.getDriveClassName());
        return hikariDataSource;
    }

    @Bean
    @Primary
    public DataSourceProxy dataSourceProxy(DataSource dataSource) {
        return new DataSourceProxy(dataSource);
    }
}
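
The DataSourceConfig class above reads the standard spring.datasource.* properties, so each business service's application.yml is also assumed to contain a block like the following (placeholder values, not from the original project):

spring:
  datasource:
    url: jdbc:mysql://127.0.0.1:3306/storage?useUnicode=true&characterEncoding=utf8
    username: root
    password: root
    # use com.mysql.cj.jdbc.Driver instead if you are on MySQL Connector/J 8.x
    driver-class-name: com.mysql.jdbc.Driver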

Add the following annotation to the startup class of each business service to exclude the auto-configured data source, so that the proxy data source defined above takes effect:

@SpringBootApplication(exclude = DataSourceAutoConfiguration.class)
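
Finally, the business-service only needs to mark its orchestration method as a global transaction. The sketch below is an assumption of what that might look like: the Feign clients, URL paths, method names and parameters are hypothetical and not taken from the original project.

package com.xxxx.store.business.service;

import io.seata.spring.annotation.GlobalTransactional;
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;

// Hypothetical Feign clients for the downstream services; the paths and
// parameters are assumptions, not taken from the original project.
@FeignClient(name = "storage-service")
interface StorageClient {
    @PostMapping("/storage/deduct")
    void deduct(@RequestParam("commodityCode") String commodityCode,
                @RequestParam("count") int count);
}

@FeignClient(name = "order-service")
interface OrderClient {
    @PostMapping("/order/create")
    void create(@RequestParam("userId") String userId,
                @RequestParam("commodityCode") String commodityCode,
                @RequestParam("count") int count);
}

@Service
public class BusinessService {

    private final StorageClient storageClient;
    private final OrderClient orderClient;

    public BusinessService(StorageClient storageClient, OrderClient orderClient) {
        this.storageClient = storageClient;
        this.orderClient = orderClient;
    }

    /**
     * TM side: @GlobalTransactional opens the global transaction and the XID is
     * propagated to the downstream services. Any exception thrown here triggers
     * a global rollback of all registered branches.
     */
    @GlobalTransactional(name = "purchase", rollbackFor = Exception.class)
    public void purchase(String userId, String commodityCode, int count) {
        storageClient.deduct(commodityCode, count);       // branch: storage database
        orderClient.create(userId, commodityCode, count); // branch: order database (which debits the account in this sketch)
    }
}

In this setup business-service acts only as the TM; the RMs are the proxied data sources inside the storage, order, and account services, and the TC is the seata-server started in section 3.1.3.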


Origin blog.csdn.net/s445320/article/details/131147965