Getting Started with Seata 1.1.0

 

1. Seata Overview

Seata is an open-source distributed transaction framework from Alibaba (formerly Fescar), written in Java.

 

1.1 Highlights

  • Automatic compensation based on SQL parsing at the application layer, minimizing intrusion into business services;
  • The TC (Transaction Coordinator) is deployed independently and is responsible for branch registration, commit, and rollback;
  • Read/write isolation achieved through a global lock;
  • Multiple transaction modes: AT, TCC, and Saga.

 

1.2 Seata concepts

  • TC (Transaction Coordinator): maintains the state of global transactions; responsible for coordinating and driving global commit or rollback.
  • TM (Transaction Manager): defines the scope of a global transaction; responsible for beginning a global transaction and ultimately issuing the decision to commit or roll it back globally.
  • RM (Resource Manager): manages branch transactions; responsible for registering branches, reporting their status, and receiving the coordinator's instructions to drive branch (local) transaction commit or rollback.
  • The TC (server side) is deployed as a standalone server; the TM and RM (client side) are integrated into the business services.

 

1.3 Execution flow

 

  1. The TM asks the TC to begin a global transaction; the TC creates it and returns a globally unique XID, which is propagated along the call context of the global transaction;
  2. Each RM registers its branch transaction with the TC; the branch is attached to the global transaction with the same XID;
  3. The TM asks the TC to commit or roll back the global transaction;
  4. The TC drives all branch transactions under the XID to complete their commit or rollback.
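The four-step flow above can be modelled as a toy simulation. `ToyTC`, `XidFlow`, and every method name below are invented for illustration; they are not Seata APIs:

```java
import java.util.*;

// Toy model of the Seata roles: the TC issues XIDs, RMs register branches
// under an XID, and the TM drives the global commit decision.
class ToyTC {
    private long next = 1;
    final Map<String, List<String>> branches = new HashMap<>();

    // Step 1: the TM asks the TC to begin; the TC returns a unique XID
    String begin() {
        String xid = "xid-" + next++;
        branches.put(xid, new ArrayList<>());
        return xid;
    }

    // Step 2: an RM registers a branch under the propagated XID
    void registerBranch(String xid, String resource) {
        branches.get(xid).add(resource);
    }

    // Steps 3-4: the TM initiates global commit; the TC drives every branch
    List<String> globalCommit(String xid) {
        return branches.remove(xid);
    }
}

public class XidFlow {
    public static void main(String[] args) {
        ToyTC tc = new ToyTC();
        String xid = tc.begin();
        tc.registerBranch(xid, "order-db");
        tc.registerBranch(xid, "stock-db");
        System.out.println(tc.globalCommit(xid)); // both branches driven to commit
    }
}
```

In real Seata the XID travels inside the RPC context (HTTP header, Dubbo attachment, etc.), which is what the client-side integration jars provide.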

 

1.4 Detailed parameters

https://seata.io/zh-cn/docs/user/configurations.html

 

1.5 Performance cost

  • One UPDATE statement needs to obtain the global transaction XID (communication with the TC)
  • before image (parse the SQL, one database query)
  • after image (one database query)
  • insert undo log (one database write)
  • before commit (communication with the TC to check for lock conflicts)

These operations all require remote RPC communication, and they are synchronous. In addition, writing the undo log inserts a blob field, so its write performance is not high. Every write SQL carries this much extra overhead; a rough estimate is up to five times the response time (phase two is asynchronous, but it still consumes system resources: network, threads, database).

 

 

 

2. Preparation

 

1. Server-side work

 

1. Download the TC (seata-server)

2. Create the tables

  • The TC needs three tables
    • db_store.sql
    • Each table and its function:
      • Global transactions --- global_table
      • Branch transactions --- branch_table
      • Global locks ----- lock_table
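The authoritative DDL for these tables ships with the server distribution as db_store.sql; the abbreviated sketch below (column lists trimmed) only illustrates what each table tracks and should not be used in place of the shipped script:

```sql
-- Abbreviated sketch of the three TC tables (see db_store.sql for the real DDL)
CREATE TABLE global_table (           -- one row per global transaction
  xid            VARCHAR(128) PRIMARY KEY,
  transaction_id BIGINT,
  status         TINYINT NOT NULL,
  application_id VARCHAR(32),
  timeout        INT,
  begin_time     BIGINT
);

CREATE TABLE branch_table (           -- one row per registered branch
  branch_id      BIGINT PRIMARY KEY,
  xid            VARCHAR(128) NOT NULL,   -- owning global transaction
  resource_id    VARCHAR(256),
  branch_type    VARCHAR(8),
  status         TINYINT
);

CREATE TABLE lock_table (             -- one row per locked record (global lock)
  row_key        VARCHAR(128) PRIMARY KEY, -- resource + table + primary key
  xid            VARCHAR(96),
  branch_id      BIGINT,
  table_name     VARCHAR(32),
  pk             VARCHAR(36)
);
```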

 

3. Modify the configuration files

  • seata/conf/file.conf configures the server's transaction-log storage mode and database connection information
## transaction log store, only used in seata-server
store {
  ## store mode: file, db  (transaction log storage mode)
  mode = "db"

  ## file store properties
  file {
    ## store location dir
    dir = "sessionStore"
    # branch session size; if exceeded, first try to compress lockkey, if still exceeded throw an exception
    maxBranchSessionSize = 16384
    # global session size; if exceeded throw an exception
    maxGlobalSessionSize = 512
    # file buffer size; if exceeded allocate a new buffer
    fileWriteBufferCacheSize = 16384
    # batch read size during recovery
    sessionReloadReadSize = 100
    # async, sync
    flushDiskMode = async
  }

  ## database store properties
  db {
    ## the implementation of javax.sql.DataSource, such as DruidDataSource(druid)/BasicDataSource(dbcp) etc.
    datasource = "dbcp"
    ## mysql/oracle/h2/oceanbase etc.
    dbType = "mysql"
    driverClassName = "com.mysql.jdbc.Driver"
    url = "jdbc:mysql://120.26.233.25:6789/seata"
    user = "root"
    password = "Test@DB123"
    minConn = 1
    maxConn = 10
    globalTable = "global_table"
    branchTable = "branch_table"
    lockTable = "lock_table"
    queryLimit = 100
  }
}

# service configuration
service {
  # group name, must match the client side (chuangqi-steata)
  vgroup_mapping.chuangqi-steata = "chuangqi-steata"
  chuangqi-steata.grouplist = "127.0.0.1:8091"
  # degrade switch, off by default
  enableDegrade = false
  disable = false
  max.commit.retry.timeout = "-1"
  max.rollback.retry.timeout = "-1"
}

 

  • seata/conf/registry.conf selects the registry and configuration center; the stand-alone version uses the local file type for both
registry {
  # file, nacos, eureka, redis, zk, consul, etcd3, sofa
  type = "file"




  nacos {
    serverAddr = "localhost"
    namespace = ""
    cluster = "default"
  }
  eureka {
    serviceUrl = "http://localhost:8761/eureka"
    application = "default"
    weight = "1"
  }
  redis {
    serverAddr = "localhost:6379"
    db = "0"
  }
  zk {
    cluster = "default"
    serverAddr = "127.0.0.1:2181"
    session.timeout = 6000
    connect.timeout = 2000
  }
  consul {
    cluster = "default"
    serverAddr = "127.0.0.1:8500"
  }
  etcd3 {
    cluster = "default"
    serverAddr = "http://localhost:2379"
  }
  sofa {
    serverAddr = "127.0.0.1:9603"
    application = "default"
    region = "DEFAULT_ZONE"
    datacenter = "DefaultDataCenter"
    cluster = "default"
    group = "SEATA_GROUP"
    addressWaitTime = "3000"
  }
  file {
    name = "file.conf"
  }
}




config {
  # file, nacos, apollo, zk, consul, etcd3
  type = "file"




  nacos {
    serverAddr = "localhost"
    namespace = ""
    group = "SEATA_GROUP"
  }
  consul {
    serverAddr = "127.0.0.1:8500"
  }
  apollo {
    app.id = "seata-server"
    apollo.meta = "http://192.168.1.204:8801"
    namespace = "application"
  }
  zk {
    serverAddr = "127.0.0.1:2181"
    session.timeout = 6000
    connect.timeout = 2000
  }
  etcd3 {
    serverAddr = "http://localhost:2379"
  }
  file {
    name = "file.conf"
  }
}
 

4. Start the server

seata/bin/seata-server.sh

nohup sh seata-server.sh -h xx.xx.xx.xx -p 8091 -m db -n 1 -e test &
-h: the IP address registered to the registry
-p: the port the server's RPC listens on
-m: storage mode for global transaction session information, file or db; startup arguments take priority over the config file
-n: server node ID; with multiple servers each node must be distinct, so that transactionIds are generated in different ranges and do not collide
-e: multi-environment configuration, see http://seata.io/en-us/docs/ops/multi-configuration-isolation.html

 

 

 

2. Client-side work

 

2.1 Add a dependency to the project (choose one)

  • seata-all: requires more manual configuration
  • seata-spring-boot-starter: supports yml configuration
  • spring-cloud-alibaba-seata: integrates seata internally and implements XID propagation
  • The client version should match the server version

 

 

2.2 Create the undo_log table for the project

db_undo_log.sql
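db_undo_log.sql in the Seata distribution is the authoritative script; the sketch below is only an approximation of its shape, shown so the fields referenced later (XID, branch ID, serialized images) are concrete:

```sql
-- Approximation of undo_log (use the shipped db_undo_log.sql in practice)
CREATE TABLE undo_log (
  id            BIGINT AUTO_INCREMENT PRIMARY KEY,
  branch_id     BIGINT       NOT NULL,  -- branch transaction id
  xid           VARCHAR(100) NOT NULL,  -- global transaction id
  context       VARCHAR(128) NOT NULL,  -- serialization context
  rollback_info LONGBLOB     NOT NULL,  -- serialized before/after images
  log_status    INT          NOT NULL,
  log_created   DATETIME     NOT NULL,
  log_modified  DATETIME     NOT NULL,
  UNIQUE KEY ux_undo_log (xid, branch_id)
);
```

The table lives in each business database, not on the TC, because the rollback record must commit in the same local transaction as the business update.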

 

2.3 Add the configuration file (version 1.1.0)

  • yml file
seata:
  enabled: true
  application-id: account-api  # application identifier
  tx-service-group: chuangqi-steat # seata transaction group name
  enable-auto-data-source-proxy: true # enable automatic data source proxying
  use-jdk-proxy: false # whether to use JDK proxies
  client:
    rm:
      async-commit-buffer-limit: 1000
      report-retry-count: 5
      table-meta-check-enable: false
      report-success-enable: false
      lock:
        retry-interval: 10
        retry-times: 30
        retry-policy-branch-rollback-on-conflict: true
    tm:
      commit-retry-count: 5
      rollback-retry-count: 5
    undo:
      data-validation: true
      log-serialization: jackson
      log-table: undo_log
    log:
      exceptionRate: 100
  service:
    vgroup-mapping:
      my_test_tx_group: default
    grouplist:
      default: 127.0.0.1:8091
    #enable-degrade: false
    #disable-global-transaction: false
  transport:
    shutdown:
      wait: 3
    thread-factory:
      boss-thread-prefix: NettyBoss
      worker-thread-prefix: NettyServerNIOWorker
      server-executor-thread-prefix: NettyServerBizHandler
      share-boss-worker: false
      client-selector-thread-prefix: NettyClientSelector
      client-selector-thread-size: 1
      client-worker-thread-prefix: NettyClientWorkerThread
      worker-thread-size: default
      boss-thread-size: 1
    type: TCP
    server: NIO
    heartbeat: true
    serialization: seata
    compressor: none
    enable-client-batch-send-request: true
  config:
    type: file
    consul:
      server-addr: 127.0.0.1:8500
    apollo:
      apollo-meta: http://192.168.1.204:8801
      app-id: seata-server
      namespace: application
    etcd3:
      server-addr: http://localhost:2379
    nacos:
      namespace:
      serverAddr: localhost
      group: SEATA_GROUP
    zk:
      server-addr: 127.0.0.1:2181
      session-timeout: 6000
      connect-timeout: 2000
      username: ""
      password: ""
  registry:
    type: file
    consul:
      cluster: default
      server-addr: 127.0.0.1:8500
    etcd3:
      cluster: default
      serverAddr: http://localhost:2379
    eureka:
      application: default
      weight: 1
      service-url: http://localhost:8761/eureka
    nacos:
      cluster: default
      server-addr: localhost
      namespace:
    redis:
      server-addr: localhost:6379
      db: 0
      password:
      cluster: default
      timeout: 0
    sofa:
      server-addr: 127.0.0.1:9603
      application: default
      region: DEFAULT_ZONE
      datacenter: DefaultDataCenter
      cluster: default
      group: SEATA_GROUP
      addressWaitTime: 3000
    zk:
      cluster: default
      server-addr: 127.0.0.1:2181
      session-timeout: 6000
      connect-timeout: 2000
      username: ""
      password: ""

 

2.4 Enable the data source proxy

  • Disable Spring's automatic data source configuration
@EnableAutoConfiguration(exclude = DataSourceAutoConfiguration.class)
  • Inject the data sources
@Bean
public SqlSessionFactory sqlSessionFactory(DataSourceProxy dataSourceProxy, Configuration configuration) throws Exception {
    SqlSessionFactoryBean sqlSessionFactoryBean = new SqlSessionFactoryBean();
    // hand MyBatis the Seata proxy rather than the raw pool, so SQL is intercepted
    sqlSessionFactoryBean.setDataSource(dataSourceProxy);
    sqlSessionFactoryBean.setPlugins(new Interceptor[]{new PageInterceptor()});
    sqlSessionFactoryBean.setConfiguration(configuration);
    return sqlSessionFactoryBean.getObject();
}

@Bean
@ConfigurationProperties(prefix = "spring.datasource.hikari")
public HikariDataSource hikariDataSource() {
    return DataSourceBuilder.create().type(HikariDataSource.class).build();
}

// wrap the real pool in Seata's DataSourceProxy so that undo-log generation
// and branch registration happen automatically
@Primary
@Bean("dataSource")
public DataSourceProxy dataSource(HikariDataSource hikariDataSource) {
    return new DataSourceProxy(hikariDataSource);
}

 

2.5 Initialize the GlobalTransactionScanner

  • Manually
       @Bean
       public GlobalTransactionScanner globalTransactionScanner() {
           String applicationName = this.applicationContext.getEnvironment().getProperty("spring.application.name");
           String txServiceGroup = this.seataProperties.getTxServiceGroup();
           if (StringUtils.isEmpty(txServiceGroup)) {
               txServiceGroup = applicationName + "-fescar-service-group";
               this.seataProperties.setTxServiceGroup(txServiceGroup);
           }
           return new GlobalTransactionScanner(applicationName, txServiceGroup);
       }
  • Automatically: introduce a jar such as seata-spring-boot-starter or spring-cloud-alibaba-seata

 

 

 

2.6 Configure XID propagation according to the project architecture

  • For each RPC framework, refer to the integration modules in the source tree for manual integration
  • springCloud users can simply add spring-cloud-alibaba-seata, which implements XID propagation internally

 

 

2.7 Annotations

  • 1. @GlobalTransactional: add this global transaction annotation to the interface that initiates the distributed transaction
  • 2. @GlobalLock: use to prevent dirty reads and dirty writes on operations that should not be included in global transaction management (avoids the cost of RPC with the TC and XID propagation)
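As a sketch of how the two annotations are used. This assumes seata-spring-boot-starter (or spring-cloud-alibaba-seata) on the classpath; the service and method names are made up for illustration, so it is not runnable stand-alone:

```java
import io.seata.spring.annotation.GlobalLock;
import io.seata.spring.annotation.GlobalTransactional;
import org.springframework.stereotype.Service;

@Service
public class OrderService {

    // Begins a global transaction; the XID is propagated with outgoing RPC
    // calls, so remote branches join the same global transaction.
    @GlobalTransactional(name = "create-order", timeoutMills = 60000)
    public void createOrder(Long productId) {
        // local update + remote calls (stock service, account service) ...
    }

    // Not enrolled in a global transaction, but checks the global lock before
    // updating, so it cannot dirty-write a row held by an in-flight AT branch.
    @GlobalLock
    public void adjustStock(Long productId) {
        // purely local update ...
    }
}
```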

 

3. Notes

 

1. Server side

  • file mode is a stand-alone mode: global transaction session information is read and written in memory and persisted to the local file root.data; high performance;
  • db mode is a high-availability mode: global transaction session information is shared through the database, using the tables and properties configured above;
  • file mode can be started directly without any extra configuration.

 

2. Client side

  • When making RPC calls with restTemplate: SeataFilter and SeataRestTemplateAutoConfiguration need to be managed by Spring; remember to add their package path to component scanning

 

 

4. Introduction to each mode

 

1.AT mode

 

1.0 Introduction

  • Simple to use
  • Two-phase commit protocol

 

1.1 Prerequisites

  • A relational database that supports local ACID transactions.
  • A Java application that accesses the database via JDBC

 

1.2 Implementation

  • An evolution of the two-phase commit protocol, based on local transactions + global locks + local locks, with rollback done as reverse compensation via undo_log
  • Phase one: business data and the rollback log are committed in the same local transaction, then the local lock and connection resources are released.
  • Phase two:
    • Commit is asynchronous and completes very quickly.
    • Rollback performs reverse compensation using the phase-one rollback log.

 

 

1.3 Steps

  • Phase one first acquires the local lock, opens the local transaction, and then runs the business logic
    • If acquiring the local lock fails, keep retrying
  • Before the phase-one commit, first try to acquire the global lock for the records involved
    • While the global lock cannot be acquired, the local transaction cannot commit; keep retrying the global lock (by default every 10 ms, up to 30 times)
      • client.rm.lock.retryInterval: interval for re-checking whether the global lock is still held, default 10, unit ms
      • client.rm.lock.retryTimes: number of times to re-check the global lock, default 30
      • If the global lock is still not acquired after the retries, roll back the local transaction and release the local lock
    • Once acquired, the phase-one transaction commits
  • Phase-two execution
    • commit: executes directly
    • rollback: first re-acquire the local lock on the records, then execute the reverse operation to compensate and roll back
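The global-lock retry policy described above (retryInterval/retryTimes) can be sketched as follows; this is an illustration of the policy, not Seata's implementation, and the names are invented:

```java
import java.util.function.BooleanSupplier;

// Illustrative sketch of the phase-one global-lock retry policy: try to
// acquire, wait retryIntervalMs between attempts, give up after retryTimes
// attempts (defaults 10 ms / 30 times, so roughly 300 ms worst case).
public class GlobalLockRetry {
    static boolean acquireWithRetry(BooleanSupplier tryAcquire,
                                    int retryTimes, long retryIntervalMs) {
        for (int attempt = 0; attempt < retryTimes; attempt++) {
            if (tryAcquire.getAsBoolean()) {
                return true;               // got the global lock: commit phase one
            }
            try {
                Thread.sleep(retryIntervalMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;                      // give up: roll back the local transaction
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // the lock becomes free on the third attempt
        boolean ok = acquireWithRetry(() -> ++calls[0] >= 3, 30, 1);
        System.out.println(ok + " after " + calls[0] + " attempts");
    }
}
```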

 

 

1.4 Dirty writes and dirty reads

1. Write isolation

  • Scenario: before tx1's phase-two commit, tx2 can obtain data that tx1 has not yet committed globally; if tx2 committed its transaction, that would be a dirty write
    • Solution:
    • tx1, already committed in phase one and holding the global lock, needs to acquire the local lock in order to roll back
    • tx2, holding the local lock, needs to acquire the global lock before its phase-one commit
    • tx1 keeps retrying to acquire the local lock for its rollback, so tx2's global lock acquisition times out
    • tx2 times out -> rolls back its local transaction and releases the local lock
    • tx1 acquires the local lock -> rolls back the transaction

2. Read isolation

  • Scenario: can tx2 read data that tx1 has not yet committed globally?
    • On top of a local database isolation level of Read Committed or above, the default global isolation level of Seata (AT mode) is Read Uncommitted.
    • Global Read Committed is achieved via SELECT FOR UPDATE
    • SELECT FOR UPDATE applies for the global lock
      • If acquisition fails, it retries until the global lock is acquired
  • This costs performance
  • The @GlobalLock annotation solves dirty-read and phantom-read problems (and does not generate undo_log)

 

1.5 How it works

# the branch transaction's SQL
update product set name = 'GTS' where name = 'TXC';

Phase one

  • Parse the SQL and obtain the before image by querying with the original condition
    • select id, name, since from product where name = 'TXC'; yields the before image
  • Execute the SQL
  • Obtain the after image by querying by primary key, located via the before-image query result
    • select id, name, since from product where id = 1;
  • The before image, after image, and business SQL information are composed into a rollback record and inserted into undo_log
{
	"branchId": 641789253,
	"undoItems": [{
		"afterImage": {
			"rows": [{
				"fields": [{
					"name": "id",
					"type": 4,
					"value": 1
				}, {
					"name": "name",
					"type": 12,
					"value": "GTS"
				}, {
					"name": "since",
					"type": 12,
					"value": "2014"
				}]
			}],
			"tableName": "product"
		},
		"beforeImage": {
			"rows": [{
				"fields": [{
					"name": "id",
					"type": 4,
					"value": 1
				}, {
					"name": "name",
					"type": 12,
					"value": "TXC"
				}, {
					"name": "since",
					"type": 12,
					"value": "2014"
				}]
			}],
			"tableName": "product"
		},
		"sqlType": "UPDATE"
	}],
	"xid": "xid:xxx"
}
  • Before the local commit, register the branch with the TC and apply for the global lock on the record in table product whose primary key equals 1.
  • Commit the local transaction: the business data update and the UNDO LOG generated in the previous step are committed together.
  • Report the local transaction commit result to the TC.
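To make the image mechanics concrete, here is a simplified sketch of how a before image can be turned into a compensating statement. Real Seata stores the images as a serialized blob in undo_log and builds the rollback SQL internally; the Map-based row and string concatenation below are purely illustrative (and ignore quoting/escaping):

```java
import java.util.*;

// Illustrative reconstruction of how AT mode turns a before image into a
// compensating statement. A "row" is simplified to a column -> value map.
public class UndoSketch {
    // Generate "update <table> set col = 'v', ... where id = pk"
    // from the before image; the primary key locates the row.
    static String rollbackSql(String table, Map<String, Object> beforeImage) {
        StringBuilder set = new StringBuilder();
        for (Map.Entry<String, Object> e : beforeImage.entrySet()) {
            if (e.getKey().equals("id")) continue;   // pk goes in the WHERE clause
            if (set.length() > 0) set.append(", ");
            set.append(e.getKey()).append(" = '").append(e.getValue()).append("'");
        }
        return "update " + table + " set " + set + " where id = " + beforeImage.get("id");
    }

    public static void main(String[] args) {
        // the before image captured for the example branch SQL above
        Map<String, Object> before = new LinkedHashMap<>();
        before.put("id", 1);
        before.put("name", "TXC");
        before.put("since", "2014");
        System.out.println(rollbackSql("product", before));
        // update product set name = 'TXC', since = '2014' where id = 1
    }
}
```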

 

Phase two - rollback

  • On receiving a branch rollback request from the TC, open a local transaction and perform the following operations.
  • Find the corresponding UNDO LOG record by XID and branch ID.
  • Data validation: compare the after image in the UNDO LOG with the current data; if they differ, the data has been modified by an action outside the current global transaction, and it must be handled according to the configured policy.
  • Generate and execute the rollback statement from the business SQL information and the before image in the UNDO LOG:
update product set name = 'TXC' where id = 1;
  • Commit the local transaction, and report its result (i.e. the branch rollback result) to the TC.

 

Phase two - commit

  • On receiving a branch commit request from the TC, put the request into an asynchronous task queue and immediately return a success result to the TC.
  • The asynchronous task then deletes the UNDO LOG records asynchronously and in batches.

 

1.6 Features

  • Low retrofitting cost: a Spring Cloud project basically only needs an added configuration file and an annotation
  • An ordinary Spring Boot project additionally needs configuration to implement XID propagation
  • Provides isolation

 

2.TCC mode

 

2.0 Introduction

  • Two-phase commit protocol

 

2.1 Prerequisites

  • Each branch transaction requires
    • a prepare method (phase one, local commit)
    • phase-two commit and rollback methods
  • TCC mode does not depend on transaction support in the underlying data source
    • Phase-one prepare behaviour: call the custom prepare logic.
    • Phase-two commit behaviour: call the custom commit logic.
    • Phase-two rollback behaviour: call the custom rollback logic.
  • TCC essentially brings custom local transactions into global transaction management
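The prepare/commit/rollback contract can be modelled in plain Java. The interface and class names below are invented (Seata's real TCC API uses annotations such as @TwoPhaseBusinessAction); this only models the protocol with a stock reservation:

```java
import java.util.*;

// Toy TCC participant: phase one reserves a resource, phase two either
// confirms the reservation (commit) or releases it (rollback).
public class TccSketch {
    interface TccAction {
        boolean prepare(String xid);  // phase one: reserve resources
        void commit(String xid);      // phase two: confirm the reservation
        void rollback(String xid);    // phase two: release the reservation
    }

    static class StockAction implements TccAction {
        final Map<String, Integer> reserved = new HashMap<>();
        int stock = 10;

        public boolean prepare(String xid) {   // reserve one unit
            if (stock <= 0) return false;
            stock--;
            reserved.put(xid, 1);
            return true;
        }
        public void commit(String xid) {       // reservation becomes permanent
            reserved.remove(xid);
        }
        public void rollback(String xid) {     // undo the reservation
            Integer n = reserved.remove(xid);
            if (n != null) stock += n;
        }
    }

    public static void main(String[] args) {
        StockAction action = new StockAction();
        action.prepare("xid-1");
        action.rollback("xid-1");
        System.out.println(action.stock);  // back to 10: reservation released
    }
}
```

Because the reserve/confirm/release logic is all business code, no undo log and no global lock are needed, which is why TCC performs well but costs more to develop.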

 

2.2 Features

  • Good performance: no extra operations beyond TC coordination
  • Provides isolation
  • Requires large changes to business code; development is harder

 

 

3. Saga mode

 

3.0 Introduction

  • Seata's solution for long-running transactions
  • Each participant in the business process commits its local transaction
  • When one participant fails, the participants that have already succeeded are compensated
  • Both the phase-one forward service and the phase-two compensation service are implemented by business development

 

3.1 Prerequisites

  • Business development implements the phase-one forward service and the phase-two compensation service

 

3.2 Features

  • Suitable for long transactions
  • Participants may include services of other companies or legacy systems that cannot provide the three interfaces required by TCC mode
  • Phase one commits the local transaction; lock-free and high performance
  • Event-driven architecture; participants execute asynchronously; high throughput
  • Isolation is not guaranteed
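The compensate-in-reverse behaviour can be sketched in plain Java. The `Step`/`run` names are invented for illustration and have nothing to do with Seata's Saga state-machine engine:

```java
import java.util.*;

// Toy Saga: each step commits its local transaction immediately; when a step
// fails, the steps that already succeeded are compensated in reverse order.
public class SagaSketch {
    record Step(String name, boolean succeeds) {}

    static List<String> run(List<Step> steps) {
        List<String> log = new ArrayList<>();
        Deque<Step> done = new ArrayDeque<>();
        for (Step s : steps) {
            if (s.succeeds()) {
                log.add("do:" + s.name());
                done.push(s);              // remember for possible compensation
            } else {
                log.add("fail:" + s.name());
                while (!done.isEmpty()) {  // compensate in reverse order
                    log.add("compensate:" + done.pop().name());
                }
                break;
            }
        }
        return log;
    }

    public static void main(String[] args) {
        System.out.println(run(List.of(
            new Step("order", true), new Step("stock", true), new Step("pay", false))));
        // [do:order, do:stock, fail:pay, compensate:stock, compensate:order]
    }
}
```

Note that between "do" and "compensate" other transactions can see the intermediate state, which is exactly why Saga does not guarantee isolation.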

 

3.3 Implementation

 


Origin blog.csdn.net/q690080900/article/details/104944123