SpringCloud Alibaba Seata

1. Introduction

Official website address: http://seata.io/zh-cn/

1. Concept

Seata is an open-source distributed transaction solution dedicated to providing high-performance and easy-to-use distributed transaction services in a microservice architecture.

2. Processing flow

Transaction ID (XID): a globally unique transaction ID.

Transaction Coordinator (TC): maintains the status of global and branch transactions and drives global transaction commit or rollback.

Transaction Manager (TM): defines the scope of a global transaction: starts the global transaction, and commits or rolls back the global transaction.

Resource Manager (RM): manages the resources that branch transactions work on, talks to the TC to register branch transactions and report their status, and drives branch transaction commit or rollback.

[Figure: Seata architecture (TC, TM, RM)]

  1. The TM asks the TC to start a global transaction; the TC creates it and returns a globally unique XID
  2. The XID is propagated through the context of the microservice call chain
  3. Each RM registers its branch transaction with the TC, bringing it under the global transaction identified by the XID
  4. The TM asks the TC to commit or roll back the global transaction identified by the XID
  5. The TC drives all branch transactions under that XID to commit or roll back (see the sketch below)
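In application code these roles are mostly invisible: the method annotated with @GlobalTransactional plays the TM role, each service's proxied DataSource acts as an RM, and seata-server is the TC. A minimal sketch of the TM side (class and method names are illustrative; the full example is in section 3):

import io.seata.spring.annotation.GlobalTransactional;
import org.springframework.stereotype.Service;

@Service
public class PlaceOrderFacade { // illustrative name

    // Entering the method asks the TC for a new XID (step 1); the XID travels with every
    // downstream call (step 2), where each service registers a branch transaction (step 3).
    // On return or exception, the TM asks the TC to commit or roll back the XID (steps 4-5).
    @GlobalTransactional(name = "place-order-demo", rollbackFor = Exception.class)
    public void placeOrder() {
        // call the order / storage / account services here
    }
}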

2. Installation of Seata-Server

1. Download

Download from http://seata.io/zh-cn/blog/download.html and choose the version you need (0.9.0 is used here).

2. Modify the configuration file

Modify seata/conf/file.conf

# change the group in the service block
vgroup_mapping.my_test_tx_group = "my_group"
# change the store mode to db, update the database connection, and import conf/db_store.sql into that database
mode = "db"
db {
    datasource = "dbcp"
    db-type = "mysql"
    driver-class-name = "com.mysql.jdbc.Driver"
    url = "jdbc:mysql://127.0.0.1:3306/seata"
    user = "root"
    password = "123456"
}

Modify seata/conf/registry.conf

registry {
  type = "nacos"
  nacos {
    serverAddr = "localhost:8848"
    namespace = ""
    cluster = "default"
  }
}

3. Seata application

1. Order service

Source code: seata-order-service2001

a. Configure the pom

<!--nacos-->
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
</dependency>
<!--seata-->
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-seata</artifactId>
    <exclusions>
        <exclusion>
            <artifactId>seata-all</artifactId>
            <groupId>io.seata</groupId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>io.seata</groupId>
    <artifactId>seata-all</artifactId>
    <version>0.9.0</version>
</dependency>
<!--feign-->
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-openfeign</artifactId>
</dependency>
<!--web-actuator-->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<!--mysql-druid-->
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.37</version>
</dependency>
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>druid-spring-boot-starter</artifactId>
    <version>1.1.10</version>
</dependency>
<dependency>
    <groupId>org.mybatis.spring.boot</groupId>
    <artifactId>mybatis-spring-boot-starter</artifactId>
    <version>2.0.0</version>
</dependency>


b. Configure the YAML

server:
  port: 2001

spring:
  application:
    name: seata-order-service
  cloud:
    alibaba:
      seata:
        # custom transaction group name; must match the group configured in seata-server
        tx-service-group: my_group
    nacos:
      discovery:
        server-addr: localhost:8848
  datasource:
    driver-class-name: com.mysql.jdbc.Driver
    url: jdbc:mysql://localhost:3306/seata_order
    username: root
    password: 123456

feign:
  hystrix:
    enabled: false

logging:
  level:
    io:
      seata: info

mybatis:
  mapperLocations: classpath:mapper/*.xml


c. Add file.conf (the same configuration as used by seata-server)

transport {
  # tcp udt unix-domain-socket
  type = "TCP"
  #NIO NATIVE
  server = "NIO"
  #enable heartbeat
  heartbeat = true
  #thread factory for netty
  thread-factory {
    boss-thread-prefix = "NettyBoss"
    worker-thread-prefix = "NettyServerNIOWorker"
    server-executor-thread-prefix = "NettyServerBizHandler"
    share-boss-worker = false
    client-selector-thread-prefix = "NettyClientSelector"
    client-selector-thread-size = 1
    client-worker-thread-prefix = "NettyClientWorkerThread"
    # netty boss thread size,will not be used for UDT
    boss-thread-size = 1
    #auto default pin or 8
    worker-thread-size = 8
  }
  shutdown {
    # when destroy server, wait seconds
    wait = 3
  }
  serialization = "seata"
  compressor = "none"
}

service {

  vgroup_mapping.my_group = "default"

  default.grouplist = "127.0.0.1:8091"
  enableDegrade = false
  disable = false
  max.commit.retry.timeout = "-1"
  max.rollback.retry.timeout = "-1"
  disableGlobalTransaction = false
}

client {
  async.commit.buffer.limit = 10000
  lock {
    retry.internal = 10
    retry.times = 30
  }
  report.retry.count = 5
  tm.commit.retry.count = 1
  tm.rollback.retry.count = 1
}

## transaction log store
store {
  ## store mode: file、db
  mode = "db"

  ## file store
  file {
    dir = "sessionStore"

    # branch session size , if exceeded first try compress lockkey, still exceeded throws exceptions
    max-branch-session-size = 16384
    # globe session size , if exceeded throws exceptions
    max-global-session-size = 512
    # file buffer size , if exceeded allocate new buffer
    file-write-buffer-cache-size = 16384
    # when recover batch read size
    session.reload.read_size = 100
    # async, sync
    flush-disk-mode = async
  }

  ## database store
  db {
    ## the implement of javax.sql.DataSource, such as DruidDataSource(druid)/BasicDataSource(dbcp) etc.
    datasource = "dbcp"
    ## mysql/oracle/h2/oceanbase etc.
    db-type = "mysql"
    driver-class-name = "com.mysql.jdbc.Driver"
    url = "jdbc:mysql://127.0.0.1:3306/seata"
    user = "root"
    password = "123456"
    min-conn = 1
    max-conn = 3
    global.table = "global_table"
    branch.table = "branch_table"
    lock-table = "lock_table"
    query-limit = 100
  }
}
lock {
  ## the lock store mode: local、remote
  mode = "remote"

  local {
    ## store locks in user's database
  }

  remote {
    ## store locks in the seata's server
  }
}
recovery {
  #schedule committing retry period in milliseconds
  committing-retry-period = 1000
  #schedule asyn committing retry period in milliseconds
  asyn-committing-retry-period = 1000
  #schedule rollbacking retry period in milliseconds
  rollbacking-retry-period = 1000
  #schedule timeout retry period in milliseconds
  timeout-retry-period = 1000
}

transaction {
  undo.data.validation = true
  undo.log.serialization = "jackson"
  undo.log.save.days = 7
  #schedule delete expired undo_log in milliseconds
  undo.log.delete.period = 86400000
  undo.log.table = "undo_log"
}

## metrics settings
metrics {
  enabled = false
  registry-type = "compact"
  # multi exporters use comma divided
  exporter-list = "prometheus"
  exporter-prometheus-port = 9898
}

support {
  ## spring
  spring {
    # auto proxy the DataSource bean
    datasource.autoproxy = false
  }
}

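One practical consequence of the datasource.autoproxy = false setting at the end of this file: the business DataSource is not proxied automatically, so it is usually wrapped in Seata's DataSourceProxy by hand so that AT-mode undo logs and branch registration work. The original post does not show this class; the following is only a sketch of the usual pattern (class and bean names are illustrative):

import javax.sql.DataSource;

import com.alibaba.druid.pool.DruidDataSource;
import io.seata.rm.datasource.DataSourceProxy;
import org.apache.ibatis.session.SqlSessionFactory;
import org.mybatis.spring.SqlSessionFactoryBean;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.support.PathMatchingResourcePatternResolver;

@Configuration
public class DataSourceProxyConfig { // illustrative class name

    @Value("${mybatis.mapperLocations}")
    private String mapperLocations;

    // bind the spring.datasource.* properties onto a Druid data source
    @Bean
    @ConfigurationProperties(prefix = "spring.datasource")
    public DataSource druidDataSource() {
        return new DruidDataSource();
    }

    // wrap the real data source in Seata's proxy
    @Bean
    public DataSourceProxy dataSourceProxy(DataSource druidDataSource) {
        return new DataSourceProxy(druidDataSource);
    }

    // make MyBatis use the proxied data source
    @Bean
    public SqlSessionFactory sqlSessionFactory(DataSourceProxy dataSourceProxy) throws Exception {
        SqlSessionFactoryBean bean = new SqlSessionFactoryBean();
        bean.setDataSource(dataSourceProxy);
        bean.setMapperLocations(new PathMatchingResourcePatternResolver().getResources(mapperLocations));
        return bean.getObject();
    }
}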

d. Add registry.conf (the same configuration as used by seata-server)

registry {
  # file 、nacos 、eureka、redis、zk、consul、etcd3、sofa
  type = "nacos"

  nacos {
    serverAddr = "localhost:8848"
    namespace = ""
    cluster = "default"
  }
  eureka {
    serviceUrl = "http://localhost:8761/eureka"
    application = "default"
    weight = "1"
  }
  redis {
    serverAddr = "localhost:6379"
    db = "0"
  }
  zk {
    cluster = "default"
    serverAddr = "127.0.0.1:2181"
    session.timeout = 6000
    connect.timeout = 2000
  }
  consul {
    cluster = "default"
    serverAddr = "127.0.0.1:8500"
  }
  etcd3 {
    cluster = "default"
    serverAddr = "http://localhost:2379"
  }
  sofa {
    serverAddr = "127.0.0.1:9603"
    application = "default"
    region = "DEFAULT_ZONE"
    datacenter = "DefaultDataCenter"
    cluster = "default"
    group = "SEATA_GROUP"
    addressWaitTime = "3000"
  }
  file {
    name = "file.conf"
  }
}

config {
  # file、nacos 、apollo、zk、consul、etcd3
  type = "file"

  nacos {
    serverAddr = "localhost"
    namespace = ""
  }
  consul {
    serverAddr = "127.0.0.1:8500"
  }
  apollo {
    app.id = "seata-server"
    apollo.meta = "http://192.168.1.204:8801"
  }
  zk {
    serverAddr = "127.0.0.1:2181"
    session.timeout = 6000
    connect.timeout = 2000
  }
  etcd3 {
    serverAddr = "http://localhost:2379"
  }
  file {
    name = "file.conf"
  }
}


e. Feign client (the account service is taken as an example here)

@FeignClient(value = "seata-account-service")
public interface AccountService {

    @RequestMapping("/account/decrease")
    public CommonResult decrease(@RequestParam("userId") Long userId, @RequestParam("money") BigDecimal money);

}

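The order service also calls the inventory service through Feign. By analogy with the account client above, a sketch of that interface, matching the storageService.decrease(productId, count) call used in the transaction service below (the service name, path, and parameter types are assumptions), might look like this:

import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;

@FeignClient(value = "seata-storage-service")
public interface StorageService {

    // decrease the stock of a product; mirrors the account client above
    @RequestMapping("/storage/decrease")
    CommonResult decrease(@RequestParam("productId") Long productId, @RequestParam("count") Integer count);

}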

f. Transaction service

@Slf4j
@Service
public class OrderServiceImpl implements OrderService {

    @Autowired
    OrderDao orderDao;
    @Autowired
    AccountService accountService;
    @Autowired
    StorageService storageService;

    @Override
    @GlobalTransactional(name = "my-order-test", rollbackFor = Exception.class) // the annotation enables the global transaction; name just needs to be unique
    public Long create(Order order) {
        log.info("========================= create order: start");
        orderDao.create(order);
        log.info("========================= create order: done");

        log.info("========================= decrease storage: start");
        storageService.decrease(order.getProductId(), order.getCount());
        log.info("========================= decrease storage: done");

        log.info("========================= deduct account points: start");
        accountService.decrease(order.getUserId(), order.getMoney());
        log.info("========================= deduct account points: done");

        log.info("========================= update order status: start");
        orderDao.update(order.getId(), 1);
        log.info("========================= update order status: done");

        return order.getId();
    }
}

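The OrderDao used above is a plain MyBatis mapper whose definition is not shown in the post. A sketch consistent with the two calls made in OrderServiceImpl (the status values are an assumption; the SQL would live in the mapper XML referenced by mybatis.mapperLocations):

import org.apache.ibatis.annotations.Mapper;
import org.apache.ibatis.annotations.Param;

@Mapper
public interface OrderDao {

    // insert a new order record (e.g. status 0 = created)
    void create(Order order);

    // change the order status, e.g. to 1 = finished, as called in OrderServiceImpl
    void update(@Param("id") Long id, @Param("status") Integer status);
}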

g. Start class

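The start class is collapsed in the original post; it is the usual Spring Boot bootstrap class. A minimal sketch, assuming the class name and assuming the default DataSource auto-configuration is excluded so the manually proxied data source (see the DataSourceProxy sketch above) is the one actually used:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.openfeign.EnableFeignClients;

// exclude DataSource auto-configuration because the data source is created and proxied manually
@SpringBootApplication(exclude = DataSourceAutoConfiguration.class)
@EnableDiscoveryClient
@EnableFeignClients
public class SeataOrderMainApp2001 { // illustrative class name

    public static void main(String[] args) {
        SpringApplication.run(SeataOrderMainApp2001.class, args);
    }
}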

2. Inventory service

Source code: seata-storage-service2002

The configuration steps a, b, c, d, and g are the same as in the order service.

3. Account service

Source code: seata-account-service2003

Same configuration steps as the inventory service.

4. Principle analysis of Seata

Reference document: http://seata.io/zh-cn/docs/overview/what-is-seata.html

1. AT mode

Phase one

1. Parse the SQL semantics, find the business data that the "business SQL" will update, and save that data as a "before image" before it is updated.
2. Execute the "business SQL" to update the business data.
3. After the update, save the data again as an "after image" and finally generate the row lock.
All of the above happens inside one local database transaction, which guarantees the atomicity of the phase-one operations. For example, for a statement like "update storage set count = count - 1 where id = 1", Seata selects the affected row before and after the update and writes both images into the undo_log table of the business database within the same local transaction.

[Figure: AT mode, phase one]

Phase two: commit

Because "Business SQL" in a phase has been submitted to the database, so seata frame just to save a snapshot of a stage and row lock delete data , complete the data clean-up can be.

[Figure: AT mode, phase two commit]

Phase two: rollback

If phase two is a rollback, Seata needs to roll back the "business SQL" that was executed in phase one in order to restore the business data.

The rollback restores the business data using the "before image"; but before restoring, Seata first checks for dirty writes by comparing the current business data in the database with the "after image".

If the two are identical, there is no dirty write and the data can be restored. If they differ, a dirty write has occurred and manual intervention is required.

[Figure: AT mode, phase two rollback]


Source: https://blog.csdn.net/doubututou/article/details/109289551