【Seata】Deployment and Integration of Seata

1. Deploy Seata’s tc-server

1. Download

First, we need to download the seata-server package from http://seata.io/zh-cn/blog/download.html

Of course, the package is also included in the pre-course materials:

2. Unzip

Unzip the package into a directory whose path contains no Chinese characters. Its directory structure is as follows:

3. Modify configuration

Modify the registry.conf file in the conf directory:

The content is as follows:

registry {
  # The registry type for the TC service; nacos is used here, but eureka, zookeeper, etc. are also supported
  type = "nacos"

  nacos {
    # The service name under which the Seata TC service registers with nacos; it can be customized
    application = "seata-tc-server"
    serverAddr = "127.0.0.1:8848"
    group = "DEFAULT_GROUP"
    namespace = ""
    cluster = "SH"
    username = "nacos"
    password = "nacos"
  }
}

config {
  # Where the TC server reads its configuration from; here the nacos configuration center,
  # so that a TC cluster can share one configuration
  type = "nacos"
  # nacos address and related settings
  nacos {
    serverAddr = "127.0.0.1:8848"
    namespace = ""
    group = "SEATA_GROUP"
    username = "nacos"
    password = "nacos"
    dataId = "seataServer.properties"
  }
}

4. Add configuration in nacos

Note: to allow the TC service cluster to share configuration, we use nacos as the unified configuration center. The server-side configuration file seataServer.properties therefore needs to be created in nacos.

In the nacos console, create a new configuration with Data ID seataServer.properties, Group SEATA_GROUP, and format Properties.

The configuration content is as follows:

# Data storage mode; db means database
store.mode=db
store.db.datasource=druid
store.db.dbType=mysql
store.db.driverClassName=com.mysql.jdbc.Driver
store.db.url=jdbc:mysql://127.0.0.1:3306/seata?useUnicode=true&rewriteBatchedStatements=true
store.db.user=root
store.db.password=123
store.db.minConn=5
store.db.maxConn=30
store.db.globalTable=global_table
store.db.branchTable=branch_table
store.db.queryLimit=100
store.db.lockTable=lock_table
store.db.maxWait=5000
# Transaction, log, and retry settings
server.recovery.committingRetryPeriod=1000
server.recovery.asynCommittingRetryPeriod=1000
server.recovery.rollbackingRetryPeriod=1000
server.recovery.timeoutRetryPeriod=1000
server.maxCommitRetryTimeout=-1
server.maxRollbackRetryTimeout=-1
server.rollbackRetryTimeoutUnlockEnable=false
server.undo.logSaveDays=7
server.undo.logDeletePeriod=86400000
# Client-server transport settings
transport.serialization=seata
transport.compressor=none
# Disable the metrics feature to improve performance
metrics.enabled=false
metrics.registryType=compact
metrics.exporterList=prometheus
metrics.exporterPrometheusPort=9898

==Change the database address, username, and password to your own database information.==

5. Create database tables

Note: when managing distributed transactions, the TC service records transaction-related data in a database, so these tables must be created in advance.

Create a new database named seata and run the SQL file provided in the pre-course materials:

These tables mainly record global transactions, branch transactions, and global lock information:

SET NAMES utf8mb4;
SET FOREIGN_KEY_CHECKS = 0;

-- ----------------------------
-- Branch transaction table
-- ----------------------------
DROP TABLE IF EXISTS `branch_table`;
CREATE TABLE `branch_table`  (
  `branch_id` bigint(20) NOT NULL,
  `xid` varchar(128) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL,
  `transaction_id` bigint(20) NULL DEFAULT NULL,
  `resource_group_id` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `resource_id` varchar(256) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `branch_type` varchar(8) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `status` tinyint(4) NULL DEFAULT NULL,
  `client_id` varchar(64) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `application_data` varchar(2000) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `gmt_create` datetime(6) NULL DEFAULT NULL,
  `gmt_modified` datetime(6) NULL DEFAULT NULL,
  PRIMARY KEY (`branch_id`) USING BTREE,
  INDEX `idx_xid`(`xid`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8 COLLATE = utf8_general_ci ROW_FORMAT = Compact;

-- ----------------------------
-- Global transaction table
-- ----------------------------
DROP TABLE IF EXISTS `global_table`;
CREATE TABLE `global_table`  (
  `xid` varchar(128) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL,
  `transaction_id` bigint(20) NULL DEFAULT NULL,
  `status` tinyint(4) NOT NULL,
  `application_id` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `transaction_service_group` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `transaction_name` varchar(128) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `timeout` int(11) NULL DEFAULT NULL,
  `begin_time` bigint(20) NULL DEFAULT NULL,
  `application_data` varchar(2000) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `gmt_create` datetime NULL DEFAULT NULL,
  `gmt_modified` datetime NULL DEFAULT NULL,
  PRIMARY KEY (`xid`) USING BTREE,
  INDEX `idx_gmt_modified_status`(`gmt_modified`, `status`) USING BTREE,
  INDEX `idx_transaction_id`(`transaction_id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8 COLLATE = utf8_general_ci ROW_FORMAT = Compact;

SET FOREIGN_KEY_CHECKS = 1;
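The seataServer.properties above also references a lock_table (store.db.lockTable), which records global lock information but is not shown in the excerpt. If the pre-course SQL file does not include it, it can be created with DDL like the following, based on the standard MySQL script shipped with the Seata 1.4.x line (verify against the script bundled with your version):

```sql
-- ----------------------------
-- Global lock table, referenced by store.db.lockTable
-- ----------------------------
DROP TABLE IF EXISTS `lock_table`;
CREATE TABLE `lock_table`  (
  `row_key` varchar(128) NOT NULL,
  `xid` varchar(96) NULL DEFAULT NULL,
  `transaction_id` bigint(20) NULL DEFAULT NULL,
  `branch_id` bigint(20) NOT NULL,
  `resource_id` varchar(256) NULL DEFAULT NULL,
  `table_name` varchar(32) NULL DEFAULT NULL,
  `pk` varchar(36) NULL DEFAULT NULL,
  `gmt_create` datetime NULL DEFAULT NULL,
  `gmt_modified` datetime NULL DEFAULT NULL,
  PRIMARY KEY (`row_key`) USING BTREE,
  INDEX `idx_branch_id`(`branch_id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8 COLLATE = utf8_general_ci;
```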

6. Start the TC service

Enter the bin directory and run seata-server.bat in it:

After a successful startup, seata-server is registered with the nacos registry.

Open a browser, visit the nacos console at http://localhost:8848 , and open the service list page. You can see the seata-tc-server information:

2. Integrating Seata into microservices

1. Introduce dependencies

First, we need to introduce seata dependency into the microservice:

<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-seata</artifactId>
    <exclusions>
        <!-- The bundled version (1.3.0) is outdated, so exclude it -->
        <exclusion>
            <artifactId>seata-spring-boot-starter</artifactId>
            <groupId>io.seata</groupId>
        </exclusion>
    </exclusions>
</dependency>
<!-- Use version 1.4.2 of the seata starter -->
<dependency>
    <groupId>io.seata</groupId>
    <artifactId>seata-spring-boot-starter</artifactId>
    <version>${seata.version}</version>
</dependency>
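The ${seata.version} placeholder must resolve to a concrete version. Given the 1.4.2 comment above, one way to do this (assuming your POM does not already define it elsewhere) is to declare it in the properties section:

```xml
<properties>
    <!-- version used by the seata-spring-boot-starter dependency above -->
    <seata.version>1.4.2</seata.version>
</properties>
```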

2. Modify the configuration file

You need to modify the application.yml file and add some configurations:

seata:
  registry: # TC service registry configuration; the microservice uses it to look up the TC service address
    # Keep this consistent with the TC service's own registry.conf
    type: nacos
    nacos: # nacos connection information
      server-addr: 127.0.0.1:8848
      namespace: ""
      group: DEFAULT_GROUP
      application: seata-tc-server # service name of the TC service in nacos
      cluster: SH
  tx-service-group: seata-demo # transaction group, used to look up the TC service's cluster name
  service:
    vgroup-mapping: # mapping between transaction group and TC service cluster
      seata-demo: SH

3. High availability and remote disaster recovery of TC services

1. Simulate a TC cluster for remote disaster recovery

Plan to start two seata tc service nodes:

Node name   IP address   Port   Cluster name
seata       127.0.0.1    8091   SH
seata2      127.0.0.1    8092   HZ

We already started one seata service earlier, on port 8091 with cluster name SH.

Now make a copy of the seata directory and name it seata2.

Modify the content of seata2/conf/registry.conf as follows:

registry {
  # The registry type for the TC service; nacos is used here, but eureka, zookeeper, etc. are also supported
  type = "nacos"

  nacos {
    # The service name under which the Seata TC service registers with nacos; it can be customized
    application = "seata-tc-server"
    serverAddr = "127.0.0.1:8848"
    group = "DEFAULT_GROUP"
    namespace = ""
    cluster = "HZ"
    username = "nacos"
    password = "nacos"
  }
}

config {
  # Where the TC server reads its configuration from; here the nacos configuration center,
  # so that a TC cluster can share one configuration
  type = "nacos"
  # nacos address and related settings
  nacos {
    serverAddr = "127.0.0.1:8848"
    namespace = ""
    group = "SEATA_GROUP"
    username = "nacos"
    password = "nacos"
    dataId = "seataServer.properties"
  }
}

Enter the seata2/bin directory and run the command:

seata-server.bat -p 8092

Open the nacos console and view the service list:

Click to view details:

2. Configure the transaction group mapping in nacos

Next, we need to configure the mapping relationship between tx-service-group and cluster to the nacos configuration center.

Create a new configuration with Data ID client.properties, Group SEATA_GROUP, and format Properties:

The configuration content is as follows:

# Transaction group mapping
service.vgroupMapping.seata-demo=SH

service.enableDegrade=false
service.disableGlobalTransaction=false
# Communication settings for talking to the TC service
transport.type=TCP
transport.server=NIO
transport.heartbeat=true
transport.enableClientBatchSendRequest=false
transport.threadFactory.bossThreadPrefix=NettyBoss
transport.threadFactory.workerThreadPrefix=NettyServerNIOWorker
transport.threadFactory.serverExecutorThreadPrefix=NettyServerBizHandler
transport.threadFactory.shareBossWorker=false
transport.threadFactory.clientSelectorThreadPrefix=NettyClientSelector
transport.threadFactory.clientSelectorThreadSize=1
transport.threadFactory.clientWorkerThreadPrefix=NettyClientWorkerThread
transport.threadFactory.bossThreadSize=1
transport.threadFactory.workerThreadSize=default
transport.shutdown.wait=3
# RM (Resource Manager) settings
client.rm.asyncCommitBufferLimit=10000
client.rm.lock.retryInterval=10
client.rm.lock.retryTimes=30
client.rm.lock.retryPolicyBranchRollbackOnConflict=true
client.rm.reportRetryCount=5
client.rm.tableMetaCheckEnable=false
client.rm.tableMetaCheckerInterval=60000
client.rm.sqlParserType=druid
client.rm.reportSuccessEnable=false
client.rm.sagaBranchRegisterEnable=false
# TM (Transaction Manager) settings
client.tm.commitRetryCount=5
client.tm.rollbackRetryCount=5
client.tm.defaultGlobalTransactionTimeout=60000
client.tm.degradeCheck=false
client.tm.degradeCheckAllowTimes=10
client.tm.degradeCheckPeriod=2000

# undo log settings
client.undo.dataValidation=true
client.undo.logSerialization=jackson
client.undo.onlyCareUpdateColumns=true
client.undo.logTable=undo_log
client.undo.compress.enable=true
client.undo.compress.type=zip
client.undo.compress.threshold=64k
client.log.exceptionRate=100

3. Microservices read the nacos configuration

Next, modify the application.yml of each microservice so that it reads the client.properties file from nacos:

seata:
  config:
    type: nacos
    nacos:
      server-addr: 127.0.0.1:8848
      username: nacos
      password: nacos
      group: SEATA_GROUP
      data-id: client.properties

Restart the microservices. Whether a microservice connects to the TC's SH cluster or its HZ cluster is now determined by client.properties in nacos.
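For example, to fail the seata-demo transaction group over from the SH cluster to the HZ cluster, only this one line in client.properties needs to be edited in nacos (whether clients pick up the change without a restart depends on the Seata version):

```properties
# remap the seata-demo transaction group to the HZ cluster
service.vgroupMapping.seata-demo=HZ
```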


Origin blog.csdn.net/weixin_45481821/article/details/133213078