Seckill Project: Gateway Rate Limiting, Circuit Breaking, Degradation, and Distributed Transactions

1. Gateway rate limiting, circuit breaking, and degradation

  1. Start the sentinel-dashboard console and the Nacos registry service

Start the sentinel-dashboard console: open a command prompt (cmd) in the folder that contains the Sentinel jar, and run:

java -jar sentinel-dashboard-1.8.4.jar
  2. Add the Sentinel dependency to the gateway service
<!-- sentinel -->
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-sentinel</artifactId>
</dependency>
  3. Configure Sentinel in the gateway service's application.yml
spring:
  application:
    name: zmall-gateway
  cloud:
    nacos:
      discovery:
        server-addr: localhost:8848
    sentinel:
      transport:
        port: 9998 # port for talking to the dashboard; any unused port will do
        dashboard: localhost:8080 # address of the dashboard service
      eager: true # establish the connection to Sentinel as soon as the service starts
      web-context-unify: false # disable URL PATH aggregation
  4. Access the commodity service through its domain name, then log in to the Sentinel console to configure flow control and other rules.
    Open a browser and enter the address: http://zmall.com/index.html

Enter the Sentinel console, open the cluster point link view, and enter the keyword index in the search box.

At this point you will find that flow control can only be applied to individual resource links inside a service, not to the service as a whole. To solve this, Alibaba provides a gateway-level rate-limiting adapter.

  5. Add the gateway adapter dependency to the gateway service module's pom.xml
<!-- sentinel gateway -->
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-alibaba-sentinel-gateway</artifactId>
</dependency>
  6. Refresh the commodity service again, then open the Sentinel console and check the cluster point links.
    Now you can apply gateway rate limiting and related operations to the microservice itself. Click flow control and set QPS=1, flow-control mode=direct (default), and flow-control effect=fast fail (default), then rapidly refresh the product service address to observe the effect. The flow-control effect can also be set to queue waiting: when traffic spikes, requests are queued and digested slowly, smoothing out the peak.
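The fast-fail effect described above can be sketched as a fixed-window request counter. This is an illustrative simplification only, not Sentinel's actual implementation (which uses sliding windows), but the observable behavior at QPS=1 is similar:

```java
// Illustrative sketch only -- a fixed-window counter with the "fast fail"
// flow-control effect; not Sentinel's real algorithm.
public class FastFailLimiter {
    private final int qpsLimit;
    private int count = 0;
    private long windowStart = System.currentTimeMillis();

    public FastFailLimiter(int qpsLimit) { this.qpsLimit = qpsLimit; }

    public synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        if (now - windowStart >= 1000) {  // a new one-second window begins
            windowStart = now;
            count = 0;
        }
        // fast fail: reject as soon as this window's quota is exhausted
        return ++count <= qpsLimit;
    }

    public static void main(String[] args) {
        FastFailLimiter limiter = new FastFailLimiter(1); // QPS = 1
        System.out.println(limiter.tryAcquire()); // first request in the window passes
        System.out.println(limiter.tryAcquire()); // second request is rejected
    }
}
```

The queue-waiting effect would instead delay the second request until the next window rather than rejecting it.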

  7. For example, if someone maliciously attacks the website, add a flow-control rule.

When someone maliciously refreshes the site, the gateway will automatically reject the requests.

2. Seata – Distributed Transactions

2.1 Distributed transaction basis

2.1.1 Transactions

A transaction is a unit of work in which all operations must end in a consistent state: either every operation succeeds, or every operation is cancelled. Simply put, a transaction provides an "all or nothing" mechanism.

2.1.2 Local Transactions

Local transactions are the transaction mechanism provided by a single database. Database transactions have four well-known characteristics:

  • A: Atomicity, all operations in a transaction are either completed or not completed
  • C: Consistency, the database must be in a consistent state before and after a transaction is executed
  • I: Isolation, in a concurrent environment, when different transactions operate on the same data at the same time, the transactions do not affect each other
  • D: Durability, which means that as long as the transaction ends successfully, the updates it makes to the database must be permanently saved

When a database implements a transaction, all operations involved are grouped into an indivisible execution unit. The operations in this unit either all succeed or all fail: if any one operation fails, the entire transaction is rolled back.

2.1.3 Distributed transactions

A distributed transaction is one whose participants, transaction-supporting servers, resource servers, and transaction manager are located on different nodes of a distributed system.
Simply put, one large operation is composed of several small operations that run on different servers and belong to different applications. A distributed transaction must ensure that these small operations either all succeed or all fail.
In essence, distributed transactions exist to guarantee data consistency across different databases.

2.1.4 Distributed transaction scenario

  • A single service accessing multiple databases:
    one service calls several database instances to complete insert, delete, and update operations

  • Multiple microservices accessing the same database:
    several services call one database instance to complete insert, delete, and update operations

  • Multiple microservices accessing multiple databases:
    several services call several database instances to complete insert, delete, and update operations

2.2 Distributed transaction solution

2.2.1 Global Transactions

Global transactions are implemented on the DTP model, a distributed transaction model proposed by the X/Open organization (the X/Open Distributed Transaction Processing Reference Model). It specifies three roles needed to implement distributed transactions:

  • AP: Application application system (microservice)
  • TM: Transaction Manager Transaction Manager (Global Transaction Management)
  • RM: Resource Manager Resource Manager (database)

The whole transaction is divided into two phases:

  • Phase 1: the voting phase. All participants pre-commit their part of the transaction and report success or failure back to the coordinator.
  • Phase 2: the execution phase. Based on the feedback from all participants, the coordinator notifies every participant to commit or roll back in unison.
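The two phases above can be sketched as a toy coordinator loop. The interface and class names here are illustrative, not a real API:

```java
import java.util.List;

// Toy model of two-phase commit: the coordinator collects every
// participant's vote, then tells all of them to commit only if every
// vote was yes; a single "no" rolls everything back.
public class TwoPhaseCommit {
    interface Participant {
        boolean prepare();   // phase 1: pre-execute and vote
        void commit();       // phase 2
        void rollback();     // phase 2
    }

    static class Vote implements Participant {
        final boolean vote;
        String outcome = "pending";
        Vote(boolean vote) { this.vote = vote; }
        public boolean prepare() { return vote; }
        public void commit()   { outcome = "committed"; }
        public void rollback() { outcome = "rolled back"; }
    }

    static boolean run(List<? extends Participant> participants) {
        // phase 1: voting
        boolean allYes = participants.stream().allMatch(Participant::prepare);
        // phase 2: execution, in unison
        for (Participant p : participants) {
            if (allYes) p.commit(); else p.rollback();
        }
        return allYes;
    }

    public static void main(String[] args) {
        Vote a = new Vote(true), b = new Vote(false);
        run(List.of(a, b));            // b votes no, so both roll back
        System.out.println(a.outcome); // rolled back
    }
}
```

Note how the "single point of problem" below falls out of this shape: if the coordinator dies between the two phases, participants are stuck holding their pre-committed state.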


Advantages

  • Improves the likelihood of data consistency at a low implementation cost

Disadvantages

  • Single point of failure: the transaction coordinator can go down

  • Synchronous blocking: commit is delayed and resources stay locked longer

  • Data inconsistency: if phase 2 partially fails, the final commit outcome is unknown and some participants may have committed while others have not

Reliable messaging service

The solution based on a reliable messaging service guarantees consistency between upstream and downstream data operations through message middleware. Suppose there are two systems, A and B, which handle task A and task B respectively, and a business process needs to handle both tasks in the same transaction. Message middleware can be used to implement this distributed transaction.

Step 1: system A delivers the message to the middleware

  1. Before system A processes task A, it first sends a message to the message middleware
  2. The middleware persists the message on receipt but does not deliver it; once persistence succeeds, it replies to A with an acknowledgement
  3. After receiving the acknowledgement, system A starts processing task A
  4. When task A is done, system A sends a Commit or Rollback request to the middleware; once that request is sent, the transaction is over as far as system A is concerned
  5. If the middleware receives Commit, it delivers the message to system B; if it receives Rollback, it discards the message. If it receives neither, it must fall back on the "timeout query mechanism".

Timeout query mechanism
Besides its normal business flow, system A must also expose a transaction-status query interface for the middleware to call. When the middleware receives a message, it starts a timer; if no confirmation instruction arrives before the timeout, it actively calls system A's query interface to ask for the current status. The interface returns one of three results, and the middleware reacts accordingly:

  • Commit: deliver the message to system B
  • Rollback: discard the message
  • Processing: keep waiting
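The middleware's three reactions can be sketched as a simple decision function. The enum and method names are illustrative, not a real middleware API:

```java
// Sketch of how the middleware reacts to the three possible answers from
// system A's transaction-status query interface (illustrative names only).
public class TimeoutQueryDemo {
    enum TxStatus { COMMIT, ROLLBACK, PROCESSING }

    // decide what to do with the persisted message after the query answers
    static String react(TxStatus statusFromSystemA) {
        switch (statusFromSystemA) {
            case COMMIT:   return "deliver message to system B";
            case ROLLBACK: return "discard message";
            default:       return "keep waiting";   // PROCESSING
        }
    }

    public static void main(String[] args) {
        System.out.println(react(TxStatus.COMMIT));     // deliver message to system B
        System.out.println(react(TxStatus.PROCESSING)); // keep waiting
    }
}
```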

Step 2: The message is delivered by the middleware to system B.
After the message middleware delivers the message to the downstream system, it enters the blocking waiting state. The downstream system immediately processes the task, and
returns a response to the message middleware after the task processing is completed.

  • If the message middleware receives the confirmation reply, it considers that the transaction is completed
  • If the message middleware waits for the confirmation response to time out, it will re-deliver until the downstream consumer returns a consumption success response.

Message middleware can usually be configured with a retry count and interval. If the message still cannot be delivered, manual intervention is required. Manual intervention is chosen here over having system A roll back mainly to keep the overall system design simple.
For distributed transactions based on a reliable messaging service, the first half is asynchronous and optimizes for performance; the second half is synchronous and optimizes for development cost.

2.2.2 Best-Effort Notification

Best-effort notification, also known as periodic proofreading, is a further optimization of the previous solution. It introduces a local message table to record failed messages, plus a periodic proofreading job for those failures, to further guarantee that messages are eventually consumed by the downstream system.

Step 1: system A delivers the message to the middleware

  1. In the same transaction that processes the business, write a record into the local message table
  2. A dedicated message sender continuously pushes messages from the local message table to the middleware, retrying on failure

Step 2: the middleware delivers the message to system B

  1. On receiving a message, the middleware synchronously delivers it to the corresponding downstream system, triggering that system's task
  2. When the downstream system succeeds, it returns an acknowledgement to the middleware, which can then delete the message, completing the transaction
  3. Messages that fail to deliver are retried; those that exhaust their retries are written to an error message table
  4. The middleware exposes a query interface for failed messages; the downstream system periodically queries and consumes them

The pros and cons of this approach:

  • Advantages: a classic implementation that achieves eventual consistency.
  • Disadvantages: the message table is coupled to the business system; without a packaged solution there is a lot of plumbing to deal with.
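The dedicated sender and error table can be sketched as follows. All names here are made up for illustration; a real implementation would read from and write to actual database tables:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.function.Predicate;

// Illustrative sketch of the dedicated sender plus error table: each
// message is retried a bounded number of times, and the ones that still
// fail land in the error table for the periodic proofreading job.
public class LocalMessageSender {
    final List<String> errorTable = new ArrayList<>();

    void drain(Deque<String> localMessageTable, Predicate<String> send, int maxRetries) {
        while (!localMessageTable.isEmpty()) {
            String msg = localMessageTable.poll();
            boolean delivered = false;
            for (int attempt = 0; attempt < maxRetries && !delivered; attempt++) {
                delivered = send.test(msg);            // retry if sending fails
            }
            if (!delivered) errorTable.add(msg);       // hand off to proofreading
        }
    }

    public static void main(String[] args) {
        LocalMessageSender sender = new LocalMessageSender();
        Deque<String> table = new ArrayDeque<>(List.of("order-created", "stock-deducted"));
        // pretend the broker only accepts "order-created"
        sender.drain(table, msg -> msg.equals("order-created"), 3);
        System.out.println(sender.errorTable); // [stock-deducted]
    }
}
```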

2.2.3 TCC Transactions

TCC stands for Try-Confirm-Cancel, a compensating style of distributed transaction. TCC implements a distributed transaction in three steps:

  • Try: attempt the business operation
    This phase does not execute the business itself; it completes all consistency checks and reserves all resources required for execution
  • Confirm: confirm the business operation
    Execute the business operation without further checks, using only the resources reserved in the Try phase. TCC usually assumes the Confirm phase cannot fail: if Try succeeds, Confirm must succeed. If Confirm does fail, a retry mechanism or manual intervention is required.
  • Cancel: cancel the business operation
    Release the resources reserved in the Try phase. TCC likewise assumes the Cancel phase must succeed; if it fails, a retry mechanism or manual intervention is required.
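Applied to a stock-deduction scenario, the three steps can be sketched as a toy model (this is not Seata's TCC API; all names are illustrative):

```java
// Toy model of Try/Confirm/Cancel for stock deduction: Try only checks
// and reserves, Confirm consumes the reservation, Cancel releases it so
// the physical stock is untouched.
public class TccStock {
    int stock;      // physical stock
    int reserved;   // stock frozen by Try but not yet confirmed

    TccStock(int stock) { this.stock = stock; }

    boolean tryReserve(int n) {          // Try: consistency check + reserve resources
        if (stock - reserved < n) return false;
        reserved += n;
        return true;
    }

    void confirm(int n) {                // Confirm: uses only what Try reserved
        reserved -= n;
        stock -= n;
    }

    void cancel(int n) {                 // Cancel: release the Try-phase reservation
        reserved -= n;
    }

    public static void main(String[] args) {
        TccStock s = new TccStock(10);
        if (s.tryReserve(4)) s.confirm(4);   // happy path: stock becomes 6
        TccStock t = new TccStock(3);
        if (t.tryReserve(2)) t.cancel(2);    // rollback path: stock stays 3
        System.out.println(s.stock + " " + t.stock); // 6 3
    }
}
```

Because Try reserves rather than locks, other transactions can still reserve the remaining stock concurrently, which is why TCC does not hold resource locks the way XA does.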


The difference between TCC two-phase commit and XA two-phase commit:
XA is a resource-level distributed transaction with strong consistency; resource locks are held for the entire two-phase commit.
TCC is a business-level distributed transaction with eventual consistency; it does not hold resource locks throughout.
Pros and cons of TCC transactions:

  • Advantages: the two-phase commit is lifted from the database layer to the application layer, avoiding the poor performance of database-layer 2PC.

  • Disadvantages: the Try, Confirm, and Cancel operations must be provided by the business code, so development cost is high.

3. Introduction of Seata

In January 2019, Alibaba's middleware team launched the open-source project Fescar (Fast & EaSy Commit And Rollback), whose vision is to make distributed transactions as simple and efficient to use as local transactions, gradually solving all the difficulties developers encounter with distributed transactions. It was later renamed Seata (Simple Extensible Autonomous Transaction Architecture), a complete distributed transaction solution.
Seata's design goal is to be non-intrusive to business code, so it starts from the non-intrusive 2PC scheme and evolves on top of traditional 2PC. It models a distributed transaction as a global transaction containing several branch transactions. The global transaction's responsibility is to coordinate its branch transactions to reach agreement: either all commit successfully together, or all roll back on failure. Typically, each branch transaction is itself a local transaction of a relational database.

2PC is the two-phase commit protocol, which splits the transaction into a prepare phase and a commit phase: 2 refers to the two phases, P to prepare, and C to commit.


Seata consists of three main components:

  • TC (Transaction Coordinator): maintains the status of global and branch transactions and drives global commit or rollback.
  • TM (Transaction Manager): begins, commits, or rolls back a global transaction.
  • RM (Resource Manager): manages the resources of branch transactions, registers branch transactions with the TC, reports their status, and carries out the TC's instructions to commit or roll back branch transactions.


Use case description:

Business logic for users to purchase goods:

  • Storage service: deduct the storage quantity for a given commodity.

  • Order service: Create orders based on procurement requirements.

  • Account service: deduct the balance from the user account.

  • Business service: When creating an order, it is necessary to complete the deduction of the product inventory and the deduction of the user account balance.

The execution flow of Seata is as follows:

  1. Business service TM applies to open a global transaction to TC, and TC will create a global transaction and return a unique XID

  2. The RM of the Storage service registers the branch transaction with the TC, and it is included in the jurisdiction of the global transaction corresponding to the XID

  3. The Storage service executes branch transactions, performs operations on the database, and deducts the storage quantity for a given product

  4. The RM of the Order service registers its branch transaction with the TC, bringing it under the global transaction identified by the XID

  5. The Order service executes its branch transaction, operating on the database to create the order

  6. The Order service starts to call the Account service remotely. At this time, the XID will be propagated on the microservice call chain

  7. The RM of the Account service registers the branch transaction with the TC and brings it into the jurisdiction of the global transaction corresponding to the XID

  8. The Account service executes branch transactions, operates on the database, and deducts the balance from the user account

  9. After the whole global transaction call chain has run, the TM asks the TC to commit or roll back the global transaction, depending on whether an exception occurred

  10. The TC coordinates all branch transactions under its jurisdiction and decides whether to commit or roll back

Differences between Seata's 2PC and traditional 2PC:

  1. Architecturally, the RM in traditional 2PC sits at the database layer: the RM is essentially the database itself, implemented through the XA protocol. Seata's RM is deployed on the application side as middleware, in the form of a jar package.
  2. In the two-phase commit itself, traditional 2PC holds locks on transactional resources until Phase 2 completes, whether the Phase 2 resolution is commit or rollback. Seata instead commits the local transaction in Phase 1, which avoids holding locks through Phase 2 and improves overall efficiency.

4. Seata implements distributed transaction control

This example uses the Seata middleware to implement a distributed transaction, simulating the e-commerce flow of placing an order and deducting inventory: the order microservice performs the ordering operation, then calls the product microservice to deduct stock.


4.1 Basic code of the case (abnormal simulation)

4.1.1 Modify product microservice

IProductService interface

public interface IProductService extends IService<Product> {
    void updateStock(Integer pid,Integer num);
}

ProductServiceImpl

@Service
public class ProductServiceImpl extends ServiceImpl<ProductMapper, Product> implements IProductService {

    @Transactional
    @Override
    public void updateStock(Integer pid, Integer num) {
        // look up the product by its pid
        Product product = this.getById(pid);
        // check that the stock covers the purchased quantity
        if(product.getStock() >= num){
            product.setStock(product.getStock() - num);
            // update the product's stock by pid
            this.updateById(product);
        }else{
            throw new RuntimeException("Insufficient stock");
        }
    }
}

ProductController

@Controller
public class ProductController {

    @Autowired
    private IProductService productService;

    @RequestMapping("/updateStock/{pid}/{num}")
    @ResponseBody
    public void updateStock(@PathVariable("pid") Integer pid,
                            @PathVariable("num") Integer num){
        productService.updateStock(pid,num);
    }
}

4.1.2 Modify the order microservice

Add @EnableFeignClients to the order microservice startup class

ApiProductService

@FeignClient("zmall-product")
public interface ApiProductService {

    @RequestMapping("/updateStock/{pid}/{num}")
    void updateStock(@PathVariable("pid") Integer pid,
                     @PathVariable("num") Integer num);
}

IOrderService

public interface IOrderService extends IService<Order> {
    Order createOrder(Integer pid,Integer num);
}

OrderServiceImpl

@Service
public class OrderServiceImpl extends ServiceImpl<OrderMapper, Order> implements IOrderService {

    @Autowired
    private ApiProductService productService;

    @Transactional
    @Override
    public Order createOrder(Integer pid, Integer num) {
        // deduct the product's stock by product ID
        productService.updateStock(pid,num);
        // create the order
        Order order=new Order();
        // this is only a simulation
        this.save(order);
        return order;
    }
}

OrderController

@Controller
public class OrderController {

    @Autowired
    private IOrderService orderService;

    @RequestMapping("/createOrder/{pid}/{num}")
    @ResponseBody
    public Order createOrder(@PathVariable("pid") Integer pid,
                             @PathVariable("num") Integer num){
        return orderService.createOrder(pid,num);
    }
}

4.1.3 Exception Simulation

Simulate an exception in OrderServiceImpl's code

@Transactional
@Override
public Order createOrder(Integer pid, Integer num) {
    // deduct the product's stock by product ID
    productService.updateStock(pid,num);
    // simulate an exception
    int i = 1 / 0;
    // create the order
    Order order=new Order();
    // this is only a simulation
    this.save(order);
    return order;
}

zmall-order startup class

package com.zking.zmall;

import org.mybatis.spring.annotation.MapperScan;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.openfeign.EnableFeignClients;

@EnableFeignClients
@EnableDiscoveryClient
@MapperScan({"com.zking.zmall.mapper"})
@SpringBootApplication
public class ZmallOrderApplication {

    public static void main(String[] args) {
        SpringApplication.run(ZmallOrderApplication.class, args);
    }

}

Test address

http://order.zmall.com/createOrder/733/1

Test results

You will find the data is now inconsistent: the remote call deducted the stock, but the exception rolled back only the local order transaction, so no order was created.

4.2 Start Seata

4.2.1 Download Seata

Download address: download Seata 1.4.0 (both the server binary and the source package) from the official Seata release page

4.2.2 Modify configuration file and initialize

  • seata-server-1.4.0.zip

Unzip the downloaded seata-server-1.4.0.zip (the binary package, not the source package) and enter the conf directory.

Create a seata database locally.

Modify file.conf:

1. Change store.mode from "file" to "db"
2. Set the database name, user account, and password

Create the seata database and initialize its tables:
unzip the seata-1.4.0 source package, enter the seata-1.4.0\script\server\db directory, and run the mysql.sql script to initialize the Seata server database.

Modify registry.conf (in the binary package):

registry {
    type = "nacos"                     # use nacos as the registry; set type to nacos
    nacos {
        application = "seata-server"   # name under which the server registers
        serverAddr = "127.0.0.1:8848"  # nacos registry address and port
        group = "SEATA_GROUP"          # service registration group
        namespace = ""                 # registration namespace; empty defaults to public
        cluster = "default"            # the default is fine
        username = "nacos"             # nacos login account
        password = "nacos"             # nacos login password
    }
}
config {
    type = "nacos"
    nacos {
        serverAddr = "127.0.0.1:8848"
        namespace = ""
        group = "SEATA_GROUP"
        username = "nacos"
        password = "nacos"
    }
}


registry: specifies the registry where seata-server registers itself
config: specifies the configuration center

Configure seata-1.4.0.zip (the source package):
modify the config.txt file in seata-1.4.0's script/config-center directory:

store.mode=db    # change the storage mode to db
...
store.db.url=jdbc:mysql://127.0.0.1:3306/seata?useUnicode=true   # set the database name
store.db.user=root      # set the database account
store.db.password=1234  # set the database password


Initialize the Seata configuration into Nacos:
right-click in the seata-1.4.0\script\config-center\nacos directory, choose "Git Bash Here" to open a git command window, and enter the following command:

`sh nacos-config.sh -h localhost -p 8848 -g SEATA_GROUP -u nacos -w nacos`

insert image description here

After the script succeeds, open the Nacos console; in the configuration list you will see many entries initialized under the SEATA_GROUP group.
Parameter description:

  -h  IP address of the Nacos registry, default localhost
  -p  Nacos registry port, default 8848
  -g  Nacos configuration-center group, default SEATA_GROUP
  -t  Nacos configuration-center namespace name, default '' (the public default namespace)
  -u  Nacos registry account
  -w  Nacos registry password

Appendix: seata_gc.log

Java HotSpot(TM) 64-Bit Server VM (25.144-b01) for windows-amd64 JRE (1.8.0_144-b01), built on Jul 21 2017 21:57:33 by "java_re" with MS VC++ 10.0 (VS2010)
Memory: 4k page, physical 16467308k(7485656k free), swap 32932716k(12657228k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=75 -XX:+CMSParallelRemarkEnabled -XX:+DisableExplicitGC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=E:\temp\T280\20230205\01\seata\bin\\../logs/java_heapdump.hprof -XX:InitialHeapSize=2147483648 -XX:MaxDirectMemorySize=1073741824 -XX:MaxHeapSize=2147483648 -XX:MaxMetaspaceSize=268435456 -XX:MaxNewSize=1073741824 -XX:MetaspaceSize=134217728 -XX:NewSize=1073741824 -XX:-OmitStackTraceInFastThrow -XX:+PrintGC -XX:+PrintGCTimeStamps -XX:SurvivorRatio=10 -XX:ThreadStackSize=512 -XX:-UseAdaptiveSizePolicy -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:-UseLargePagesIndividualAllocation -XX:+UseParallelGC 

Solution:
My MySQL is version 8.0; this error does not occur with version 5.7.

The MySQL driver bundled with Seata, mysql-connector-java-5.1.35.jar, is too old, so the service can never start.

Replace it with mysql-connector-java-5.1.44.jar and it will run.

4.3 Start the seata service

Go to the seata\bin directory of the Seata server and double-click seata-server.bat, or run it from the command line:

cd bin
seata-server.bat -p 9000 -m file
seata-server.bat -h <IP address> -p 9000 -m file


4.4 Using Seata to implement transaction control

4.4.1 Initialize the data table

Enter the source package's seata-1.4.0\script\client\at\db directory and run the mysql.sql script to create the undo_log table, which Seata uses to record rollback logs for each branch transaction.

CREATE TABLE `undo_log`
(
`id` BIGINT(20) NOT NULL AUTO_INCREMENT,
`branch_id` BIGINT(20) NOT NULL,
`xid` VARCHAR(100) NOT NULL,
`context` VARCHAR(128) NOT NULL,
`rollback_info` LONGBLOB NOT NULL,
`log_status` INT(11) NOT NULL,
`log_created` DATETIME NOT NULL,
`log_modified` DATETIME NOT NULL,
`ext` VARCHAR(100) DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `ux_undo_log` (`xid`, `branch_id`)
) ENGINE = INNODB
AUTO_INCREMENT = 1
DEFAULT CHARSET = utf8;

4.4.2 Add configuration

  • Add dependencies

Add the following dependencies to both the zmall-order and zmall-product modules:

<dependency>
	<groupId>com.alibaba.cloud</groupId>
	<artifactId>spring-cloud-starter-alibaba-seata</artifactId>
</dependency>
<!-- can be omitted if the nacos config-center dependency is already present -->
<dependency>
	<groupId>com.alibaba.cloud</groupId>
	<artifactId>spring-cloud-starter-alibaba-nacos-config</artifactId>
</dependency>
  • DataSourceProxyConfig

Seata implements transaction branches by proxying the data source, so you must configure an io.seata.rm.datasource.DataSourceProxy bean and mark it @Primary so it becomes the default data source; otherwise the transaction will not roll back and distributed transactions will not work. Add the following DruidDataSource configuration class to zmall-order:

package com.zking.zmall.config;

import com.alibaba.druid.pool.DruidDataSource;
import io.seata.rm.datasource.DataSourceProxy;
import io.seata.rm.datasource.xa.DataSourceProxyXA;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

import javax.sql.DataSource;

@Configuration
public class DataSourceProxyConfig {
    @Bean
    @ConfigurationProperties(prefix = "spring.datasource")
    public DruidDataSource druidDataSource() {
        return new DruidDataSource();
    }

    @Primary
    @Bean("dataSourceProxy")
    public DataSource dataSource(DruidDataSource druidDataSource) {
        // AT mode
        return new DataSourceProxy(druidDataSource);
        // XA mode
        //return new DataSourceProxyXA(druidDataSource);
    }
}

Exclude the DataSource data source auto-configuration class on the startup class

@EnableFeignClients
@EnableDiscoveryClient
@MapperScan({"com.zking.zmall.mapper"})
@SpringBootApplication(exclude = DataSourceAutoConfiguration.class)
public class ZmallOrderApplication {

    public static void main(String[] args) {
        SpringApplication.run(ZmallOrderApplication.class, args);
    }
}

Perform the following configurations in microservices that require distributed transactions:

  • application.yml
spring:
  datasource:
    # connection pool type: DBCP, C3P0, Hikari, or Druid; the default is Hikari
    type: com.alibaba.druid.pool.DruidDataSource
    driver-class-name: com.mysql.jdbc.Driver
    url: jdbc:mysql://localhost:3306/zmall?characterEncoding=utf8&useSSL=false&serverTimezone=Asia/Shanghai&rewriteBatchedStatements=true
    username: root
    password: 1234

  • registry.conf

Add Seata's registry.conf configuration file to the microservice module's resources directory; it comes from the seata-server/conf directory.

registry {
    type = "nacos"
    nacos {
        serverAddr = "localhost"
        namespace = "public"
        cluster = "default"
    }
}
config {
    type = "nacos"
    nacos {
        serverAddr = "localhost"
        namespace = "public"
        cluster = "default"
    }
}

bootstrap.yaml

spring:
  application:
    name: zmall-product # change to match your own module name
  cloud:
    nacos:
      config:
        server-addr: localhost:8848 # nacos的服务端地址
        namespace: public
        group: SEATA_GROUP
    alibaba:
      seata:
        tx-service-group: my_test_tx_group

Pay attention to the spring.cloud.alibaba.seata.tx-service-group=my_test_tx_group setting: the name my_test_tx_group must match the transaction group name in the config.txt file from the Seata source package. This is important!

Please open the config.txt under the seata source package directory seata-1.4.0\script\config-center\, as follows:

4.4.3 Start the global transaction in the order microservice

@GlobalTransactional  // Seata global transaction control
@Transactional
@Override
public Order createOrder(Integer pid, Integer num) {
    // deduct the product's stock by product ID
    productService.updateStock(pid,num);
    // simulate a runtime error
    int i=1 / 0;
    // create the order
    Order order=new Order();
    // this is only a simulation
    this.save(order);
    return order;
}

4.4.4 Testing

Run the order test again: open a browser and enter the test address http://order.zmall.com/createOrder/733/1

Then check the product service's console window: the SQL statement for the stock deduction was executed, but thanks to Seata's distributed transaction intervention, the stock in the database was not actually reduced, which proves the Seata distributed transaction configuration works.

4.5 Seata operation process analysis

Key points:
1. Each RM connects to the database through DataSourceProxy in order to use ConnectionProxy. The purpose of proxying the data source and connection is to commit the undo_log and the business data in one local transaction during phase one, guaranteeing that wherever there is a business operation there is a matching undo_log record.
2. In phase one, the undo_log stores the data values from before and after the modification, in preparation for a possible rollback. The branch transaction is therefore already committed when phase one completes, and its lock resources are released.
3. The TM starts the global transaction and puts the global transaction id, the XID, into the transaction context; the XID is propagated to downstream branch transactions through the Feign calls, and each branch transaction associates its own Branch ID with the XID.
4. When the global transaction commits in phase two, the TC notifies each branch participant to commit its branch. Each participant only needs to delete its undo_log records, which can be done asynchronously, so phase two completes quickly.
5. When the global transaction rolls back in phase two, the TC notifies each branch participant to roll back. The participant finds the corresponding rollback log by XID and Branch ID, generates reverse SQL from it, and executes that SQL to restore the branch's data to its previous state. If the rollback fails, the rollback operation is retried.
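Points 2, 4, and 5 can be sketched as a toy model of one RM's behavior. This is a conceptual illustration only, not Seata's internals; maps stand in for the business table and the undo_log table:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the AT-mode branch lifecycle: phase one saves a
// before-image into "undo_log" and commits locally; a global commit
// merely deletes the undo_log, while a global rollback replays the
// before-image -- the effect of the generated "reverse SQL".
public class UndoLogDemo {
    final Map<Integer, Integer> stockTable = new HashMap<>(); // pid -> stock
    final Map<String, int[]> undoLog = new HashMap<>();       // xid -> {pid, before-image}

    void branchExecute(String xid, int pid, int num) {
        undoLog.put(xid, new int[]{pid, stockTable.get(pid)}); // record before-image
        stockTable.put(pid, stockTable.get(pid) - num);        // phase 1: local commit, lock released
    }

    void globalCommit(String xid) {
        undoLog.remove(xid);                                   // phase 2 commit: just delete undo_log
    }

    void globalRollback(String xid) {
        int[] image = undoLog.remove(xid);
        stockTable.put(image[0], image[1]);                    // phase 2 rollback: restore before-image
    }

    public static void main(String[] args) {
        UndoLogDemo rm = new UndoLogDemo();
        rm.stockTable.put(733, 10);
        rm.branchExecute("xid-1", 733, 1);   // stock is now 9, already committed locally
        rm.globalRollback("xid-1");          // exception elsewhere in the chain: restore to 10
        System.out.println(rm.stockTable.get(733)); // 10
    }
}
```

This mirrors the test in section 4.4.4: the stock-deduction SQL really runs and commits, yet after the global rollback the database shows the original quantity.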


Origin blog.csdn.net/qq_63531917/article/details/128975592