Tencent and Renmin University of China open-source their latest research result: 3TS, the Tencent transaction processing technology verification system

Author: Li Haixiang, database technology expert at Tencent TEG

One party is a world-leading technology company; the other is a cradle of fundamental database research in China. Recently, the Renmin University of China-Tencent Collaborative Innovation Lab officially held its unveiling ceremony. The two parties will focus on long-term industry-university-research cooperation in fundamental database research, together with a joint database talent training program, advancing database security and controllability while building a reserve of forward-looking research for the coming large-scale, multi-scenario digital era. The laboratory's output, including the "full-temporal database system" and other results, has been accepted by VLDB and other top international conferences, and a number of national technology patents have been filed.

Alongside the laboratory's unveiling, the research teams from Tencent and Renmin University of China also open-sourced a new joint research result: 3TS, the Tencent transaction processing technology verification system.

The Tencent Transaction Processing Testbed System (3TS) is a verification system for database transaction processing jointly developed by the team behind TDSQL, Tencent's self-developed financial-grade distributed database, and the Key Laboratory of Data Engineering and Knowledge Engineering (Ministry of Education) at Renmin University of China. The system aims to design and build a unified framework for transaction processing (including distributed transactions). Through the access interfaces the framework provides, users can quickly build new concurrency control algorithms; with the testbed the verification system provides, users can run fair performance comparisons of today's mainstream concurrency control algorithms in a single test environment and select the algorithm best suited to their application scenario. At present, the verification system integrates 13 mainstream concurrency control algorithms and provides common benchmarks such as TPC-C, Sysbench, and YCSB. 3TS further provides a consistency-level test benchmark, offering consistency-level discrimination and performance comparison to address the system-selection difficulties caused by the current explosion of distributed database systems.

The 3TS system aims to explore in depth the theory and implementation techniques of database transaction processing. Its core values are openness, depth, and evolution. Openness: embrace open source, and share knowledge and technology. Depth: practice systematic research and pursue the essential problems of transaction processing with persistence. Evolution: the road is long, but keep exploring and keep moving forward.

1. 3TS overall architecture

 

As a framework for transaction processing technology, 3TS is committed to exploring the following essential questions:

 

  • How many data anomalies exist? How can a systematic research method for data anomalies be established?

  • Why are there so many kinds of concurrency control algorithms? Is there an essential relationship among them?

  • When a single-node transactional database becomes distributed, which aspects are affected (availability? reliability? security? consistency? scalability? functionality? performance? architecture? ...)?

  • Which new technologies will affect distributed transactional database systems, and how?

  • How should an evaluation and benchmarking system for distributed transactional databases be established?

At the code level, each of the research questions above has a corresponding subsystem. For example, the initial open-source release includes the 3TS-DA subsystem and the 3TS-Deneva subsystem.

 

2. 3TS-DA, the data anomaly subsystem

 

The 3TS-DA data anomaly subsystem lives under the src/3ts/da path; its project structure is shown in the figure below:

  • History Creator: responsible for generating histories and feeding them to the algorithms for verification.

  • CCA Group: CCA stands for concurrency control algorithm; each CCA performs anomaly detection on an incoming history and returns the detection result.

  • Outputter: responsible for writing each CCA's detection result on the current history to a file.

 

Current features of the 3TS-DA subsystem:

 

  • Test data generation: supports three history-generation methods: exhaustive generation, random generation, and reading from a text file.

  • Algorithm addition: provides a unified algorithm interface so new concurrency control algorithms can be added easily. The framework already ships with a variety of algorithms, including serializable, conflict serializable, SSI, WSI, BOCC, and FOCC.

  • Test metrics: the framework provides a variety of metrics, including algorithm rollback rate, true rollback rate, false rollback rate, and execution time.

  • Anomaly expansion: the framework implements a data anomaly expansion algorithm that can generate an unlimited number of data anomaly histories for algorithm testing.
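To make the role of a CCA concrete, here is a minimal Python sketch (not the 3TS C++ interface; all names are illustrative) of one of the algorithms listed above, conflict serializability: it builds a precedence graph over a history and flags the history as anomalous when the graph contains a cycle.

```python
# Illustrative sketch of a conflict-serializability check: two operations
# conflict if they come from different transactions, touch the same item,
# and at least one of them is a write. The history is conflict-serializable
# iff the resulting precedence graph is acyclic.

def conflict_serializable(history):
    """history: list of (txn_id, op, item) triples, op in {'R', 'W'}."""
    edges = set()
    for i, (t1, op1, x1) in enumerate(history):
        for t2, op2, x2 in history[i + 1:]:
            if t1 != t2 and x1 == x2 and 'W' in (op1, op2):
                edges.add((t1, t2))   # t1's op precedes t2's conflicting op

    def has_cycle():
        nodes = {t for e in edges for t in e}
        state = {}  # 0 = visiting, 1 = done

        def dfs(u):
            state[u] = 0
            for a, b in edges:
                if a == u:
                    if state.get(b) == 0 or (b not in state and dfs(b)):
                        return True
            state[u] = 1
            return False

        return any(dfs(n) for n in nodes if n not in state)

    return not has_cycle()
```

For example, the serial history R1(x) W1(x) R2(x) W2(x) passes, while the write-skew-like history W1(x) W2(x) W2(y) W1(y) produces a T1-T2 cycle and fails.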

3. 3TS-Deneva, the concurrency control framework

 

Deneva [1] is an evaluation framework for distributed in-memory database concurrency control algorithms, open-sourced by MIT; the original code is at https://github.com/mitdbg/deneva. It enables the study of the performance characteristics of concurrency control protocols in a controlled environment, and provides mainstream algorithms such as MaaT, OCC, T/O, locking (No-Wait, Wait-Die), and Calvin. 3TS-Deneva is Tencent's improvement of the original Deneva system on multiple levels. At the algorithm level, more concurrency control algorithms have been added, including serializable, conflict serializable, SSI, WSI, BOCC, FOCC, Sundial, and Silo.

3.1 Infrastructure


Deneva uses a custom engine and controlled settings so that different concurrency control protocols can be deployed and evaluated on the same platform as fairly as possible. The system architecture, shown in Figure 1, consists of two main modules:

  • Client instances act as transaction initiators. Each client thread initiates transaction requests, places them in a message queue, and sends them to the servers in order for execution. Client and server instances form a fully connected topology and are generally deployed on different machines;

  • Server instances execute the operations within transactions. Data is spread across the server instances and indexed by consistent hashing, forming a global partition mapping between server IPs and the data they store. Because the partition mapping is not modified during a test, every node can accurately resolve the mapping. Communication between client and server instances, and between server instances, uses the TCP/IP protocol. Each server instance can be subdivided into four modules:

    • The input message queue, which buffers messages sent by clients and other servers;

    • The execution engine, which allocates multiple worker threads to parse and execute the messages in the queue, pinning one thread to one core;

    • The concurrency control module: while executing transaction operations, a worker thread maintains the information required by the chosen concurrency control protocol and follows the steps the protocol specifies, ensuring the protocol takes effect;

    • The data storage module, which manages this instance's data and stores it in memory.

 

Figure 1 Deneva system architecture diagram
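The consistent-hashing partition mapping mentioned above can be sketched as follows. This is an illustrative Python sketch under assumed names (PartitionMap, server_for), not code from Deneva: keys hash onto a ring of virtual nodes, and as long as the server set is fixed, every node computes the same key-to-server mapping.

```python
# Illustrative consistent-hashing lookup: each server owns several
# virtual nodes on a hash ring; a key is served by the first virtual
# node clockwise from the key's hash.
import bisect
import hashlib

class PartitionMap:
    def __init__(self, servers, vnodes=64):
        self.ring = sorted(
            (int(hashlib.md5(f"{s}#{v}".encode()).hexdigest(), 16), s)
            for s in servers for v in range(vnodes)
        )
        self.keys = [h for h, _ in self.ring]

    def server_for(self, key):
        h = int(hashlib.md5(str(key).encode()).hexdigest(), 16)
        i = bisect.bisect(self.keys, h) % len(self.ring)  # wrap around the ring
        return self.ring[i][1]
```

Because the mapping is a pure function of the (fixed) server set, clients and servers can resolve partition ownership independently without coordinating during the test, which is the property the text relies on.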

 

3TS improves on Deneva. The improved code lives under the contrib/deneva path; its internal project structure is shown in Figure 2:

Figure 2 Deneva implementation framework diagram

  • Deneva is divided into two types of nodes: Server and Client.

  • The Client generates workload queries and sends them to the Server; the Server coordinates and executes the queries sent by the Client.

  • Modules shared by Client and Server:

    • MsgQueue: the message queue, storing messages (msg) waiting to be sent.

    • MsgThread: the message-sending thread, which repeatedly takes a msg out of MsgQueue and sends it.

    • InputThread: the message-receiving thread, which receives messages from the Server/Client.

  • Modules specific to the Client:

    • ClientQueryQueue: the client's query queue, storing the query list generated before the test starts.

    • ClientThread: the client thread, which repeatedly takes a query from ClientQueryQueue and sends it to the Server via MsgThread and MsgQueue.

  • Modules specific to the Server:

    • WorkQueue: the server's pending-msg queue; after InputThread receives a msg, it puts the msg into WorkQueue.

    • WorkThread: the server's execution thread, which fetches msgs from WorkQueue and processes them. When execution completes, it generates a reply msg, which is again sent out through MsgThread and MsgQueue.
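The shared MsgQueue/MsgThread pattern above is an ordinary producer-consumer loop. The following is a minimal Python illustration (not the actual C++ implementation; the "send" is stood in for by appending to a list):

```python
# Illustrative MsgQueue/MsgThread pattern: producers enqueue messages,
# and a dedicated sender thread loops, dequeues each msg, and sends it.
import queue
import threading

msg_queue = queue.Queue()
delivered = []

def msg_thread():
    while True:
        msg = msg_queue.get()   # block until a msg is available
        if msg is None:         # sentinel: shut the sender down
            break
        delivered.append(msg)   # stand-in for the real TCP send

sender = threading.Thread(target=msg_thread)
sender.start()
for i in range(3):
    msg_queue.put(f"REQUEST-{i}")
msg_queue.put(None)
sender.join()
```

A single consumer draining a FIFO queue preserves send order, which is why the client's transaction requests arrive at the server in the order they were enqueued.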

 

3.2 Transaction execution process

 

In Deneva, as shown in Figure 3, a transaction executes as follows:

1) The client initiates a transaction request (composed of multiple operations) and puts the transaction into ClientQueryQueue. ClientThread removes the transaction request from the queue and stores it in the message queue MsgQueue. The message-sending thread then takes the transaction's operation set out of MsgQueue, encapsulates it as a Request, and sends it to a server (chosen by the data accessed by the first operation), which acts as the coordinating node of this transaction;

2) When the Request arrives, the server first parses it and puts all of the transaction's operations into the work queue (WorkQueue) as a single element. The work queue holds both new transactions from clients and remote operations of transactions that have already started; the latter have higher priority in the queue. Threads in the worker pool poll the work queue and process the operations of transactions. A worker thread processing a transaction first initializes it, then performs its read and write operations in order, and finally commits or rolls back;

  • While executing a transaction, two situations cause it to wait: waiting for an exclusive lock on a resource to be released, or accessing data on a remote server. When a remote access must wait, the remote server returns a WAIT instruction to the coordinating node. The coordinating node records the transaction's waiting state and schedules the current worker thread to execute other transactions' operations, thereby avoiding worker-thread blocking. When a waiting transaction can resume, the priority scheduling of the work queue ensures it is picked up by the first available worker thread;

  • The additional operations required by the concurrency control protocol are embedded in the transaction execution flow, including read/write operations, validation operations, and commit/rollback operations.

3) When the coordinating node completes a transaction, it puts the execution result into the message queue, and the message-sending thread then notifies the client of the result.

Figure 3 Deneva transaction execution flow chart
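The work-queue priority rule in step 2), remote operations of already-started transactions served before new client transactions, can be sketched with a priority queue. The priority values and message shapes here are illustrative assumptions, not Deneva's actual structures:

```python
# Illustrative priority work queue: lower priority value is served first,
# and a monotonically increasing counter keeps FIFO order within a class.
import heapq
import itertools

REMOTE_OP, NEW_TXN = 0, 1        # remote ops outrank new transactions
counter = itertools.count()
work_queue = []

def enqueue(priority, msg):
    heapq.heappush(work_queue, (priority, next(counter), msg))

def dequeue():
    return heapq.heappop(work_queue)[2]

enqueue(NEW_TXN, "txn-A")
enqueue(REMOTE_OP, "remote-read of txn-B")
enqueue(NEW_TXN, "txn-C")
```

Serving remote operations first lets in-flight distributed transactions finish (and release their locks) before fresh work is admitted, which reduces lock holding times.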

4. Overview of distributed transactions

 

Reference [15] defines a distributed transaction as follows:

A distributed transaction is a database transaction in which two or more network hosts are involved. Usually, hosts provide transactional resources, while the transaction manager is responsible for creating and managing a global transaction that encompasses all operations against such resources. Distributed transactions, as any other transactions, must have all four ACID (atomicity, consistency, isolation, durability) properties, where atomicity guarantees all-or-nothing outcomes for the unit of work (operations bundle).

A distributed transaction takes a distributed system as its physical basis and realizes the semantics of transaction processing on top of it; that is, the ACID properties must still be satisfied on the distributed system. Therefore, distributed transaction processing in distributed databases must still follow the transaction theory of single-node database systems, ensure that every transaction satisfies ACID, and use distributed concurrency control techniques to handle data anomalies in the distributed system.

The basic mechanisms of distributed transaction processing build on the transaction processing techniques of single-node database systems, but with some differences: how to handle distributed data anomalies, how to achieve serializability under a distributed architecture, how to commit atomically across nodes, and how to respond to transactions under network partitions or high latency.

The 3TS framework targets a distributed environment, and all of these topics will be implemented and verified in 3TS.
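One of the cross-node problems listed above, atomic commit, is classically handled with two-phase commit (2PC). The following is a hedged Python sketch of the idea under assumed names (Participant, two_phase_commit); it illustrates the vote-then-decide structure, not any protocol code in 3TS:

```python
# Illustrative two-phase commit: phase 1 collects prepare votes from all
# participants; any "no" vote forces a global abort. Phase 2 broadcasts
# the global decision so every participant ends in the same state.
class Participant:
    def __init__(self, can_commit=True):
        self.can_commit = can_commit
        self.state = 'init'

    def prepare(self):            # phase 1: vote yes/no
        self.state = 'prepared' if self.can_commit else 'aborted'
        return self.can_commit

    def finish(self, decision):   # phase 2: apply the global decision
        self.state = decision

def two_phase_commit(participants):
    decision = 'commit' if all(p.prepare() for p in participants) else 'abort'
    for p in participants:
        p.finish(decision)
    return decision
```

The all-or-nothing outcome follows directly: a single dissenting vote in phase 1 turns the global decision into abort for every participant.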

 

5. Concurrency control algorithms provided by 3TS

 

Thirteen concurrency control algorithms are currently integrated in 3TS, mainly including:

 

(1) Two-phase locking protocols (2PL: No-Wait, Wait-Die)

(2) Timestamp ordering protocol (T/O)

(3) Multi-version concurrency control protocol (MVCC)

(4) Optimistic concurrency control protocols (OCC, FOCC, BOCC)

(5) Optimized optimistic concurrency control protocols (MaaT, Sundial, Silo)

(6) Deterministic concurrency control protocol (Calvin)

(7) Snapshot-isolation-based concurrency control protocols (SSI, WSI)

These protocols are briefly introduced in turn below:

 

5.1 Two-Phase Locking (2PL)

  

Two-phase Locking (2PL) is the most widely used concurrency control protocol. 2PL synchronizes conflicting operations between transactions by acquiring shared or exclusive locks when read and write operations occur. Following Bernstein and Goodman's description [2], 2PL imposes two rules on lock acquisition: 1) two conflicting locks cannot be held on the same data item at the same time; 2) once a transaction releases any lock, it may not acquire any new lock. The second rule divides a transaction's locking into two phases: a growing phase and a shrinking phase. In the growing phase, the transaction acquires locks for all records it needs to access: read operations acquire shared locks and write operations acquire exclusive locks. Shared locks are compatible with each other, while an exclusive lock conflicts with both shared locks and other exclusive locks. Once the transaction releases any lock, it enters the second phase of 2PL, the shrinking phase, in which it is not allowed to acquire new locks.

3TS implements the strict 2PL protocol, which does not release locks until the transaction commits or aborts. Depending on the deadlock-avoidance method, the 2PL implemented in 3TS comes in two flavors, 2PL (No-Wait) and 2PL (Wait-Die), following the descriptions of these two protocols in [2,3]:

 

5.1.1 2PL (No-Wait)

 

The protocol stipulates that when a transaction encounters a lock conflict while trying to acquire a lock, the requesting transaction is rolled back immediately. The locks held by the rolled-back transaction are released, allowing other transactions to acquire them. The No-Wait mechanism avoids deadlock because waits between transactions can never form a cycle. However, since not every lock conflict would actually cause a deadlock, the rollback rate can be high.
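The No-Wait rule can be sketched as a try-lock that never blocks. This is an illustrative Python sketch with assumed data structures (a flat lock table), not the 3TS lock manager; a False return means the caller must abort the requesting transaction:

```python
# Illustrative No-Wait lock table: 'S' = shared, 'X' = exclusive.
# On any incompatible request, return False immediately (caller aborts)
# instead of queueing a wait, so no wait-for cycle can ever form.
lock_table = {}   # item -> (mode, set of holder txn ids)

def try_lock(txn, item, mode):
    held = lock_table.get(item)
    if held is None:
        lock_table[item] = (mode, {txn})
        return True
    held_mode, holders = held
    if mode == 'S' and held_mode == 'S':   # shared locks are compatible
        holders.add(txn)
        return True
    if holders == {txn}:                   # sole holder may upgrade
        lock_table[item] = ('X' if mode == 'X' else held_mode, holders)
        return True
    return False                           # conflict: abort (No-Wait)
```

Because the requester never enters a wait state, deadlock is impossible by construction, at the cost of aborting on conflicts that would not have deadlocked.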

 

5.1.2 2PL (Wait-Die)

 

Rosenkrantz et al. [3] proposed 2PL (Wait-Die), which uses the transaction start timestamp as a priority to keep lock-wait relationships consistent with the timestamp order. For any two conflicting transactions Ti and Tj, where Ti requests a lock held by Tj, the protocol uses timestamp priority to decide whether Ti may wait for Tj: if Ti is older than Tj (Ti's timestamp is smaller, so its priority is higher), Ti waits; otherwise Ti is rolled back ("dies"). As a result, the lock-wait graph can never form a cycle, so deadlock is avoided. This algorithm is a fusion of T/O and locking techniques.
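The Wait-Die decision reduces to a single timestamp comparison. A minimal illustrative sketch (the function name is an assumption, not from 3TS):

```python
# Illustrative Wait-Die rule: on a lock conflict, an older requester
# (smaller start timestamp, higher priority) may wait; a younger
# requester is rolled back ("dies") and later retried with its
# original timestamp so it eventually becomes the oldest.
def on_conflict(requester_ts, holder_ts):
    """Return 'wait' or 'die' for the requesting transaction."""
    if requester_ts < holder_ts:   # requester is older: allowed to wait
        return 'wait'
    return 'die'                   # requester is younger: roll it back
```

Since every edge in the wait-for graph points from an older to a younger transaction, the graph is acyclic and deadlock cannot occur.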

However, a distributed transaction processing mechanism built on 2PL must either avoid deadlock (at the cost of a high rollback rate) or solve the deadlock problem (both resource deadlocks and communication deadlocks). Resolving deadlocks in a distributed system is very expensive; it is already expensive on a single machine, where deadlock handling in modern multi-process or multi-threaded database architectures can nearly bring the system to a halt. In MySQL 5.6 and 5.7, for example, concurrent updates to the same data item can trigger deadlock detection that almost stops the system from serving requests.

Deadlock detection is not the only problem; the drawbacks of the lock mechanism itself have long been criticized. Reference [5] lists the following disadvantages of locking (a clear understanding of which prompted its authors to design OCC, optimistic concurrency control):

1. Locking is expensive: to guarantee serializability, a locking protocol requires even read-only transactions, which do not change the database's integrity constraints, to acquire read locks so that concurrent writers are excluded; and for workloads that may deadlock, the locking protocol must also bear the overhead of deadlock prevention or deadlock detection.

2. Locking is complex: to avoid deadlocks, various intricate locking protocols must be devised (when to lock, when a lock may be released, how to guarantee strictness, and so on).

3. Locking reduces the system's concurrent throughput:

a) Holding locks while a transaction waits for an I/O operation greatly reduces the system's overall concurrent throughput.

b) A transaction being rolled back must keep its locks until the rollback finishes, which likewise reduces the system's overall concurrent throughput.

In addition, implementing mutual exclusion with OS-level locks incurs costly kernel-mode operations, making the lock mechanism inefficient. This makes 2PL implementations that build transaction semantics directly on the operating system's lock mechanism even less usable (though there is also ongoing work that keeps improving lock-based concurrency control algorithms).

 

5.2 Timestamp ordering protocol (T/O)

   

The timestamp ordering protocol (Timestamp Ordering, T/O) assigns a timestamp to each transaction when it starts and orders transactions by timestamp [2]. When executing an operation would violate the order already established between transactions, the transaction containing that operation is rolled back or made to wait.

The T/O implementation in 3TS follows the description in Section 4.1 of [2]; refer to that paper for more details. The figure below shows a basic T/O implementation.

Figure 4 T/O algorithm
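The basic T/O rule can be sketched as follows. This is an illustrative Python sketch of the rule described in [2], not the 3TS code: each item tracks the largest read and write timestamps seen so far, and an operation arriving "too late" relative to them forces a rollback.

```python
# Illustrative basic T/O: rts/wts hold, per item, the largest read and
# write timestamps applied so far. A read behind a newer write, or a
# write behind a newer read or write, violates the timestamp order.
rts = {}  # item -> largest read timestamp
wts = {}  # item -> largest write timestamp

def to_read(ts, item):
    if ts < wts.get(item, 0):          # a younger txn already wrote: too late
        return 'abort'
    rts[item] = max(rts.get(item, 0), ts)
    return 'ok'

def to_write(ts, item):
    if ts < rts.get(item, 0) or ts < wts.get(item, 0):
        return 'abort'                 # a younger txn already read or wrote
    wts[item] = ts
    return 'ok'
```

For example, after a write at timestamp 5, a read at timestamp 3 must abort, while a read at timestamp 7 succeeds and then blocks any write with a timestamp below 7.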

 

5.3 Multi-version Concurrency Control (MVCC)

 

Multi-version Concurrency Control (MVCC) is a concurrency control technique widely used in today's database management systems. It was first proposed by David Patrick Reed in 1978 [4]. The main idea is to expand a logical data item into multiple physical versions and turn operations on the data item into operations on versions, which improves transaction processing concurrency and lets reads and writes proceed without blocking each other.

 

MVCC, short for multi-version concurrency control, has a long history. The idea emerged in the 1970s; Reed's 1978 thesis [4], "Naming and Synchronization in a Decentralized Computer System," described it further, and the 1981 survey [2] and the textbook [16] later described the technique in detail, in a timestamp-based form:

Multiversion T/O:

For rw synchronization the basic T/O scheduler can be improved using multiversion data items [REED78]. For each data item x there is a set of R-ts's and a set of (W-ts, value) pairs, called versions. The R-ts's of x record the timestamps of all executed dm-read(x) operations, and the versions record the timestamps and values of all executed dm-write(x) operations. (In practice one cannot store R-ts's and versions forever; techniques for deleting old versions and timestamps are described in Sections 4.5 and 5.2.2.)

Since then, MVCC has been used extensively and many variants have been derived.

In 2008, reference [13] proposed the "Serializable Snapshot Isolation" (SSI) technique, which achieves the serializable isolation level on top of MVCC. PostgreSQL 9.1 later used this technique to implement its serializable isolation level.

In 2012, reference [14] proposed the "Write-Snapshot Isolation" technique, which achieves the serializable isolation level on top of MVCC by validating read-write conflicts. Compared with detecting write-write conflicts, this improves concurrency (some write-write conflicting schedules are in fact serializable). The authors implemented the technique on top of HBase.

Also in 2012, reference [17] described the implementation of SSI in PostgreSQL. Besides the theoretical basis of serializable snapshots and PostgreSQL's SSI implementation, the paper proposes "safe snapshots" and "deferrable transactions" to support read-only transactions while avoiding rollbacks caused by read-write conflicts, adopts a "safe retry" strategy for transactions chosen to be rolled back, and discusses how two-phase commit influences the choice of which read-write-conflicting transaction to roll back.

Reference [19] systematically discusses four aspects of MVCC technology: the concurrency control protocol, multi-version storage, garbage collection of old versions, and index management. It analyzes the principles of several MVCC variants (MV2PL, MVOCC, MVTO, etc.) and evaluates each variant's behavior on OLTP workloads. Reference [18] discusses garbage collection of old MVCC versions in detail.

 

In 3TS, MVCC is implemented following the description in Section 4.3 of [2], combined with the T/O algorithm. Transactions are therefore still ordered by their start timestamps. Unlike the traditional T/O algorithm, MVCC exploits multiple versions to reduce the waiting overhead of T/O. The operation execution rules of MVCC are as follows (ts denotes the current transaction's timestamp):

1. Read operation

a) If ts is greater than the timestamps of all transactions in prereq (the pending pre-writes), and writehis (the version chain of the current data item) contains a version whose wts lies between the pre-write timestamp and ts, that version can be returned and ts is stored in readhis. If writehis contains no such version, ts is stored in readreq and the read waits. The reasoning is:

  1. If a committed write lies between the pending pre-write and the read, the data the read returns has already been written by that committed transaction, the timestamp order is satisfied, and the version can be read;

  2. If the read's timestamp is larger than that of a still-uncommitted write transaction, the read should see that newer data, so it must wait;

b) Otherwise, the read returns the latest version visible at its timestamp, and ts is stored in readreq;

2. Write operation

a) If ts is greater than the timestamps of all transactions in readhis, the data can be pre-written normally; if some read timestamp in readhis is larger than ts, the pre-write may still proceed as long as writehis contains a committed timestamp between ts and that read timestamp (the reader read that newer committed version, so it is unaffected); otherwise, the current transaction is rolled back;

b) The current write operation is temporarily stored in prereq_mvcc;

3. Commit operation

a) Insert the current transaction's timestamp and the newly written version into writehis;

b) Remove the current transaction's write operation from prereq;

c) Resume the read operations in readreq that now satisfy the timestamp order;
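The visibility rule at the heart of the read path above can be sketched as follows. The names mirror the text (writehis, readhis), but the code itself is an illustrative assumption, not the 3TS implementation; pre-writes and waiting are omitted to keep the sketch minimal:

```python
# Illustrative multi-version read: a reader with timestamp ts sees the
# committed version with the largest write timestamp not exceeding ts,
# and records ts in the item's read history (readhis).
writehis = {}   # item -> list of (wts, value) committed versions
readhis = {}    # item -> list of reader timestamps

def mvcc_commit_write(ts, item, value):
    writehis.setdefault(item, []).append((ts, value))
    writehis[item].sort()                 # keep the version chain ordered

def mvcc_read(ts, item):
    versions = [(w, v) for (w, v) in writehis.get(item, []) if w <= ts]
    if not versions:
        return None                       # no version visible to this reader
    readhis.setdefault(item, []).append(ts)
    return max(versions)[1]               # latest visible version
```

This is why reads never block committed writers: a writer installs a new version instead of overwriting, so an older reader keeps seeing the version that matches its timestamp.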

 

More concurrency control algorithms to come. Stay tuned!

 

Acknowledgements

Special thanks to the Tencent TDSQL team and the Key Laboratory of Data Engineering and Knowledge Engineering (Ministry of Education) at Renmin University of China for their support of this work, and to Zhao Zhanhao, Liu Chang, Zhao Hongyao, and other students for their contributions to this article.

 

References

[1] Rachael Harding, Dana Van Aken, Andrew Pavlo, Michael Stonebraker: An Evaluation of Distributed Concurrency Control. Proc. VLDB Endow. 10(5): 553-564 (2017)

[2] Philip A. Bernstein, Nathan Goodman: Concurrency Control in Distributed Database Systems. ACM Comput. Surv. 13(2): 185-221 (1981)

[3] Daniel J. Rosenkrantz, Richard Edwin Stearns, Philip M. Lewis II: System Level Concurrency Control for Distributed Database Systems. ACM Trans. Database Syst. 3(2): 178-198 (1978)

[4] D. P. Reed: Naming and Synchronization in a Decentralized Computer System. PhD thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 1978

[5] H. T. Kung, John T. Robinson: On Optimistic Methods for Concurrency Control. ACM Trans. Database Syst. 6(2): 213-226 (1981)

[6] Theo Härder: Observations on Optimistic Concurrency Control Schemes. Inf. Syst. 9(2): 111-120 (1984)

[7] Hatem A. Mahmoud, Vaibhav Arora, Faisal Nawab, Divyakant Agrawal, Amr El Abbadi: MaaT: Effective and Scalable Coordination of Distributed Transactions in the Cloud. Proc. VLDB Endow. 7(5): 329-340 (2014)

[8] Xiangyao Yu, Yu Xia, Andrew Pavlo, Daniel Sánchez, Larry Rudolph, Srinivas Devadas: Sundial: Harmonizing Concurrency Control and Caching in a Distributed OLTP Database Management System. Proc. VLDB Endow. 11(10): 1289-1302 (2018)

[9] Stephen Tu, Wenting Zheng, Eddie Kohler, Barbara Liskov, Samuel Madden: Speedy Transactions in Multicore In-Memory Databases. SOSP 2013: 18-32

[10] Alexander Thomson, Thaddeus Diamond, Shu-Chun Weng, Kun Ren, Philip Shao, Daniel J. Abadi: Calvin: Fast Distributed Transactions for Partitioned Database Systems. SIGMOD Conference 2012: 1-12

[11] Hal Berenson, Philip A. Bernstein, Jim Gray, Jim Melton, Elizabeth J. O'Neil, Patrick E. O'Neil: A Critique of ANSI SQL Isolation Levels. SIGMOD Conference 1995: 1-10

[12] Alan D. Fekete, Dimitrios Liarokapis, Elizabeth J. O'Neil, Patrick E. O'Neil, Dennis E. Shasha: Making Snapshot Isolation Serializable. ACM Trans. Database Syst. 30(2): 492-528 (2005)

[13] Michael J. Cahill, Uwe Röhm, Alan D. Fekete: Serializable Isolation for Snapshot Databases. SIGMOD Conference 2008: 729-738

[14] Maysam Yabandeh, Daniel Gómez Ferro: A Critique of Snapshot Isolation. EuroSys 2012: 155-168

[15] https://en.wikipedia.org/wiki/Distributed_transaction

[16] Philip A. Bernstein, Vassos Hadzilacos, Nathan Goodman: Concurrency Control and Recovery in Database Systems. Addison-Wesley, 1987

[17] Dan R. K. Ports, Kevin Grittner: Serializable Snapshot Isolation in PostgreSQL. Proc. VLDB Endow. 5(12): 1850-1861 (2012)

[18] J. Böttcher, et al.: Scalable Garbage Collection for In-Memory MVCC Systems. Proc. VLDB Endow. (2019)

[19] Yingjun Wu, Joy Arulraj, Jiexi Lin, Ran Xian, Andrew Pavlo: An Empirical Evaluation of In-Memory Multi-Version Concurrency Control. Proc. VLDB Endow. 10(7): 781-792 (2017)

[20] David Lomet, Mohamed F. Mokbel: Locking Key Ranges with Unbundled Transaction Services. VLDB 2009: 265-276

[21] Jinwei Guo, Peng Cai, Jiahao Wang, Weining Qian, Aoying Zhou: Adaptive Optimistic Concurrency Control for Heterogeneous Workloads. Proc. VLDB Endow. 12(5): 584-596 (2019)

[22] Xiangyao Yu, Andrew Pavlo, Daniel Sanchez, Srinivas Devadas: TicToc: Time Traveling Optimistic Concurrency Control. SIGMOD Conference 2016

[23] Rudolf Bayer, Klaus Elhardt, Johannes Heigert, Angelika Reiser: Dynamic Timestamp Allocation for Transactions in Database Systems. DDB 1982: 9-20



Origin blog.csdn.net/Tencent_TEG/article/details/110251374