The latest open source release: 3TS, Tencent's Transaction Processing Technology Verification System (Part 2)

Author: Li Haixiang, database technology expert, Tencent TEG

Recently, the Renmin University of China–Tencent Collaborative Innovation Lab officially held its unveiling ceremony. The two parties have for many years carried out cutting-edge industry-university-research cooperation focused on fundamental database research, together with a joint program for training database talent. While advancing database security and self-reliance, they are also building a reserve of forward-looking research for the coming era of large-scale, multi-scenario digitalization. The laboratory's output, including the "full-temporal database system" and other achievements, has been accepted at VLDB and other top international conferences, and a number of national technology patents have been filed.

Alongside the laboratory's unveiling, the joint research team from Tencent and Renmin University of China also open sourced a new collaborative research result: 3TS, the Tencent Transaction Processing Technology Verification System.

The Tencent Transaction Processing Testbed System (3TS) is a verification system for database transaction processing jointly developed by Tencent's TDSQL team and the Key Laboratory of Data Engineering and Knowledge Engineering at Renmin University of China. The system provides a unified framework for transaction processing (including distributed transactions); through the framework's access interfaces, users can quickly build new concurrency control algorithms. The testbed also lets users run fair, like-for-like performance comparisons of today's mainstream concurrency control algorithms in a single test environment and select the algorithm best suited to their application scenario. At present, the system integrates 13 mainstream concurrency control algorithms and provides common benchmarks such as TPC-C, Sysbench, and YCSB. 3TS further provides a consistency-level test benchmark: given the difficulty of system selection caused by the explosive growth of distributed database systems, it offers consistency-level discrimination together with performance comparison.

The 3TS system aims to explore in depth the theory and implementation techniques of database transaction processing. Its core values are openness, depth, and evolution. Openness: embracing the spirit of open source, sharing knowledge and technology. Depth: practicing systematic research into the essential problems of transaction processing, persevering until they are solved. Evolution: the road is long, and we will keep searching and pushing forward.

 

In the previous installment, we introduced the framework and basic content of 3TS (for details, see "Tencent and Renmin University of China Open Source Latest Research Achievements: 3TS Tencent Transaction Processing Technology Verification System"). This installment continues with an in-depth introduction to several concurrency control algorithms.

 

5. Concurrency control algorithms provided by 3TS


5.4 Optimistic Concurrency Control Protocols (OCC, FOCC, BOCC)


Under an optimistic concurrency control protocol, transaction execution is divided into three phases: the read, validation, and write phases [5], as shown in Figure 5.

Figure 5 Three-stage diagram of OCC algorithm

The advantages of this three-phase design are clear:

  1. High transaction processing performance: the efficiency gain comes mainly from non-blocking reads and writes in the first phase, which greatly improves read-write and write-read concurrency and makes full use of multi-core hardware; it is also friendly to read-only transactions, since they are never blocked.

  2. Deadlock avoidance: OCC can avoid deadlock by sorting the read and write objects in the first phase and locking them in order in the second phase, an advantage over lock-based concurrency control algorithms that must resolve deadlocks through blocking. These two advantages let OCC sustain high transaction throughput in scenarios such as distributed transactions, hot data items, and high communication latency, without obvious performance jitter under high concurrency (although literature [169] shows experimentally that OCC performance is poor under high contention).

  3. Guaranteed data consistency: correctness is ensured in the second (validation) phase. The principle is to build a directed graph from transaction conflict relationships, detect whether it contains a cycle, and roll back a transaction to break any cycle, thereby resolving transaction conflicts; write-write conflicts are usually handled by a blocking mechanism during validation. Engineering implementations differ, however. For example, literature [9] improves the OCC algorithm: during validation it checks whether the transaction's read set has been written by other concurrent transactions and triggers a rollback to avoid inconsistency, so no directed graph needs to be built or checked for cycles.

In 3TS, three optimistic concurrency control protocols with different validation mechanisms are implemented: (1) OCC: an implementation of the parallel validation algorithm in [5]; (2) BOCC: the backward validation algorithm in [6]; (3) FOCC: an implementation of the forward validation algorithm in [6]. Note that 3TS currently has no global timestamp mechanism (a global clock is planned); clock skew may therefore bias the size of the read/write sets that must be compared during validation, affecting different algorithms to different degrees.

1. All three protocols handle the read phase in the same way:

a) For a read operation, first record the operation in the read set, then read the required data;

b) For a write operation, record the operation in the write set.

2. In the validation phase, the main idea of all three protocols is to order transactions by the order in which they enter validation, and to guarantee, by checking read and write sets, that each transaction's results are consistent with that order. The protocols differ in how they check the read and write sets.


5.4.1 OCC

The main flow of the validation operation is as follows (the current transaction's start timestamp is start_ts, and its timestamp on entering validation is finish_ts):

a) Obtain the set of transactions committed during (start_ts, finish_ts), denoted History, and traverse the write sets of the transactions in History; if any of them intersects the current transaction's read set, validation of the current transaction fails;

b) Obtain the set of transactions currently in their validation phase, denoted Active, and check whether the write set of any transaction in this set intersects the current transaction's read set; if so, validation fails.
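The two checks above can be sketched as follows. This is an illustrative sketch, not 3TS's actual API: the `txn`, `committed`, and `active` structures and field names are assumptions made for the example.

```python
def occ_validate(txn, committed, active):
    """Fail validation if any transaction that committed during
    (start_ts, finish_ts), or any transaction concurrently in its
    validation phase, wrote something the current transaction read."""
    # Step a): History = transactions committed while txn was executing.
    history = [t for t in committed
               if txn["start_ts"] < t["commit_ts"] < txn["finish_ts"]]
    for t in history:
        if t["write_set"] & txn["read_set"]:   # write/read overlap -> fail
            return False
    # Step b): Active = transactions concurrently in validation.
    for t in active:
        if t["write_set"] & txn["read_set"]:
            return False
    return True
```

A transaction that read `x` fails if a concurrent committer or validator wrote `x`; otherwise it passes.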


5.4.2 BOCC

The validation phase and the write phase must execute in the same critical section. The process: obtain the set of transactions committed during (start_ts, finish_ts), denoted History, and traverse the write sets of the transactions in History; if any overlaps the current transaction's read set, validation of the current transaction fails. BOCC has obvious shortcomings: read-only transactions must also be validated; a large read set in the transaction being validated significantly slows validation; and for long transactions, the write sets of the many transactions committed during their lifetime must be retained.


5.4.3 FOCC

The validation phase and the write phase must execute in the same critical section. Check whether the write set of the transaction being validated intersects the read set of any currently active (read-phase) transaction; if so, validation of the current transaction fails. Compared with BOCC, FOCC has the advantages that read-only transactions can skip validation entirely, and validating against active transactions is cheaper.
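A minimal sketch of forward validation, with illustrative names (not 3TS's API); note how an empty write set lets read-only transactions pass without any checking:

```python
def focc_validate(write_set, active_read_sets):
    """FOCC: compare the validating transaction's write set against the
    read sets of all transactions still in their read phase."""
    if not write_set:                 # read-only transactions skip validation
        return True
    return all(not (write_set & rs) for rs in active_read_sets)
```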

3. All three protocols handle the write phase in the same way: obtain a commit timestamp, write the data in the write set to the database, and set each data item's commit timestamp to the obtained value.

 

5.5 Optimized optimistic concurrency control protocols (MaaT, Sundial, Silo)

Traditional optimistic concurrency control decides whether a transaction can commit according to its validation-entry order. Some optimized optimistic protocols relax this requirement relative to traditional OCC and thereby reduce unnecessary rollbacks. Many improved OCC variants now exist, such as ROCC [20] and adaptive OCC [21].

In 3TS, we have integrated three newer optimistic concurrency control algorithms: MaaT, Sundial, and Silo. We expect more concurrency control algorithms to be integrated into 3TS.

5.5.1 MaaT

MaaT [7] uses dynamic timestamp-range adjustment to reduce the transaction rollback rate. The main idea is to determine the relative order of transactions from the read-write relationships formed between them, and thereby their positions in the equivalent serial order required for serializability. For example, if transaction Ti reads x and transaction Tj then needs to update x, Ti must precede Tj in the equivalent serial order.

MaaT maintains additional metadata on each data item: (1) the IDs of transactions that have read the item but not yet committed, called the read transaction list readers; (2) the IDs of transactions that need to write the item but have not yet committed, called the write transaction list writers; (3) the largest commit timestamp among transactions that have read the item, denoted rts; (4) the largest commit timestamp among transactions that have written the item, denoted wts. Each transaction has a timestamp range [lower, upper), initialized to [0, +∞). The flow of each operation is as follows:

1. Read operation

a) Store the data item's write transaction list in the transaction's uncommitted_writes;

b) Update the current transaction's greatest_write_timestamp = Max{greatest_write_timestamp, wts};

c) Add the current transaction's ID to the read transaction list of the data item being read;

d) Read the corresponding data item and save the value into the read set.

2. Write operation

a) Store the data item's write transaction list in the transaction's uncommitted_writes_y;

b) Store the data item's read transaction list in the transaction's uncommitted_reads;

c) Update the current transaction's greatest_write_timestamp = Max{greatest_write_timestamp, wts} and greatest_read_timestamp = Max{greatest_read_timestamp, rts};

d) Write the current transaction ID into the write transaction list of the data item to be written;

e) Save the new value of the data item to be written into the write set;

3. Validation phase (the transaction coordinator determines the final lower and upper from the intersection of the lower and upper values returned by all participants; the following operations are performed on the participants):

a) Update lower = Max{greatest_write_timestamp + 1, lower};

b) Ensure that the lower of each transaction in uncommitted_writes (the list of uncommitted write transactions) is greater than the current transaction's upper;

  1. If a transaction in uncommitted_writes has already been validated, adjust the current transaction's upper accordingly;

  2. Otherwise, put the transactions in uncommitted_writes into the current transaction's after queue (transactions in this queue must commit after the current transaction);

c) Update lower = Max{greatest_read_timestamp + 1, lower};

d) Ensure that the upper of each transaction in uncommitted_reads (the list of uncommitted read transactions) is less than the current transaction's lower;

  1. If a transaction in uncommitted_reads has already been validated, adjust the current transaction's lower accordingly;

  2. Otherwise, put the transactions in the list into the current transaction's before queue (transactions in this queue must commit before the current transaction);

e) Adjust the ordering between the current transaction and the transactions in uncommitted_writes_y (the list of uncommitted transactions with write-write conflicts);

  1. If a transaction in uncommitted_writes_y has already been validated, adjust the current transaction's lower to be greater than that transaction's upper;

  2. Otherwise, put the transactions in the list into the current transaction's after queue;

f) Check whether lower < upper holds; if not, roll back the current transaction;

g) Coordinate the current transaction's lower with the upper of each transaction in the before queue, ensuring the current transaction's lower is greater than the upper of every before-queue transaction;

h) Coordinate the current transaction's upper with the lower of each transaction in the after queue, ensuring the current transaction's upper is smaller than the lower of every after-queue transaction;

4. Write phase (the commit timestamp commit_ts is first determined on the coordinator as the lower bound of the final timestamp interval; then the following operations are performed on the participants):

a) For each element in the read set, remove the current transaction from the corresponding data item's read transaction list, and:

  1. Ensure that the lower of each transaction in the item's write transaction list is greater than the current transaction's commit_ts;

  2. Update rts = Max{commit_ts, rts};

b) For each element in the write set, remove the current transaction from the corresponding data item's write transaction list, and:

  1. Ensure that the upper of each transaction in the item's write transaction list is less than the current transaction's commit_ts.

  2. Ensure that the upper of each transaction in the item's read transaction list is less than the current transaction's commit_ts.

  3. Update wts = Max{commit_ts, wts}.
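The essence of the interval adjustments above is that each ordering constraint shrinks two transactions' [lower, upper) ranges, and a transaction must abort only when its range becomes empty. A simplified, illustrative sketch of this mechanism (not 3TS's implementation; class and function names are assumptions):

```python
INF = float("inf")

class MaaTTxn:
    """A transaction owning a half-open timestamp range [lower, upper)."""
    def __init__(self):
        self.lower, self.upper = 0, INF

def order_before(ti, tj):
    """Constrain ti to commit at a smaller timestamp than tj by shrinking
    both ranges; return False (abort) if either range would become empty."""
    boundary = max(ti.lower, tj.lower)
    new_ti_upper = min(ti.upper, boundary + 1)   # ti must commit <= boundary
    new_tj_lower = max(tj.lower, boundary + 1)   # tj must commit >  boundary
    if new_ti_upper <= ti.lower or new_tj_lower >= tj.upper:
        return False                             # empty range: roll back
    ti.upper, tj.lower = new_ti_upper, new_tj_lower
    return True
```

For a reader Ti and a later writer Tj that both start with [0, +∞), this yields Ti = [0, 1) and Tj = [1, +∞), so Ti serializes before Tj, matching the example in the text.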


5.5.2 Sundial

Sundial [8] dynamically computes the commit timestamp to reduce the rollback rate. It maintains a lease on each data item (the logical time range in which the item may be accessed), so that the order of transactions can be determined quickly when a conflict occurs. In addition, Sundial combines optimistic concurrency control with a pessimistic idea: it uses OCC for read-write conflicts and 2PL locking for write-write conflicts, reducing the coordination and scheduling overhead of distributed transactions.

Sundial maintains a lease (wts, rts) on each data item, representing the time the item was last written and the latest time at which it may still be read. Each transaction maintains commit_ts, its commit timestamp. Each entry in the read-write set additionally records orig.wts and orig.rts, the item's wts and rts at the moment it was accessed. Sundial's main operations proceed as follows:

1. Read operation

a) First try to read the required data item from the read-write set; if it is not present there:

  1. Access the data store, locate and read the corresponding data item, and record its wts and rts at that moment as orig.wts and orig.rts;

  2. Update the current transaction's commit_ts = Max{orig.wts, commit_ts};

b) If the required data is already in the read-write set, return it directly;

2. Write operation

a) First look for the data item to be modified in the write set; if it is not present there:

  1. Lock the tuple; if locking fails, place the request in the waiting queue waiting_set;

  2. Otherwise, return the data item directly, recording its wts and rts as orig.wts and orig.rts;

b) If an entry for the data item already exists in the read-write set, update it in the write set;

c) Update the current transaction's commit_ts = Max{orig.rts, commit_ts};

3. Verification phase

a) First compute the commit timestamp commit_ts in the following two steps (this step is added in 3TS: because the read and write operations execute on the participants, the coordinator must aggregate information from all participants to obtain commit_ts before entering validation):

  1. Traverse the write set, and raise commit_ts so that it is greater than or equal to the orig.rts of every element in the write set;

  2. Traverse the read set, and raise commit_ts so that it is greater than or equal to the orig.wts of every element in the read set;

b) Validate each element in the read set:

  1. If commit_ts does not exceed the item's rts, skip the current element;

  2. Try to extend the tuple's lease: (1) if orig.wts != wts, i.e., the wts recorded at access time differs from the tuple's current wts, roll back the current transaction; (2) if the tuple is currently locked, roll back the current transaction; (3) otherwise, update the tuple's rts = Max{rts, commit_ts};

4. Write phase

a) On commit, update and unlock the data items corresponding to the elements in the write set;

b) On rollback, unlock the data items corresponding to the elements in the write set.
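The read-validation step above (b.1-b.2) can be sketched as follows. The structures and field names (`entry`, `tup`, `orig_wts`, `locked`) are illustrative assumptions, not Sundial's or 3TS's actual code:

```python
def sundial_validate_read(entry, tup, commit_ts):
    """Validate one read-set entry. `entry` records the orig_wts seen at
    access time; `tup` holds the tuple's current lease (wts, rts) and
    lock flag."""
    if commit_ts <= tup["rts"]:
        return True                           # commit point inside the lease
    if entry["orig_wts"] != tup["wts"]:
        return False                          # tuple rewritten since the read
    if tup["locked"]:
        return False                          # a writer holds the tuple
    tup["rts"] = max(tup["rts"], commit_ts)   # extend the lease
    return True
```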

 

5.5.3 Silo

The main difference between Silo [9] and traditional optimistic concurrency control lies in the validation phase. The main idea is to verify whether the data the transaction read has since been modified by other transactions. The validation process is:

  1. Lock all data items corresponding to elements of the write set;

  2. Validate each record in the read set: if it (1) has been modified by another transaction or (2) is locked by another transaction, roll back the current transaction;

  3. Obtain the commit timestamp and enter the write phase.
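The three steps above can be sketched as follows, under assumed illustrative structures (this is not Silo's code; a real implementation spins on held locks rather than aborting, and uses per-record version numbers called TIDs):

```python
def silo_validate(txn, db):
    """Silo-style validation sketch. `db` maps key -> {"tid": version,
    "locked": bool}; txn["read_set"] maps key -> the tid observed when
    the key was read."""
    locked = []
    for key in sorted(txn["write_set"]):      # 1. lock write set in order
        if db[key]["locked"]:
            for k in locked:
                db[k]["locked"] = False       # release and abort
            return False                      # simplification: no spinning
        db[key]["locked"] = True
        locked.append(key)
    for key, seen_tid in txn["read_set"].items():  # 2. re-check read set
        rec = db[key]
        if rec["tid"] != seen_tid or (rec["locked"] and key not in txn["write_set"]):
            for k in locked:
                db[k]["locked"] = False
            return False
    return True               # 3. caller assigns a commit tid and writes back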

 

5.6 Deterministic Concurrency Control Protocol (Calvin)

The main idea of Calvin [10] is to determine the order of transactions in advance and then execute them strictly in that order, avoiding the distributed coordination overhead required by other concurrency control protocols.

Calvin adds two modules: a sequencer and a scheduler. The sequencer intercepts transactions and fixes their order (the order in which they arrive at the sequencer), and the scheduler executes transactions in the order the sequencer assigns.

Calvin's transaction execution flow is as follows (assuming the transaction needs data on Server1 and Server2):

  1. The client sends the transaction to the Server1 node;

  2. Server1's sequencer receives the transaction and places it in a batch;

  3. When the batch's time window expires, the sequencer sends the batch containing the transaction to the schedulers of the transaction's two participant nodes, Server1 and Server2;

  4. The schedulers of Server1 and Server2 receive the batch and acquire locks in the order the batch specifies, then hand the batch's transactions to worker threads (WorkThread) for execution;

  5. After Server1 and Server2 finish executing all transactions in the batch, they send their results back to Server1;

  6. Server1 returns the transaction's completion to the client.

The locking mechanism during transaction execution still follows 2PL logic:

1. Read operation:

a) Check whether the data item holds an exclusive lock and whether its waiters list is empty. If there is no exclusive lock and the waiters list is empty, read the data item and add the current transaction to the owners list;

b) Otherwise, the lock request conflicts, and the current transaction is appended to the waiters list;

2. Write operation:

a) Check whether the data item holds any lock and whether its waiters list is empty. If there is no lock and the waiters list is empty, add the current transaction to the owners list;

b) Otherwise, the lock request conflicts, and the current transaction is appended to the waiters list.
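The lock-request handling above can be sketched as follows (an illustrative simplification to exclusive locks only; names are assumptions). The key property is determinism: because requests arrive in the sequencer's order, every replica grants and queues them identically:

```python
from collections import defaultdict, deque

class DeterministicLockTable:
    """Sketch of Calvin-style lock handling."""
    def __init__(self):
        self.owners = defaultdict(list)    # key -> txns holding the lock
        self.waiters = defaultdict(deque)  # key -> FIFO queue of blocked txns

    def acquire(self, txn_id, key):
        if not self.owners[key] and not self.waiters[key]:
            self.owners[key].append(txn_id)
            return True                    # granted immediately
        self.waiters[key].append(txn_id)   # conflict: wait in arrival order
        return False
```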

 

5.7 Concurrency control protocol based on snapshot isolation (SSI, WSI)

Snapshot Isolation (SI) [11] mainly constrains write-write and write-read conflicts on the same data item. For write-write conflicts, it stipulates that a data item cannot be modified concurrently by two transactions; under the "first committer wins" rule, the first write transaction to commit succeeds and the other is rolled back. For write-read conflicts, a transaction may only read the latest committed version of a data item, i.e., data consistent with the snapshot taken at the transaction's start. SI therefore offers non-blocking reads and writes. SI by itself is not serializable, so additional mechanisms must be layered on top of it to achieve serializability. 3TS implements two mainstream serializable snapshot isolation mechanisms: (1) SSI: Serializable Snapshot Isolation; (2) WSI: Write Snapshot Isolation.


5.7.1 SSI

If transaction Ti reads x and transaction Tj writes a new version of x, we say Ti rw-depends on Tj. SSI [12,13] proves theoretically that to achieve serializability on top of SI, it suffices to prohibit the pattern in which Ti rw-depends on Tj while some Tk simultaneously rw-depends on Ti. The core of the algorithm is to detect this situation dynamically, so each transaction records two fields, inConflict and outConflict: inConflict records transactions that rw-depend on the current transaction, and outConflict records transactions the current transaction rw-depends on. As soon as both fields of a transaction are non-empty, that transaction is rolled back immediately, guaranteeing serializability.
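The detection described above can be sketched as follows; as an illustrative simplification, the inConflict/outConflict lists are reduced to booleans, and all names are assumptions:

```python
class SSITxn:
    """Each transaction tracks the two conflict flags described above."""
    def __init__(self, name):
        self.name = name
        self.in_conflict = False    # some txn rw-depends on this one
        self.out_conflict = False   # this txn rw-depends on some other txn

def register_rw_dependency(reader, writer):
    """`reader` read a version that `writer` overwrote, so reader
    rw-depends on writer. Return a transaction to abort, or None if no
    dangerous structure has formed yet."""
    reader.out_conflict = True
    writer.in_conflict = True
    for t in (reader, writer):
        if t.in_conflict and t.out_conflict:
            return t        # pivot of two consecutive rw-dependencies
    return None
```

With T0 reading what T1 writes and T1 reading what T2 writes, T1 becomes the pivot with both flags set and is chosen for rollback.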

5.7.2 WSI

WSI [14] achieves serializability by transforming the detection of write-write conflicts into the detection of read-write conflicts, and then avoiding read-write conflicts.

For each transaction, WSI maintains a read set and a write set; to avoid phantoms, the query predicates of range queries are placed in the read set. For each record, a last-commit timestamp is maintained: whenever a transaction commits, the last-commit timestamp of every row it modified is updated to the transaction's commit timestamp. The pre-commit check is then: examine the data items corresponding to all elements of the read set, and if any item's last-commit timestamp is greater than the current transaction's start timestamp (a read-write conflict), roll back the current transaction.
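The pre-commit check reduces to a single comparison per read-set item, sketched here with illustrative names:

```python
def wsi_commit_check(read_set, start_ts, last_commit_ts):
    """WSI pre-commit check sketch: commit only if no item in the read set
    was re-committed after this transaction's start timestamp."""
    return all(last_commit_ts[key] <= start_ts for key in read_set)
```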

 

5.8 Concurrency control algorithms based on dynamic timestamps

In Section 5.5, we introduced several improved OCC algorithms; among them, MaaT and Sundial improve on OCC by combining it with the TO (timestamp ordering) approach. They are not, however, simply TO-based: the traditional TO algorithm is static, with deterministic, rigid timestamps, whereas MaaT, Sundial, and TicToc [22] use dynamic timestamp allocation, combining the advantages of the OCC framework with those of dynamic timestamps.

Dynamic timestamp allocation (DTA) was first proposed in [23] and has since been cited and applied in many works. Its core idea is to avoid relying on a centralized timestamp mechanism: based on the conflict relationships of concurrent transactions on data items, it dynamically adjusts each transaction's execution time range so as to serialize global transactions. This avoids rollbacks in cases that a static timestamp allocation algorithm would treat as conflicts.

Literature [22] introduces an algorithm called Time Traveling Optimistic Concurrency Control (TicToc). Built on OCC, it proposes "data-driven timestamp management": instead of assigning each transaction an independent (global) timestamp, it embeds the necessary (local) timestamp information in the data items being accessed and uses it to compute a valid commit timestamp for each transaction before commit. This computed (rather than pre-assigned) commit timestamp is used to resolve concurrency conflicts and ensure serializability. Because a distributed transaction need not rely on a global coordinator to assign timestamps at its start and commit phases, decentralization is achieved at these stages; and because the OCC mechanism is used in combination, the overlap of conflicting transactions' execution windows is reduced and concurrency improves.

Literature [7] implements the DTA algorithm within the OCC framework; this is the MaaT algorithm described above (Section 5.5), so we do not expand on it here.

 

6. 3TS features to be improved


The 3TS system provides a unified development platform on which multiple concurrency control algorithms can be compared and analyzed. The following aspects remain to be improved; they affect different concurrency control protocols to varying degrees and influence the accuracy of experimental results:

  1. Message communication: mechanisms such as RPC could replace the existing message communication mechanism to reduce the impact of message-queue waiting on transaction performance.

  2. Thread scheduling model (currently one thread bound to one core): more scheduling models could be introduced to help analyze how thread scheduling affects the performance of concurrency control protocols.

  3. SQL statements are not supported; SQL parsing and related operations need to be introduced to better simulate real database scenarios.

  4. Not all TPC-C transactions are supported; transaction types such as Delivery need to be added to support the full TPC-C benchmark.

  5. Global time: there is no global timestamp generation module, and relying on each machine's local timestamp may introduce clock skew, which affects OCC and other protocols.

  6. Deadlock detection: a deadlock detection algorithm could be introduced to better analyze other 2PL protocols.

  7. The algorithms inherited from Deneva cannot be switched dynamically; each algorithm is selected via C-language macros, so the system must be recompiled and rerun every time the algorithm is switched. This is a point to be improved.

 

Acknowledgments

Thanks to the Tencent CynosDB (TDSQL) team and the Key Laboratory of Data Engineering and Knowledge Engineering (Ministry of Education) at Renmin University of China for supporting this work, and to Zhao Zhanhao, Liu Chang, Zhao Hongyao, and other students for their contributions to this article.

 

References

[1] Rachael Harding, Dana Van Aken, Andrew Pavlo, Michael Stonebraker: An Evaluation of Distributed Concurrency Control. Proc. VLDB Endow. 10(5): 553-564 (2017)

[2] Philip A. Bernstein, Nathan Goodman: Concurrency Control in Distributed Database Systems. ACM Comput. Surv. 13(2): 185-221 (1981)

[3] Daniel J. Rosenkrantz, Richard Edwin Stearns, Philip M. Lewis II: System Level Concurrency Control for Distributed Database Systems. ACM Trans. Database Syst. 3(2): 178-198 (1978)

[4] D. P. Reed: Naming and Synchronization in a Decentralized Computer System. PhD thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 1978.

[5] H. T. Kung, John T. Robinson: On Optimistic Methods for Concurrency Control. ACM Trans. Database Syst. 6(2): 213-226 (1981)

[6] Theo Härder: Observations on Optimistic Concurrency Control Schemes. Inf. Syst. 9(2): 111-120 (1984)

[7] Hatem A. Mahmoud, Vaibhav Arora, Faisal Nawab, Divyakant Agrawal, Amr El Abbadi: MaaT: Effective and Scalable Coordination of Distributed Transactions in the Cloud. Proc. VLDB Endow. 7(5): 329-340 (2014)

[8] Xiangyao Yu, Yu Xia, Andrew Pavlo, Daniel Sánchez, Larry Rudolph, Srinivas Devadas: Sundial: Harmonizing Concurrency Control and Caching in a Distributed OLTP Database Management System. Proc. VLDB Endow. 11(10): 1289-1302 (2018)

[9] Stephen Tu, Wenting Zheng, Eddie Kohler, Barbara Liskov, Samuel Madden: Speedy Transactions in Multicore In-Memory Databases. SOSP 2013: 18-32

[10] Alexander Thomson, Thaddeus Diamond, Shu-Chun Weng, Kun Ren, Philip Shao, Daniel J. Abadi: Calvin: Fast Distributed Transactions for Partitioned Database Systems. SIGMOD Conference 2012: 1-12

[11] Hal Berenson, Philip A. Bernstein, Jim Gray, Jim Melton, Elizabeth J. O'Neil, Patrick E. O'Neil: A Critique of ANSI SQL Isolation Levels. SIGMOD Conference 1995: 1-10

[12] Alan D. Fekete, Dimitrios Liarokapis, Elizabeth J. O'Neil, Patrick E. O'Neil, Dennis E. Shasha: Making Snapshot Isolation Serializable. ACM Trans. Database Syst. 30(2): 492-528 (2005)

[13] Michael J. Cahill, Uwe Röhm, Alan D. Fekete: Serializable Isolation for Snapshot Databases. SIGMOD Conference 2008: 729-738

[14] Maysam Yabandeh, Daniel Gómez Ferro: A Critique of Snapshot Isolation. EuroSys 2012: 155-168

[15] https://en.wikipedia.org/wiki/Distributed_transaction

[16] P. Bernstein, V. Hadzilacos, N. Goodman: Concurrency Control and Recovery in Database Systems. Addison-Wesley, 1987.

[17] D. R. Ports, K. Grittner: Serializable Snapshot Isolation in PostgreSQL. Proc. VLDB Endow. 5(12): 1850-1861 (2012)

[18] J. Böttcher et al.: Scalable Garbage Collection for In-Memory MVCC Systems. VLDB 2019

[19] Yingjun Wu, Joy Arulraj, Jiexi Lin, Ran Xian, Andrew Pavlo: An Empirical Evaluation of In-Memory Multi-Version Concurrency Control. Proc. VLDB Endow. 10(7): 781-792 (2017)

[20] D. Lomet, M. F. Mokbel: Locking Key Ranges with Unbundled Transaction Services. VLDB 2009: 265-276

[21] Jinwei Guo, Peng Cai, Jiahao Wang, Weining Qian, Aoying Zhou: Adaptive Optimistic Concurrency Control for Heterogeneous Workloads. PVLDB 12(5): 584-596 (2019)

[22] X. Yu, A. Pavlo, D. Sanchez, S. Devadas: TicToc: Time Traveling Optimistic Concurrency Control. Proceedings of SIGMOD, 2016, pp. 209-220.

[23] Rudolf Bayer, Klaus Elhardt, Johannes Heigert, Angelika Reiser: Dynamic Timestamp Allocation for Transactions in Database Systems. DDB 1982: 9-20

Origin: blog.csdn.net/Tencent_TEG/article/details/110507725