Redis Cache Technology Learning Series: Transaction Processing

In the first article of this series, we mainly learned about "keys" and "values" in Redis. We noticed that Redis is a database with a client/server architecture: so far we have stored and read data by issuing commands in a terminal, which is a very typical "request-response" model. In practice, however, we have to face more complex business logic, and because Redis has no concept of tables like a traditional relational database, we run into two practical problems while using it: how to maintain the "keys" in the database well, and how to ensure that commands execute successfully while still executing them efficiently. The former, I think, is a design problem; the latter is a technical problem. The core of this article is to find answers to these two questions, and with them in mind we can formally enter the topic of this article: transaction processing in Redis.

Starting from database transactions

Whenever we talk about databases we inevitably mention transactions, so what is a transaction? A transaction is a series of operations performed as a single logical unit of work. So, first of all, a transaction is a series of operations, and this series of operations is binary: it is either executed completely or not executed at all. Transaction processing thus ensures that data-oriented resources are not permanently updated unless all operations within the transactional unit complete successfully. Let's take an example. Apart from queries, the three operations Insert, Delete and Update all modify data in the database. Because transaction processing must guarantee that a series of operations is either executed completely or not at all, every SQL statement executed inside a transaction generates an undo log, and what the undo log records is exactly the opposite of the current operation: the opposite of a delete is an insert, the opposite of an insert is a delete, and so on. What we usually call a transaction rollback is really the execution of the opposite operations recorded in these undo logs. This also tells us that the operations in a transaction can only be rolled back when they have been executed completely through this mechanism; if some unexpected situation causes the operations in the transaction not to be executed in full, we cannot guarantee that the data can be rolled back.

In database theory, for a logical unit of work to qualify as a transaction it must satisfy ACID: atomicity, consistency, isolation and durability. (1) Atomicity means that all SQL operations in a transaction form a single whole, so the commit can only be considered successful if every operation executes successfully; if any SQL statement fails while the transaction is being committed, the entire transaction must be rolled back to the state before the commit began. (2) Consistency means that when a transaction completes, all data must be left in a consistent state; in terms of the individual components of a database, this requires developers to ensure that data, indexes, constraints, logs and so on are consistent both before and after the transaction. (3) Isolation is mainly aimed at concurrency. Its core idea is that the modifications made by different concurrent transactions must be isolated from one another: if two transactions A and B execute concurrently, then from A's point of view B appears to have run either entirely before A or entirely after it, and the same holds from B's point of view. (4) Durability is comparatively simple: once a transaction completes, its effect on the data is permanent.

Transaction Processing in Redis

So far we have gained a basic understanding of the theory behind transaction processing in databases. Database systems in this world may differ greatly, but I believe that when it comes to transaction processing they all arrive at the same destination by different routes, just as the conventional way to resolve conflicts under concurrency is still locking. That is why I spend the effort to understand and explain this theoretical knowledge: technology changes with each passing day, and if we exhaust ourselves chasing it, we may gradually lose our love for this industry. I believe principles always matter more than frameworks; I have never studied computer science systematically, which I rather regret. A transaction in Redis can be regarded as a queue: we start a transaction with MULTI, which is equivalent to declaring a command queue; every command we then submit to Redis is enqueued into this command queue; and when we enter the EXEC command, the current transaction is triggered, which is equivalent to taking the commands out of the queue and executing them. So, from start to execution, a Redis transaction goes through three stages: starting the transaction, enqueuing the commands, and executing the transaction. Here is a simple example of using a transaction in Redis:

127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> SET Book_Name "Git Pro"
QUEUED
127.0.0.1:6379> SADD Program_Language "C++" "C#" "Java" "Python"
QUEUED
127.0.0.1:6379> GET Book_Name
QUEUED
127.0.0.1:6379>
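
At this point the three commands are only queued; nothing has actually been written yet. Issuing EXEC executes the queued commands in order and returns their replies as a list. Assuming these keys did not already exist, a continuation of the session above would typically look like this:

127.0.0.1:6379> EXEC
1) OK
2) (integer) 4
3) "Git Pro"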

We can notice that a transaction in Redis is essentially the same as a transaction in the usual sense, that is, a transaction is a single logical unit of work composed of a series of operations. In particular, because the commands of a Redis transaction are held in a queue, all commands in a transaction are executed in order, and the execution of the transaction will not be interrupted by commands sent from other clients.

A transaction is an atomic operation in the sense that the commands in it have only two possible outcomes: either all of them are executed or none of them are. If the client opens a transaction with MULTI but, for whatever reason, never executes the EXEC command, none of the commands in the transaction are executed; conversely, once the client executes EXEC after opening the transaction with MULTI, all of the commands in the transaction are executed.

A Redis transaction can be abandoned with the DISCARD command, which empties the command queue and aborts the transaction. If an error occurs while a command is being enqueued (for example, a malformed command), Redis refuses to run the transaction and cancels it when the client calls EXEC; but if an error occurs only while the transaction is executing, after EXEC has been issued, Redis simply reports the error for that command and carries on with the rest, so the failed command is effectively ignored.
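
The two error cases look quite different on the command line. The session below is only a sketch of the typical behaviour of a recent Redis server, reusing the Book_Name key from the earlier example: in the first transaction a command is rejected while being enqueued (wrong number of arguments), so EXEC aborts the whole transaction; in the second, INCR is queued successfully but fails at execution time because the stored value is a string, and the other command still takes effect.

127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> INCR Book_Name Extra_Argument
(error) ERR wrong number of arguments for 'incr' command
127.0.0.1:6379> SET Book_Name "Git Pro"
QUEUED
127.0.0.1:6379> EXEC
(error) EXECABORT Transaction discarded because of previous errors.
127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> SET Book_Name "Git Pro"
QUEUED
127.0.0.1:6379> INCR Book_Name
QUEUED
127.0.0.1:6379> EXEC
1) OK
2) (error) ERR value is not an integer or out of range
127.0.0.1:6379> GET Book_Name
"Git Pro"
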
We know that the common concurrency-control schemes are mainly pessimistic locking and optimistic locking, so let us first explain these two concepts. Pessimistic locking, as the name implies, is a pessimistic strategy: it assumes that a record must be locked before it is modified. If acquiring the lock fails, the record is being modified by someone else, and the attempt should either wait for that modification to finish and release the lock or throw an exception; if the lock is acquired, the record is modified and the lock is released when the transaction completes. Optimistic locking, again as the name implies, is an optimistic strategy: it assumes that between reading a record and updating it nobody else will modify it, so no lock is taken during that interval; instead, when the record is updated, a version number is checked to determine whether someone else has modified the record in the meantime.

Generally speaking, optimistic locking suits scenarios with relatively few write conflicts, while pessimistic locking suits scenarios with many. Redis provides a check-and-set mechanism, implemented mainly through the WATCH command, whose principle is the optimistic-locking strategy: before executing the EXEC command, Redis checks whether the value of each watched key has changed. If a value has changed, meaning someone has modified what is stored under that key, Redis automatically cancels the current transaction. Let's look at this simple example:

WATCH Record_Count
val = GET Record_Count
val = val + 1
MULTI
SET Record_Count $val
EXEC

In this example we try to perform an increment on Record_Count inside a transaction. This code is fine in a non-concurrent setting, but under concurrency, if another user modifies the value of Record_Count before our EXEC is executed, the final result will be one less than expected, because our SET silently overwrites that modification. With WATCH, Redis monitors Record_Count, and as soon as it detects that the value has changed it automatically cancels the transaction, which avoids the conflict.
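
To make the abort visible, here is a sketch of what the session looks like in redis-cli when a second client changes Record_Count between WATCH and EXEC (the value 42 written by the other client is just an illustrative assumption). EXEC returns a null reply, none of the queued commands run, and the application is expected to re-read the value and retry the transaction:

127.0.0.1:6379> SET Record_Count 10
OK
127.0.0.1:6379> WATCH Record_Count
OK
127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> SET Record_Count 11
QUEUED
(at this point another client executes: SET Record_Count 42)
127.0.0.1:6379> EXEC
(nil)
127.0.0.1:6379> GET Record_Count
"42"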

A brief talk about key management in Redis

Frankly speaking, this post has by now essentially covered the topic of transaction processing, so even though it may not have brought you many surprises, it could end here quite naturally. However, some friends left comments under earlier posts asking about key management in Redis, so I have decided to discuss that question briefly here. Since I am just as new to Redis as everyone else, the following opinions are merely my own; if you have any questions, please leave a message on the blog, and criticism and corrections are welcome. As I understand it, there are basically two strategies for handling expired keys in Redis, lazy deletion and periodic deletion, and this combination is in fact Redis's default behaviour: Redis uses both lazy deletion and periodic deletion to remove expired keys. The lazy-deletion strategy deletes an expired key when it happens to be accessed, while the periodic-deletion strategy actively searches for and deletes expired keys at regular intervals.
Therefore, based on these two key deletion strategies, we can think of the following approaches:

Temporary data can be stored under temporary keys: set an expiration time on such keys, and Redis will delete them automatically once they expire.
Persistent data can be stored under ordinary keys without an expiration time, and the client deletes them explicitly according to a convention agreed between the server side and the client side of the application.
Keys belonging to different modules should follow a single, standard naming convention, which avoids the confusion that otherwise creeps into key management in Redis.
Design a reasonable key-recycling mechanism so that Redis does not use more than roughly 95% of its memory, or let Redis evict keys actively by configuring its maximum memory capacity and memory (eviction) policy, as sketched in the example after this list.
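
As a minimal sketch of the first and last approaches above, the redis-cli session below stores a temporary key with an expiration time, checks its remaining lifetime, and then sets a maximum memory capacity together with an LRU eviction policy. The key name Session_Token, the 60-second lifetime, the 100mb limit and the allkeys-lru policy are all illustrative assumptions; in practice the memory settings usually belong in redis.conf rather than being changed at runtime.

127.0.0.1:6379> SET Session_Token "abc123" EX 60
OK
127.0.0.1:6379> TTL Session_Token
(integer) 60
127.0.0.1:6379> CONFIG SET maxmemory 100mb
OK
127.0.0.1:6379> CONFIG SET maxmemory-policy allkeys-lru
OK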