Redis performance analysis: why is Redis so fast?

Edited by echo. Reprints are welcome; please credit the source of the article. Feel free to add echo on WeChat (WeChat ID: t2421499075) to exchange and learn together. Win a hundred battles without calling yourself invincible, lose a hundred battles without losing heart, and keep pressing forward. That is what real strength looks like!


Redis owes its wide practical adoption to its performance. As a cache, Redis is a comparatively fast piece of middleware; it runs single-threaded, avoids extra threading overhead, and leaves applications more room to scale.

Redis performance in numbers

With a smooth network and the same CPU and Redis version, throughput depends on the size of the data being processed. The throughput chart from the official Redis site shows that when handling payloads of up to about 1,000 bytes, Redis holds steady at around 100,000 operations per second; as the payload size keeps growing, throughput starts to fall off gradually.
[Figure: Redis throughput vs. data size, from the official Redis site]

The figure below is the QPS benchmark chart provided by the official site; the reported figure reaches 100,000+ QPS (queries per second).
[Figure: Redis QPS benchmark, from the official Redis site]
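As a rough, hedged way to get a feel for this kind of measurement yourself, the sketch below uses the redis-py client to time a batch of SET commands against a local Redis instance (the localhost:6379 connection is an assumption, and the numbers will be far lower than the official redis-benchmark results, since a single synchronous Python client adds its own overhead):

```python
import time
import redis  # pip install redis

# Minimal throughput probe, assuming a Redis server on localhost:6379.
r = redis.Redis(host="localhost", port=6379, db=0)

N = 10_000
payload = b"x" * 100  # ~100-byte value, similar to the small-payload case above

start = time.perf_counter()
for i in range(N):
    r.set(f"bench:key:{i}", payload)
elapsed = time.perf_counter() - start

print(f"{N} SETs in {elapsed:.2f}s -> {N / elapsed:,.0f} ops/sec")
```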

Why is Redis so fast?

  • Pure in-memory key-value (KV) operations
  • A single-threaded core (no need to create or destroy threads, no context switching, no contention for shared resources)
  • Asynchronous, non-blocking I/O (multiplexing)

Why are pure in-memory KV operations fast?

From the description above we can see that Redis works on pure key-value data, and the vast majority of its requests are pure memory operations, which are extremely fast. The data lives in memory in structures such as a HashMap, so why is that fast? Let's compare several commonly used data structures and their strengths.

Data structure   Operation   Time complexity
List             insert      O(N)
List             select      O(1)
Set              insert      O(1)
Set              select      O(1)
HashMap          insert      O(1)
HashMap          select      O(1)

As the table shows, a HashMap's advantage is that both insertion and lookup run in O(1) time, so using this structure internally gives Redis a fundamental edge when serving queries. But fast data structures are not the whole story: Redis also owes its speed to its single-threaded design and asynchronous I/O.
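To make the hash case concrete, here is a small, hedged sketch using the redis-py client: HSET and HGET on a Redis hash are average O(1) per field, just like insert and lookup on an in-process HashMap (the connection parameters are assumptions).

```python
import redis  # pip install redis

# Assumes a Redis server on localhost:6379.
r = redis.Redis(host="localhost", port=6379, db=0)

# HSET/HGET on a Redis hash: average O(1) per field, like a HashMap.
r.hset("user:1001", mapping={"name": "echo", "city": "Shenzhen"})
print(r.hget("user:1001", "name"))    # -> b'echo'

# Plain string keys are also simple O(1) KV operations.
r.set("counter", 0)
r.incr("counter")                     # O(1) increment
print(r.get("counter"))               # -> b'1'
```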

Why does Redis use a single thread?

A single thread is enough for Redis! As the official description shown below explains, the bottleneck when using Redis is not the CPU: Redis is usually memory bound or network bound. For example, using pipelining, Redis running on an average Linux system can deliver on the order of one million requests per second, so if your application mainly uses O(N) or O(log(N)) commands, it is hardly going to use much CPU.
[Figure: excerpt from the official Redis documentation on single-threaded performance]

From that description we can see that when Redis runs single-threaded, one CPU core already provides all the processing power it needs; the real limits are memory and the network. But how can a single thread manage to be this fast?
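The pipelining mentioned above batches many commands into a single network round trip, which is how the million-requests-per-second figure is reached. As a hedged illustration (the connection details are assumptions), here is what it looks like with the redis-py client:

```python
import redis  # pip install redis

r = redis.Redis(host="localhost", port=6379, db=0)

# A pipeline queues commands on the client and ships them in one round trip,
# so the per-command network latency is paid only once for the whole batch.
pipe = r.pipeline()
for i in range(1000):
    pipe.set(f"pipe:key:{i}", i)
results = pipe.execute()   # all 1000 replies come back together

print(len(results), "commands executed in a single round trip")
```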

Where does single-threaded Redis beat a multi-threaded design?

From the official description above, Redis's bottleneck is neither threading nor access to CPU resources, so using multiple threads would only add extra overhead: context switches, contention for resources, and lock operations.

  • Context switching
    • The context is not hard to understand: it is essentially the CPU registers and the program counter, and its main role is to save a thread's state when the CPU is handed to another thread. Under multi-threading, not every thread can hold the CPU at once, yet our computers run many programs at the same time; that works because the CPU keeps switching between threads. At any moment some thread holds the CPU while others wait, and a waiting thread must be suspended, meaning its state is stored. That saved state is the context, and saving it and restoring it when the thread gets the CPU again is a context switch, and each switch costs time.
  • Competition for resources
    • Resource contention is relatively easy to understand: a context switch is in effect a hand-over of the CPU, and which thread wins that hand-over is decided by competing for the resource. Because Redis is single-threaded, none of its operations involve this kind of contention.
  • Lock overhead
    • With multiple threads, locks are unavoidable: if several threads touch the same data concurrently, the data may end up inconsistent or the operation may not have the intended effect, so locks are needed to protect it. With many threads, the constant acquiring and releasing of locks eats up a lot of time (a small sketch follows this list).
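As a minimal, hedged sketch of that cost (pure illustration, not Redis code): two threads incrementing a shared counter must acquire and release a lock on every update, overhead that a single-threaded server like Redis never pays.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n: int) -> None:
    global counter
    for _ in range(n):
        # Every increment pays for a lock acquire/release; a single-threaded
        # design like Redis updates shared state without any locking at all.
        with lock:
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000, but only because of the lock
```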

I/O multiplexing and non-blocking mode

Many people are not sure what blocking I/O actually is: how does a blocking I/O operation come about, and how does Redis get around it?

  • Blocking I/O: when a user thread issues an I/O request, the kernel checks whether the data is ready; if it is not, the kernel waits for it, and the user thread is blocked and gives up the CPU. Once the data is ready, the kernel copies it to the user thread and returns the result, and only then does the user thread leave the blocked state.
  • Multiplexing in Redis: I/O multiplexing lets a single thread manage many I/O streams by tracking the state of every socket (I/O stream). select, poll, and epoll are all concrete implementations of I/O multiplexing, with epoll performing better than the others. Redis implements all of its I/O multiplexing through a common wrapper over the select, epoll, evport, and kqueue multiplexing libraries.
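As a hedged, simplified sketch of the same idea (this is not Redis's actual event loop, which is implemented in C), Python's selectors module picks epoll, kqueue, or select for you and lets one thread serve many sockets:

```python
import selectors
import socket

# One thread, many sockets: the selector (epoll/kqueue/select underneath)
# tells us which socket is readable, so we never block on a single client.
sel = selectors.DefaultSelector()

server = socket.socket()
server.bind(("127.0.0.1", 6380))   # arbitrary demo port, not the real Redis port
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ)

def serve_forever() -> None:
    while True:
        for key, _events in sel.select():    # wait for any ready socket
            if key.fileobj is server:        # new client connecting
                conn, _addr = server.accept()
                conn.setblocking(False)
                sel.register(conn, selectors.EVENT_READ)
            else:                            # existing client sent data
                conn = key.fileobj
                data = conn.recv(1024)
                if data:
                    conn.sendall(data)       # echo it back
                else:
                    sel.unregister(conn)
                    conn.close()

if __name__ == "__main__":
    serve_forever()
```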

Be a blogger with a bottom line.

Origin www.cnblogs.com/xlecho/p/11832118.html