How to manage the cache efficiently? -- LoopBuffer

For efficient memory management we need a buffer structure that copes with data whose size cannot be predicted. Every piece of incoming data must be written successfully, and even when the buffer grows dynamically the data already in memory must never be moved or copied. Data can only be read out in the order it was written, and data that has already been read is not moved either.

In CppNet, data streams are buffered through the CBuffer class, while the actual data is stored in CLoopBuffer. A loop buffer, as the name suggests, performs sequential reads and writes by moving pointers over a fixed block of memory.

Each loop buffer owns a fixed-size block of memory taken from the memory pool. Four pointers then strictly identify where the data sits; because the positions are identified strictly, we never need to memset the memory when it is allocated. Every read and write controls the flow of data simply by moving these pointers. The four pointers are described below (a minimal sketch of the structure follows the list):

start: points to the first byte of the allocated memory block.
end: points to the end of the allocated memory block.
read: the current read cursor.
write: the current write cursor.
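
To make the pointer roles concrete, here is a minimal, hypothetical C++ sketch of such a loop buffer. The class name, member names and method set are inventions for this article and are simplified; they are not necessarily what CppNet actually uses.

```cpp
#include <cstdint>

struct iovec;   // POSIX scatter/gather structure, used by the readv sketch later on

// A hypothetical, simplified loop buffer: one fixed block [_start, _end),
// two moving cursors, and a flag that tells "full" apart from "empty".
class LoopBufferSketch {
public:
    LoopBufferSketch(char* block, uint32_t len)
        : _start(block), _end(block + len),
          _read(block), _write(block), _full(false) {}

    uint32_t Write(const char* data, uint32_t len);   // copy in, move _write
    uint32_t Read(char* out, uint32_t len);           // copy out, move _read
    void     ResetIfEmpty();                          // snap cursors back to _start
    uint32_t GetCanReadLength() const;                // bytes available to read
    uint32_t GetCanWriteLength() const;               // bytes available to write
    int      GetFreeMemoryBlock(struct iovec* vec);   // writable regions for readv

private:
    char* _start;   // first byte of the allocated block
    char* _end;     // one past the last byte of the block
    char* _read;    // next byte to be read
    char* _write;   // next byte to be written
    bool  _full;    // distinguishes full from empty when _read == _write
};
```
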
When the loop buffer is first created, the pointers sit as shown in Figure 1:

Figure 1

start, read, and write all point to the beginning of the memory block; when read == write, there is no data to read. Next, some data is written, as shown in Figure 2:

Figure 2

The write pointer moves to the right, recording where the next write will go. The amount of readable data is now write - read, and the remaining writable memory is end - write.
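
As an illustration of this step, a Write for the not-yet-wrapped layout of Figure 2 might look like the sketch below (hypothetical code continuing the skeleton above, not CppNet's implementation):

```cpp
#include <algorithm>
#include <cstring>

// Write for the straight-line case of Figure 2: _write sits at or to the
// right of _read, so the free space used here is simply [_write, _end).
// (The wrapped layout of Figure 5 is discussed further below.)
uint32_t LoopBufferSketch::Write(const char* data, uint32_t len) {
    uint32_t can_write = static_cast<uint32_t>(_end - _write);
    uint32_t n = std::min(len, can_write);
    std::memcpy(_write, data, n);   // the only copy: bytes go straight into the block
    _write += n;
    if (n > 0 && _write == _read) { // write cursor landing on read means "full"
        _full = true;               // (can only happen once _write has wrapped)
    }
    return n;                       // bytes actually written; the caller handles the rest
}
```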

Next we read part of the data, as shown in Figure 3:

Figure 3

The read pointer moves to the right. The amount of data already read is read - start, the remaining readable data is write - read, and the remaining writable memory is (end - write) + (read - start).
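
The matching Read for this step could look like the following sketch, again continuing the hypothetical skeleton rather than quoting CppNet:

```cpp
#include <algorithm>
#include <cstring>

// Read for the layout of Figure 3: _read sits to the left of _write, so the
// readable data is the single span [_read, _write).
uint32_t LoopBufferSketch::Read(char* out, uint32_t len) {
    uint32_t can_read = static_cast<uint32_t>(_write - _read);
    uint32_t n = std::min(len, can_read);
    std::memcpy(out, _read, n);   // copy out; the data left behind is never moved
    _read += n;
    if (n > 0) {
        _full = false;            // after any successful read the block cannot be full
    }
    ResetIfEmpty();               // the reset step is explained with Figure 4 below
    return n;
}
```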

Next we read out all of the remaining data, as shown in Figure 4:

Figure 4

The read pointer keeps moving right until it catches up with the write pointer, so now read == write. When the two pointers are equal there are two possible situations: either the block is full or it is empty, and an extra member variable distinguishes them. Here read has caught up with write, so the block is empty: the readable size is 0 and the writable size is the whole block. To keep the writable region contiguous and make writev and readv calls easier, every time the read pointer catches up with the write pointer we reset all the pointers, returning to the state of Figure 1.
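
That reset could be sketched as a small helper, called at the end of Read in the hypothetical code above:

```cpp
// When the read cursor has caught up with the write cursor and the block is
// not marked full, it holds no data, so snap both cursors back to _start.
// Keeping the free region in one contiguous piece lets the readv/writev
// style interfaces further below hand out a single region more often.
void LoopBufferSketch::ResetIfEmpty() {
    if (_read == _write && !_full) {
        _read  = _start;
        _write = _start;
    }
}
```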

Then new data arrives, as shown in Figure 5:

Figure 5

Notice that write is now to the left of read. After write had been moving right and finally reached end, it had to wrap around to start; this is where the name "loop" comes from. Because there is now a gap between start and read, writing can continue there: write starts again from start and moves to the right. The readable data size is now (end - read) + (write - start), and the writable size is read - write.
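
Putting the straight and wrapped layouts together, the readable and writable lengths of the sketch can be computed as follows (assuming the full/empty flag described earlier):

```cpp
// Readable bytes: one straight span while _read <= _write (Figures 1-4),
// two pieces once _write has wrapped back past _start (Figure 5).
uint32_t LoopBufferSketch::GetCanReadLength() const {
    if (_full) {
        return static_cast<uint32_t>(_end - _start);    // every byte is data
    }
    if (_write >= _read) {
        return static_cast<uint32_t>(_write - _read);   // write - read
    }
    return static_cast<uint32_t>(_end - _read) +        // tail piece: end - read
           static_cast<uint32_t>(_write - _start);      // wrapped piece: write - start
}

// Writable bytes are simply the block size minus the readable bytes, which
// reduces to (end - write) + (read - start) or read - write depending on layout.
uint32_t LoopBufferSketch::GetCanWriteLength() const {
    return static_cast<uint32_t>(_end - _start) - GetCanReadLength();
}
```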

If still more data is written, write keeps moving right until it catches up with read. Then read == write again, but this time the memory block is full.

To cooperate with readv, we need an interface that returns the start address and length of the currently writable memory. From the scenarios walked through above, there are two cases:

1> In Figures 1, 2 and 5 (Figure 4 is reset back to the state of Figure 1) there is only one writable region: its start address is the write pointer and its length is read - write or end - write.
2> In Figure 3 there are two writable regions, starting at write and at start respectively, with writable lengths end - write and read - start. writev, which needs all of the readable data regions instead, works with the pointers in a similar but mirrored way, so it is not described in detail here; a hedged sketch of such an interface follows.
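
As an illustration only (the function name and signature are inventions of this sketch, not CppNet's API), an interface that hands the writable regions to readv might look like this:

```cpp
#include <sys/uio.h>   // struct iovec, readv

// Fill up to two iovec entries with the currently writable regions and
// return how many were filled: one region for the layouts of Figures 1, 2
// and 5, two regions for the layout of Figure 3. The caller can then pass
// them straight to readv(fd, vec, count) to receive data without an extra copy.
int LoopBufferSketch::GetFreeMemoryBlock(struct iovec* vec) {
    if (_full) {
        return 0;                                        // nothing is writable
    }
    if (_write >= _read) {
        // Free space after the write cursor ...
        vec[0].iov_base = _write;
        vec[0].iov_len  = static_cast<size_t>(_end - _write);
        // ... plus, possibly, the already-read space in front of the read cursor.
        if (_read > _start) {
            vec[1].iov_base = _start;
            vec[1].iov_len  = static_cast<size_t>(_read - _start);
            return 2;
        }
        return vec[0].iov_len > 0 ? 1 : 0;
    }
    // Wrapped layout of Figure 5: a single free span between _write and _read.
    vec[0].iov_base = _write;
    vec[0].iov_len  = static_cast<size_t>(_read - _write);
    return 1;
}
```

A mirrored function returning the one or two readable regions would serve writev in the same way.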

Through all of the read and write flows above you can see that data is copied only when it has to be, that is, when it is written in or read out; existing data is never moved or copied again, and the data never flows outside the bounds of the memory block.

But a loop buffer only has a fixed amount of memory. What happens when it is full and yet another write request arrives? This is where CBuffer comes into play.

CBuffer's implementation is in fact very similar to CLoopBuffer's: it also controls reads and writes through four pointers, and each pointer plays almost exactly the same role. The difference is that CLoopBuffer's pointers point to positions inside a memory block, whereas CBuffer's pointers point to CLoopBuffer memory blocks. All of the block nodes are managed through a singly linked list, and as long as the data does not fill every block, the pointers and their movements are identical to those of the loop buffer.
The only difference is that when all of the memory blocks are full (again read == write), CBuffer requests a new memory block from the memory pool and adds it to the list.
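
A hedged sketch of that growth step, reusing the hypothetical LoopBufferSketch from above: the buffer keeps a singly linked list of block nodes and appends a fresh node from the pool whenever the current write node cannot take all of the data. The block size and the pool hooks here are stand-ins, not CppNet's real memory pool.

```cpp
#include <cstdint>

// One node of the singly linked list: a fixed memory block plus its loop buffer.
constexpr uint32_t kBlockSizeSketch = 4096;

struct BufferNodeSketch {
    char              mem[kBlockSizeSketch];            // the fixed memory block
    LoopBufferSketch  block{mem, kBlockSizeSketch};     // loop buffer over that block
    BufferNodeSketch* next = nullptr;                   // singly linked list link
};

BufferNodeSketch* PoolMallocNode() { return new BufferNodeSketch(); }  // stand-in pool
void PoolFreeNode(BufferNodeSketch* node) { delete node; }             // stand-in pool

// A CBuffer-like wrapper: its read/write "pointers" refer to list nodes,
// not to bytes inside a block.
class BufferSketch {
public:
    void     Write(const char* data, uint32_t len);
    uint32_t Read(char* out, uint32_t len);

private:
    BufferNodeSketch* _read_node  = nullptr;   // node currently being read
    BufferNodeSketch* _write_node = nullptr;   // node currently being written
};

// When the current write node cannot take everything, append a fresh node
// from the pool and move the write "pointer" to it.
void BufferSketch::Write(const char* data, uint32_t len) {
    while (len > 0) {
        uint32_t n = _write_node ? _write_node->block.Write(data, len) : 0;
        data += n;
        len  -= n;
        if (len == 0) {
            break;
        }
        BufferNodeSketch* fresh = PoolMallocNode();
        if (_write_node) {
            _write_node->next = fresh;   // link to the tail of the list
        } else {
            _read_node = fresh;          // very first node: reading starts here too
        }
        _write_node = fresh;
    }
}
```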

There is more than one way to implement this. The difficulty is that the CLoopBuffer blocks are not allocated at sequential addresses, so the order of the read and write pointers cannot be determined by comparing addresses. In an earlier implementation each CLoopBuffer carried its own index, and every lookup had to go through an overridden operator< or operator> to determine the ordering. Profiling with valgrind showed that these comparisons were called so frequently that the CBuffer implementation had to be reworked.

After the rework, CBuffer manages the loop buffers with a singly linked list. When data is written and there is not enough space, a new node is requested from the memory pool and appended to the tail of the list, and the write pointer moves on to it. When data is read, once everything in the current loop buffer node has been read, that block is returned to the memory pool and the read pointer moves on to the next node. It works like the pontoon bridges in the NES Mario games: every brick you step on falls away behind you while a new brick appears in front, keeping the walkway intact. The whole process is a sequential left-to-right movement.
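
The read side of the same hypothetical sketch shows that pontoon-bridge behaviour: a node that has been read dry is returned to the pool and the read pointer steps onto the next node.

```cpp
// Read sequentially across the list. Every node that has been completely
// read (and is no longer the node being written) goes back to the pool,
// like a brick falling away behind Mario while the bridge continues ahead.
uint32_t BufferSketch::Read(char* out, uint32_t len) {
    uint32_t total = 0;
    while (len > 0 && _read_node) {
        uint32_t n = _read_node->block.Read(out + total, len);
        total += n;
        len   -= n;
        if (_read_node->block.GetCanReadLength() == 0 && _read_node != _write_node) {
            BufferNodeSketch* used = _read_node;
            _read_node = _read_node->next;   // step onto the next brick
            PoolFreeNode(used);              // the brick behind falls away
        } else {
            break;                           // no more data to take right now
        }
    }
    return total;
}
```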

That is the core of CppNet's cache management implementation.

For the source code on GitHub, please poke here.

Origin: juejin.im/post/5d92e8b451882532ce31369c