Detailed memory pool design and implementation

I. Introduction

As C++ programmers, we deal with memory operations all the time:

for example, allocating an object with new, or requesting a raw block of memory with malloc.

However, manual memory management brings two recurring problems:

  • Forgetting to release memory after allocating it, which causes memory leaks
  • Memory that cannot be reused effectively, which leads to heavy memory fragmentation

Both problems undermine the long-term stability of a program and can eventually cause it to crash.


II. Memory pools

A memory pool is a form of pooling technology. Normally, when we write programs we use new and delete to request and release memory, and every such request and release goes through the operating system's allocator, which hands out memory from the heap. When these operations are too frequent, the heap accumulates a large number of memory fragments, allocation performance drops, and allocations may even start to fail.

The memory pool is a technique created to solve this problem. Conceptually, allocating memory is nothing more than asking an allocator for a pointer. When the request goes to the operating system, the OS has to perform complex memory management and scheduling before it can hand back a valid pointer, and every such allocation carries a risk of failure.

Every allocation therefore costs time. If one allocation costs T, then n separate allocations cost nT in total. If we can estimate up front how much memory we will need and allocate a region of that size at the start, later requests are served directly from the pre-allocated region, so the total allocation cost stays close to a single T. The larger n is, the more time we save; a program that makes a million small allocations pays the allocator's overhead roughly once instead of a million times.

(Background adapted from material published online.)


III. Memory pool design

The memory pool design and implementation are mainly divided into the following parts:

  • Overload new
  • Create memory node
  • Create memory pool
  • Manage memory pool

Let's go through the design details one by one.

We will skip the operator new overloads for now and start with the memory node.

Memory pool node

A memory pool node needs to contain the following fields (a minimal sketch follows this list):

  1. The owning pool (pMem): when a block is released, the manager can call straight back into the pool that owns it to recycle the memory;
  2. The next node (pNext): the nodes are chained into a linked list so that all memory blocks can be tracked;
  3. Whether the node is in use (bUsed): checked so that a node is only handed out when it is free;
  4. Whether the node belongs to a pool (bBelong): a pool only maintains a limited amount of space, so a particularly large request goes through the normal allocation path and is released normally as well.
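
A minimal sketch of such a node, using the same field names as the implementation later in this article:

class MemoryPool; // forward declaration: the pool that owns a node

// Header placed in front of every block handed out by the allocator
struct MemoryBlock
{
    MemoryBlock* pNext;   // next free node in the pool's linked list
    bool         bUsed;   // is this node currently handed out?
    bool         bBelong; // does this node belong to a pool (false for oversized allocations)?
    MemoryPool*  pMem;    // owning pool, used to route the release back to it
};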

Memory pool design

The memory pool itself mainly consists of the following members (a minimal sketch follows this list):

  1. Buffer start address (_pBuffer): the address of the first block; all later blocks are located relative to it;
  2. Block list head (_pHeader): the head of the linked list of memory pool nodes described above;
  3. Block size (_nSize): how large each node's usable area is;
  4. Block count (_nBlock): how many nodes the pool holds in total;
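
A minimal sketch of the pool's state, again using the member names from the implementation below (methods and the mutex are omitted here):

class MemoryPool
{
    char*        _pBuffer; // start of the buffer (address of the first block)
    MemoryBlock* _pHeader; // head of the linked list of free blocks
    size_t       _nSize;   // usable size of each block, in bytes
    size_t       _nBlock;  // number of blocks in the pool
};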

Note that when a block is handed out, the pool allocates room for the node header plus the requested data, but the pointer returned to the caller must be offset past the header; when the block comes back, the pointer has to be moved back by the same amount to recover the header, otherwise the bookkeeping is corrupted and the program misbehaves.
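
A minimal sketch of this offset arithmetic (ToUser and ToHeader are illustrative helper names, not part of the implementation below; the casts to char* matter because the offset is measured in bytes, not in units of MemoryBlock):

// Hand a block to the caller: skip past the header
void* ToUser(MemoryBlock* pBlock)
{
    return (char*)pBlock + sizeof(MemoryBlock);
}

// Recover the header from a user pointer: step back by the header size
MemoryBlock* ToHeader(void* p)
{
    return (MemoryBlock*)((char*)p - sizeof(MemoryBlock));
}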

Releasing memory:

When a block is released, its bUsed flag is set back to false and the block is pushed onto the front of the free list: its pNext is pointed at the current head, and it then becomes the new head. This way recycled nodes are found first on the next allocation (see the sketch below).
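
A minimal sketch of that free-list push (PushFree is an illustrative helper; the implementation below does the same thing inline in FreeMemory):

// Recycle a pooled block: mark it free and push it onto the front of the free list.
// pHeader is passed by reference so the pool's head pointer is updated.
void PushFree(MemoryBlock*& pHeader, MemoryBlock* pBlock)
{
    pBlock->bUsed = false;
    pBlock->pNext = pHeader; // the old head becomes this block's next node
    pHeader = pBlock;        // the recycled block becomes the new head
}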

Memory pool management

When a memory pool is created, it is built according to the requested block size and block count.

Memory pool management means creating several pools for different needs and routing each request to the appropriate one.

The key concept here is array mapping.

Array mapping uses the requested size as an index into an array whose entries point at the pool responsible for that size range.

Here is the relevant code:

 void InitArray(int nBegin,int nEnd, MemoryPool*pMemPool)
 {
  for (int i = nBegin; i <= nEnd; i++)
  {
   _Alloc[i] = pMemPool;
  }
 }

Each size range is bound to its pool in this way; a standalone sketch of the idea follows.
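
As a self-contained illustration of the array-mapping idea (the Pool type and names here are simplified stand-ins, not the ManagerPool members shown later):

#include <cstddef>

// Simplified stand-in for a memory pool; only the block size matters here.
struct Pool { std::size_t blockSize; };

Pool pool128{128};
Pool pool256{256};
Pool* alloc[257]; // index = requested size, valid for 0..256

// Map every size in [nBegin, nEnd] to the same pool pointer.
void InitArray(int nBegin, int nEnd, Pool* pMemPool)
{
    for (int i = nBegin; i <= nEnd; i++)
        alloc[i] = pMemPool;
}

int main()
{
    InitArray(0, 128, &pool128);
    InitArray(129, 256, &pool256);
    // A 100-byte request is routed to the 128-byte pool with a single lookup.
    Pool* chosen = alloc[100];
    return chosen->blockSize == 128 ? 0 : 1;
}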



IV. Memory pool implementation

MemoryPool.hpp

#ifndef _MEMORYPOOL_HPP_
#define _MEMORYPOOL_HPP_

#include <iostream>
#include <mutex>

//maximum block size (in bytes) served by the pools; can be extended
#define MAX_MEMORY_SIZE 256

class MemoryPool;

//memory block header
struct MemoryBlock
{
 MemoryBlock* pNext;//next block in the free list
 bool bUsed;//is the block currently in use
 bool bBelong;//does the block belong to a memory pool
 MemoryPool* pMem;//the pool this block belongs to
};

class MemoryPool
{
public:
 MemoryPool(size_t nSize=128,size_t nBlock=10)
 {
  //by default the pool manages 10 blocks of 128 bytes each
  _nSize = nSize;
  _nBlock = nBlock;
  _pHeader = NULL;
  _pBuffer = NULL;
 }
 virtual ~MemoryPool()
 {
  if (_pBuffer != NULL)
  {
   free(_pBuffer);
  }
 }
 //allocate memory
 void* AllocMemory(size_t nSize)
 {
  std::lock_guard<std::mutex> lock(_mutex);
  //if the buffer address is null, the pool has not been initialized yet
  if (_pBuffer == NULL)
  {
   InitMemory();
  }
  MemoryBlock* pRes = NULL;
  //if the pool has run out of free blocks, fall back to a plain malloc
  if (_pHeader == NULL)
  {
   pRes = (MemoryBlock*)malloc(nSize+sizeof(MemoryBlock));
   pRes->bBelong = false;
   pRes->bUsed = true;
   pRes->pNext = NULL;
   pRes->pMem = NULL;
  }
  else
  {
   pRes = _pHeader;
   _pHeader = _pHeader->pNext;
   pRes->bUsed = true;
  }
  //return only the area behind the header to the caller
  return ((char*)pRes + sizeof(MemoryBlock));
 }

 //release memory
 void FreeMemory(void* p)
 {
  std::lock_guard<std::mutex> lock(_mutex);
  //the opposite of allocation: step back by the header size so the header is included
  MemoryBlock* pBlock = (MemoryBlock*)((char*)p - sizeof(MemoryBlock));
  if (pBlock->bBelong)
  {
   pBlock->bUsed = false;
   //push the block back onto the front of the free list
   pBlock->pNext = _pHeader;
   _pHeader = pBlock;
  }
  else
  {
   //a block that does not belong to the pool is simply freed
   free(pBlock);
  }
 }
 //initialize the pool's blocks
 void InitMemory()
 {
  if (_pBuffer)
   return;
  //size of each block including its header
  size_t PoolSize = _nSize + sizeof(MemoryBlock);
  //total amount of memory to request
  size_t BuffSize = PoolSize * _nBlock;
  _pBuffer = (char*)malloc(BuffSize);
  //initialize the head of the list
  _pHeader = (MemoryBlock*)_pBuffer;
  _pHeader->bUsed = false;
  _pHeader->bBelong = true;
  _pHeader->pMem = this;
  _pHeader->pNext = NULL;
  //initialize the remaining blocks and link them into a list
  //tmp1 tracks the tail of the list while linking
  MemoryBlock* tmp1 = _pHeader;
  for (size_t i = 1; i < _nBlock; i++)
  {
   MemoryBlock* tmp2 = (MemoryBlock*)(_pBuffer + i*PoolSize);
   tmp2->bUsed = false;
   tmp2->pNext = NULL;
   tmp2->bBelong = true;
   tmp2->pMem = this;
   tmp1->pNext = tmp2;
   tmp1 = tmp2;
  }
 }
public:
 //buffer start address (address of the first block)
 char* _pBuffer;
 //head of the block list
 MemoryBlock* _pHeader;
 //size of each block
 size_t _nSize;
 //number of blocks
 size_t _nBlock;

 std::mutex _mutex;
};

//the pool size and block count can be passed as template parameters
template<size_t nSize,size_t nBlock>
class MemoryPoolor:public MemoryPool
{
public:
 MemoryPoolor()
 {
  _nSize = nSize;
  _nBlock = nBlock;
 }

};

//the manager routes each request to the appropriate pool
class ManagerPool
{
public:
 static ManagerPool& Instance()
 {
  static ManagerPool memPool;
  return memPool;
 }

 void* AllocMemory(size_t nSize)
 {
  if (nSize <= MAX_MEMORY_SIZE)
  {
   //look up the pool responsible for this size range
   return _Alloc[nSize]->AllocMemory(nSize);
  }
  else
  {
   //oversized request: fall back to plain malloc and mark the block as not pooled
   MemoryBlock* pRes = (MemoryBlock*)malloc(nSize + sizeof(MemoryBlock));
   pRes->bBelong = false;
   pRes->bUsed = true;
   pRes->pMem = NULL;
   pRes->pNext = NULL;
   return ((char*)pRes + sizeof(MemoryBlock));
  }
 }

 //release memory
 void FreeMemory(void* p)
 {
  MemoryBlock* pBlock = (MemoryBlock*)((char*)p - sizeof(MemoryBlock));
  //pooled blocks go back to the pool that owns them
  if (pBlock->bBelong)
  {
   pBlock->pMem->FreeMemory(p);
  }
  else
  {
   //oversized blocks were allocated with malloc, so just free them
   free(pBlock);
  }
 }

private:
 ManagerPool()
 {
  InitArray(0,128, &_memory128);
  InitArray(129, 256, &_memory256);
 }

 ~ManagerPool()
 {
 }

 void InitArray(int nBegin,int nEnd, MemoryPool*pMemPool)
 {
  for (int i = nBegin; i <= nEnd; i++)
  {
   _Alloc[i] = pMemPool;
  }
 }
 //one pool per size class
 MemoryPoolor<128, 1000> _memory128;
 MemoryPoolor<256, 1000> _memory256;
 //mapping from requested size to the pool that serves it
 MemoryPool* _Alloc[MAX_MEMORY_SIZE + 1];
};
#endif

OperateMem.hpp

#ifndef _OPERATEMEM_HPP_
#define _OPERATEMEM_HPP_

#include <iostream>
#include <stdlib.h>
#include "MemoryPool.hpp"


void* operator new(size_t nSize)
{
 return ManagerPool::Instance().AllocMemory(nSize);
}

void operator delete(void* p) noexcept
{
 ManagerPool::Instance().FreeMemory(p);
}

void* operator new[](size_t nSize)
{
 return ManagerPool::Instance().AllocMemory(nSize);
}

void operator delete[](void* p) noexcept
{
 ManagerPool::Instance().FreeMemory(p);
}

#endif

main.cpp

#include "OperateMem.hpp"

using namespace std;

int main()
{
 char* p = new char[128];
 delete[] p;
 return 0;
}
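
A quick sanity-check sketch that exercises both the pooled path and the oversized fallback in ManagerPool::AllocMemory:

#include <cstdio>
#include "OperateMem.hpp"

int main()
{
    // Small request: 100 bytes <= MAX_MEMORY_SIZE, served by the 128-byte pool.
    char* small = new char[100];
    // Large request: bigger than MAX_MEMORY_SIZE, falls back to plain malloc
    // inside ManagerPool::AllocMemory and is released with free() on delete.
    char* large = new char[1024];
    std::printf("small=%p large=%p\n", (void*)small, (void*)large);
    delete[] small; // returned to the pool's free list
    delete[] large; // released via free()
    return 0;
}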

 


Source: blog.csdn.net/Linuxhus/article/details/114697247