LWIP Revisited: Memory Heap Management

LWIP offers three main memory management schemes: memory pools, the memory heap, and the C library's malloc/free. The C-library approach allocates straight from the system heap and easily fragments it, so it is basically never used; of the other two, the heap is LWIP's default, and it strikes a good overall balance between efficiency and space. So let us follow the source down and see how this scheme is implemented and which techniques it uses. (Some of those techniques once left me confused, because Source Insight could not locate where certain variables were declared; once you understand them, you appreciate how capable the LWIP authors are.) We start with the heap implementation.

Heap

A common way to implement a memory heap is to reserve one large block of memory up front and satisfy all allocations and releases out of it. LWIP does exactly this: it defines a static array and then manages that array itself. See the code:

// the actual heap memory declaration
#ifndef LWIP_RAM_HEAP_POINTER
LWIP_DECLARE_MEMORY_ALIGNED(ram_heap, MEM_SIZE_ALIGNED + (2U * SIZEOF_STRUCT_MEM));
#define LWIP_RAM_HEAP_POINTER ram_heap
#endif

// the helper macro used in the declaration
#ifndef LWIP_DECLARE_MEMORY_ALIGNED
#define LWIP_DECLARE_MEMORY_ALIGNED(variable_name, size) u8_t variable_name[LWIP_MEM_ALIGN_BUFFER(size)]
#endif

Here MEM_SIZE_ALIGNED and SIZEOF_STRUCT_MEM are defined in mem.c as:

/** All allocated blocks will be MIN_SIZE bytes big, at least!
 * MIN_SIZE can be overridden to suit your needs. Smaller values save space,
 * larger values could prevent too small blocks to fragment the RAM too much. */
#ifndef MIN_SIZE
#define MIN_SIZE             12
#endif /* MIN_SIZE */
/* some alignment macros: we define them here for better source code layout */
#define MIN_SIZE_ALIGNED     LWIP_MEM_ALIGN_SIZE(MIN_SIZE)
#define SIZEOF_STRUCT_MEM    LWIP_MEM_ALIGN_SIZE(sizeof(struct mem))
#define MEM_SIZE_ALIGNED     LWIP_MEM_ALIGN_SIZE(MEM_SIZE)

It is worth pausing on LWIP_MEM_ALIGN_SIZE, a macro used all over LWIP. It is defined as follows:

#ifndef LWIP_MEM_ALIGN_SIZE
#define LWIP_MEM_ALIGN_SIZE(size) (((size) + MEM_ALIGNMENT - 1U) & ~(MEM_ALIGNMENT-1U))
#endif

MEM_ALIGNMENT is the alignment in bytes and is set in the configuration file. Adding MEM_ALIGNMENT - 1U before masking makes the operation round up, so the resulting size is never smaller than the one requested.

MEM_SIZE_ALIGNED: the total heap size after alignment. MEM_SIZE is what the user configures, so in general MEM_SIZE_ALIGNED >= MEM_SIZE.

SIZEOF_STRUCT_MEM: the size occupied by a memory control block, after alignment.

The memory control block structure is defined as:

struct mem {
  /** index (-> ram[next]) of the next struct */
  mem_size_t next;
  /** index (-> ram[prev]) of the previous struct */
  mem_size_t prev;
  /** 1: this area is used; 0: this area is unused */
  u8_t used;
#if MEM_OVERFLOW_CHECK
  /** this keeps track of the user allocation size for guard checks */
  mem_size_t user_size;
#endif
};

Note that next does not hold an address but an index into the heap array, and likewise for prev; my guess is that the author designed it this way to make merging blocks at release time easier. From the code above we can see that a ram_heap[MEM_SIZE_ALIGNED + (2U * SIZEOF_STRUCT_MEM)] array is defined, with room paid for two extra control blocks: one is the permanently occupied control block at the end of the heap, and the other covers the bytes discarded when the start of the heap is aligned, ensuring the usable heap is never smaller than the user-configured MEM_SIZE.

Next comes the heap initialization:

void
mem_init(void)
{
  struct mem *mem;

  /* align the heap */
  ram = (u8_t *)LWIP_MEM_ALIGN(LWIP_RAM_HEAP_POINTER);
  /* initialize the start of the heap */
  mem = (struct mem *)(void *)ram;
  mem->next = MEM_SIZE_ALIGNED;
  mem->prev = 0;
  mem->used = 0;
  /* initialize the end of the heap */
  ram_end = (struct mem *)(void *)&ram[MEM_SIZE_ALIGNED];
  ram_end->used = 1;
  ram_end->next = MEM_SIZE_ALIGNED;
  ram_end->prev = MEM_SIZE_ALIGNED;

  /* initialize the lowest-free pointer to the start of the heap */
  lfree = (struct mem *)(void *)ram;

  if (sys_mutex_new(&mem_mutex) != ERR_OK) {
    LWIP_ASSERT("failed to create mem_mutex", 0);
  }
}

After initialization the heap looks like the diagram below: the entire heap is one free block of MEM_SIZE_ALIGNED bytes, followed by a permanently occupied end block.

When memory is requested, LWIP applies a first-fit policy. The global lfree records the address of the lowest-addressed free block, while ram and ram_end record the aligned start and end of the heap. The allocation function below first checks the requested size for legality and aligns it, then traverses the block list starting from lfree. Once it finds a free block whose space covers the request plus a control block, it takes that block. Because of first fit, the block found may be far larger than needed; for instance, the very first request after initialization matches the entire heap, which obviously should not all be handed out. LWIP's approach is this: if, after deducting the control block and the requested size, the remainder of the block is at least the minimum block size, the excess is cut off and returned to the heap as a new free block; otherwise the caller simply receives a somewhat larger block than requested, which avoids creating uselessly small fragments. The simplified and annotated source:

void *
mem_malloc(mem_size_t size)
{
  mem_size_t ptr, ptr2;
  struct mem *mem, *mem2;

  if (size == 0) {
    return NULL;
  }
  /* round the requested size up to a multiple of the alignment */
  size = LWIP_MEM_ALIGN_SIZE(size);
  if (size < MIN_SIZE_ALIGNED) {
    /* too small: raise it to the configured minimum size */
    size = MIN_SIZE_ALIGNED;
  }
  if (size > MEM_SIZE_ALIGNED) {
    /* larger than the whole heap: fail, return NULL */
    return NULL;
  }
  /* mutual exclusion */
  sys_mutex_lock(&mem_mutex);
  /* compute the index of the lowest free block and scan from there */
  for (ptr = (mem_size_t)((u8_t *)lfree - ram); ptr < MEM_SIZE_ALIGNED - size;
       ptr = ((struct mem *)(void *)&ram[ptr])->next) {
    mem = (struct mem *)(void *)&ram[ptr];
    /* check whether the block at ptr is free and can hold the request in
       addition to its control block; if not, follow next to the next block */
    if ((!mem->used) &&
        (mem->next - (ptr + SIZEOF_STRUCT_MEM)) >= size) {
      /* big enough; now check whether the remainder after the cut would be
         at least one control block plus the minimum block size */
      if (mem->next - (ptr + SIZEOF_STRUCT_MEM) >= (size + SIZEOF_STRUCT_MEM + MIN_SIZE_ALIGNED)) {
        /* it is: split this block */
        ptr2 = ptr + SIZEOF_STRUCT_MEM + size;
        /* build a new control block on the cut-off remainder */
        mem2 = (struct mem *)(void *)&ram[ptr2];
        /* mark the new block as free */
        mem2->used = 0;
        /* the new block's next index is the original block's next index */
        mem2->next = mem->next;
        /* the new block's prev index is the start of the original block */
        mem2->prev = ptr;
        /* the original block now links forward to the new block */
        mem->next = ptr2;
        /* mark the original block as used */
        mem->used = 1;
        /* if the block after the new one is not the end block, update its
           prev index so it points back at the new block */
        if (mem2->next != MEM_SIZE_ALIGNED) {
          ((struct mem *)(void *)&ram[mem2->next])->prev = ptr2;
        }
      } else {
        /* the remainder would be too small: hand out the block uncut */
        mem->used = 1;
      }
      /* if the block just allocated was the lowest free block, lfree must
         move up to the next free address */
      if (mem == lfree) {
        struct mem *cur = lfree;
        /* walk forward to the next free block (or the end block) */
        while (cur->used && cur != ram_end) {
          cur = (struct mem *)(void *)&ram[cur->next];
        }
        lfree = cur;
      }
      sys_mutex_unlock(&mem_mutex);
      /* return the address offset past the control block, so that the user
         cannot accidentally modify it */
      return (u8_t *)mem + SIZEOF_STRUCT_MEM;
    }
  }
  sys_mutex_unlock(&mem_mutex);
  return NULL;
}

Note also that when running on an operating system, the allocation path is protected by a mutex. Every block, allocated or free, stays linked into the same list, which makes merging easy on subsequent releases.

Memory release

Here the point of storing indices into the heap array in next and prev, rather than real addresses, becomes apparent: because the blocks are organized in physical memory order, each block's neighbours can be found directly, which makes releasing memory easy, and the release function itself is very simple. One thing to note is that the address mem_malloc returns is not the true start of the block but the start offset forward past the control block; this prevents the user from inadvertently modifying, and thereby destroying, the control block. The cleverest part is the merging of blocks.

void
mem_free(void *rmem)
{
  struct mem *mem;

  if (rmem == NULL) {
    return;
  }
  if ((u8_t *)rmem < (u8_t *)ram || (u8_t *)rmem >= (u8_t *)ram_end) {
    /* address outside the heap: error, just return */
    return;
  }
  LWIP_MEM_FREE_PROTECT();
  /* step back over the control block to get the real start of the block */
  mem = (struct mem *)(void *)((u8_t *)rmem - SIZEOF_STRUCT_MEM);
  /* mark the block as free */
  mem->used = 0;
  if (mem < lfree) {
    /* the freed block lies below lfree: update lfree */
    lfree = mem;
  }
  /* merge with neighbouring free blocks */
  plug_holes(mem);
  LWIP_MEM_FREE_UNPROTECT();
}

static void
plug_holes(struct mem *mem)
{
  struct mem *nmem;
  struct mem *pmem;

  /* locate the block that follows the one being freed */
  nmem = (struct mem *)(void *)&ram[mem->next];

  if (mem != nmem && nmem->used == 0 && (u8_t *)nmem != (u8_t *)ram_end) {
    /* the next block is distinct from this one, free, and not the end block */
    if (lfree == nmem) {
      /* lfree may need updating here; my doubt: perhaps this guards against
         an interrupt performing a free concurrently with this function? */
      lfree = mem;
    }
    /* merge forward: the freed block absorbs the next block */
    mem->next = nmem->next;
    /* update the prev index of the block behind the absorbed one so it
       points back at the new, larger free block */
    ((struct mem *)(void *)&ram[nmem->next])->prev = (mem_size_t)((u8_t *)mem - ram);
  }
  /* locate the block that precedes the one being freed */
  pmem = (struct mem *)(void *)&ram[mem->prev];
  /* if it is distinct from this block and also free, merge backward */
  if (pmem != mem && pmem->used == 0) {
    /* lfree may need updating here as well; another point I am doubtful about */
    if (lfree == mem) {
      lfree = pmem;
    }
    /* the previous block absorbs the freed block */
    pmem->next = mem->next;
    /* update the prev index of the block behind the freed one so it points
       back at the previous block */
    ((struct mem *)(void *)&ram[mem->next])->prev = (mem_size_t)((u8_t *)pmem - ram);
  }
}

The merge function looks at the blocks on either side of the one being freed and, if either is free, coalesces them into one larger block. Since merging happens on every release, two adjacent free blocks can never both exist before a release, so handling only the immediate neighbours is sufficient. What I still cannot quite work out is the updating of the lfree pointer inside the merge, where I have left a doubt in the comments; if anyone can explain it, please point it out. Next time: the memory pool implementation.

 

2019-06-16  13:19:42

 


Origin www.cnblogs.com/w-smile/p/11031286.html