C cross-platform development library TBOX: the memory pools in detail

TBOX is a cross-platform development library written in C.
For each platform it wraps the commonly used operations behind a unified interface, simplifying the development process so that you can focus on the actual application instead of wasting time on trivial interface compatibility, while still taking full advantage of each platform's unique characteristics for optimization.
The purpose of this project is to make C development simpler and more efficient.

Overall memory architecture

TBOX's memory management model takes the Linux kernel's memory management mechanism as a reference and makes some improvements and optimizations on top of it.

Memory pool architecture

Large memory pool: large_pool

At the bottom of the entire allocation stack, everything is based on the large memory pool, large_pool, similar to Linux's page-based allocation management. The difference is that large_pool does not use a buddy algorithm like Linux to allocate in (2^N) * page units: with a buddy allocator, a request for 2.1MB must be served with a 4MB block, which is far too coarse-grained and very wasteful.

Instead, large_pool allocates internally in units of N * page, with page_size as the minimum granularity, so each allocation wastes at most a little less than one page.

And if the request does not fill the last page, the leftover memory is also reported back to the caller; an upper layer (such as small_pool) can then make use of this extra space, further improving memory utilization.
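
The granularity difference can be sketched with a little arithmetic (a minimal sketch with hypothetical helper names, assuming a 4KB page):

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE 4096u /* assumed 4KB page */

/* Pages needed when allocating in N * page units, as large_pool does
 * (hypothetical helper names, illustration only). */
static size_t pages_needed(size_t size)
{
    return (size + PAGE_SIZE - 1) / PAGE_SIZE;
}

/* A buddy allocator must instead round the page count up to a power of two. */
static size_t buddy_pages(size_t size)
{
    size_t n = pages_needed(size), p = 1;
    while (p < n) p <<= 1;
    return p;
}
```

For a roughly 2.1MB request (2100 * 1024 = 2150400 bytes), pages_needed gives 525 pages, while buddy_pages gives 1024 pages: the 4MB block mentioned above.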

Depending on the arguments actually passed to tb_init, large_pool works in one of two modes:

  1. Use the system allocation interfaces directly to allocate large blocks and maintain them in a doubly-linked list. This mode is straightforward and needs no further explanation.
  2. Manage one large contiguous region of memory and implement allocation inside it.

Which mode to use depends on the application. An ordinary application only needs mode 1: just pass tb_null to tb_init. An embedded application that must manage a limited, fixed region of memory can use mode 2 by passing the region's address and size to tb_init.

Here is the memory layout of large_pool in mode 2 (assuming a page size of 4KB):

     --------------------------------------------------------------------------
    |                                     data                                 |
     --------------------------------------------------------------------------
                                         |
     --------------------------------------------------------------------------
    | head | 4KB | 16KB | 8KB | 128KB | ... | 32KB |       ...       |  4KB*N  |
     --------------------------------------------------------------------------

Since large_pool is mainly used for large-block allocation, and tiny allocations are split off into small_pool in the layer above, allocations from large_pool are not very frequent and fragmentation stays low. On free, the block is merged with the next adjacent free block right away; on malloc, if the currently allocated block does not have enough space, the allocator also attempts to merge it with the next adjacent free block.

Because blocks are not kept in doubly-linked lists, and each block carries a small header in front of it, merging only needs to update the size field in the block header, so merging does not hurt efficiency.

Because large_pool does not, like the buddy algorithm, maintain free blocks in doubly-linked lists, it saves both the list memory and the time to maintain it; the downside is that a naive allocation would have to walk all allocated blocks sequentially to find a free one, which is far too slow. To solve this, large_pool keeps prediction caches for blocks at several size levels: on every malloc or free, the current block and its adjacent free block are cached into the prediction slot of the matching level. The levels are graded as follows:

     --------------------------------------
    | >0KB :      4KB       | > 0*page     | 
    |-----------------------|--------------
    | >4KB :      8KB       | > 1*page     | 
    |-----------------------|--------------
    | >8KB :    12-16KB     | > 2*page     | 
    |-----------------------|--------------
    | >16KB :   20-32KB     | > 4*page     | 
    |-----------------------|--------------
    | >32KB :   36-64KB     | > 8*page     | 
    |-----------------------|--------------
    | >64KB :   68-128KB    | > 16*page    | 
    |-----------------------|--------------
    | >128KB :  132-256KB   | > 32*page    | 
    |-----------------------|--------------
    | >256KB :  260-512KB   | > 64*page    | 
    |-----------------------|--------------
    | >512KB :  516-1024KB  | > 128*page   | 
    |-----------------------|--------------
    | >1024KB : 1028-...KB  | > 256*page   | 
     --------------------------------------

Since individual allocations are usually not huge, predicting blocks up to 1MB is sufficient; for blocks larger than 1MB there is one extra prediction slot to handle the occasional very large allocation, which also keeps the overall allocation process uniform.

If the prediction slot of the current level has no block, the next level's prediction slot is tried; if that fails too, the allocator falls back to walking the entire memory pool.
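
As a rough sketch of how a request's page count could be mapped onto the levels above (hypothetical helper name; the real large_pool code may differ):

```c
#include <assert.h>

/* Map a block's page count to a prediction level, matching the table above:
 * 1 page -> level 0, 2 -> 1, 3-4 -> 2, 5-8 -> 3, ..., 129-256 -> 8,
 * and anything beyond 1MB -> the single extra level 9.
 * Hypothetical helper name; a sketch, not the actual tbox code. */
static int large_pool_pred_level(unsigned pages)
{
    if (pages <= 1) return 0;
    // ceil(log2(pages)) via the leading-zero count discussed later on
    int level = 32 - __builtin_clz(pages - 1);
    return level > 9 ? 9 : level;
}
```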

In actual tests the prediction hit rate of each level is basically above 95%, which means that in most cases allocation efficiency stays at the O(1) level.

Small memory pool: small_pool


On every upper-layer malloc, the requested size is examined: if it is greater than or equal to one page, it is allocated directly from large_pool; if it is less than one page, it goes through small_pool first, which caches small blocks and optimizes both space management and allocation efficiency.

Since most programs spend most of their time on small allocations, small_pool diverts a large share of the traffic, which reduces the pressure on large_pool and greatly cuts down fragmentation. And because small_pool internally manages fixed-size blocks through fixed_pool, there is no external fragmentation at all; since the block granularity itself is small, internal fragmentation is also quite small.

Inside small_pool, fixed_pool plays the role of SLUB in the Linux kernel. small_pool has 12 levels of fixed_pool in total, each managing memory blocks of one fixed size, graded as follows:

     --------------------------------------
    |    fixed pool: 16B    |  1-16B       | 
    |--------------------------------------|
    |    fixed pool: 32B    |  17-32B      |  
    |--------------------------------------|
    |    fixed pool: 64B    |  33-64B      | 
    |--------------------------------------|
    |    fixed pool: 96B*   |  65-96B*     | 
    |--------------------------------------|
    |    fixed pool: 128B   |  97-128B     |  
    |--------------------------------------|
    |    fixed pool: 192B*  |  129-192B*   |  
    |--------------------------------------|
    |    fixed pool: 256B   |  193-256B    |  
    |--------------------------------------|
    |    fixed pool: 384B*  |  257-384B*   |  
    |--------------------------------------|
    |    fixed pool: 512B   |  385-512B    |  
    |--------------------------------------|
    |    fixed pool: 1024B  |  513-1024B   |  
    |--------------------------------------|
    |    fixed pool: 2048B  |  1025-2048B  |  
    |--------------------------------------|
    |    fixed pool: 3072B* |  2049-3072B* |  
     -------------------------------------- 

The sizes 96B, 192B, 384B, and 3072B are not integer powers of two; they exist mainly to use memory more efficiently by reducing internal fragmentation.
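
The 12-level size-class lookup implied by the table above can be sketched as follows (hypothetical helper name; an illustration, not the actual tbox code):

```c
#include <assert.h>
#include <stddef.h>

/* Map a small request (1..3072 bytes) to its fixed_pool block size
 * from the table above. */
static size_t small_pool_block_size(size_t size)
{
    static const size_t classes[] =
        { 16, 32, 64, 96, 128, 192, 256, 384, 512, 1024, 2048, 3072 };
    size_t i;
    for (i = 0; i < sizeof(classes) / sizeof(classes[0]); i++)
        if (size <= classes[i]) return classes[i];
    return 0; /* larger than 3072B: served by large_pool instead */
}
```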

Fixed block memory pool: fixed_pool

As the name implies, fixed_pool manages the allocation of fixed-size memory blocks, corresponding to SLUB in Linux. A fixed_pool in turn consists of multiple slots; each slot is responsible for block management inside one contiguous region of memory, similar to a slab in Linux. The slots are maintained in doubly-linked lists and, following the Linux design, are divided into three groups:

  1. The slot currently being allocated from
  2. The list of partially free slots
  3. The list of completely full slots

Specific structure is as follows:

    current:
         --------------
        |              |
     --------------    |
    |     slot     |<--
    |--------------|
    ||||||||||||||||  
    |--------------| 
    |              | 
    |--------------| 
    |              | 
    |--------------| 
    ||||||||||||||||  
    |--------------| 
    |||||||||||||||| 
    |--------------| 
    |              | 
     --------------  

    partial:

     --------------       --------------               --------------
    |     slot     | <=> |     slot     | <=> ... <=> |     slot     |
    |--------------|     |--------------|             |--------------|
    ||||||||||||||||     |              |             |              |
    |--------------|     |--------------|             |--------------|
    |              |     ||||||||||||||||             |              |
    |--------------|     |--------------|             |--------------|
    |              |     ||||||||||||||||             ||||||||||||||||
    |--------------|     |--------------|             |--------------|
    ||||||||||||||||     ||||||||||||||||             |              |
    |--------------|     |--------------|             |--------------|
    ||||||||||||||||     |              |             |              |
    |--------------|     |--------------|             |--------------|
    |              |     |              |             ||||||||||||||||
    --------------       --------------               --------------

    full:

     --------------       --------------               --------------
    |     slot     | <=> |     slot     | <=> ... <=> |     slot     |
    |--------------|     |--------------|             |--------------|
    ||||||||||||||||     ||||||||||||||||             ||||||||||||||||
    |--------------|     |--------------|             |--------------|
    ||||||||||||||||     ||||||||||||||||             ||||||||||||||||
    |--------------|     |--------------|             |--------------|
    ||||||||||||||||     ||||||||||||||||             ||||||||||||||||
    |--------------|     |--------------|             |--------------|
    ||||||||||||||||     ||||||||||||||||             ||||||||||||||||
    |--------------|     |--------------|             |--------------|
    ||||||||||||||||     ||||||||||||||||             ||||||||||||||||
    |--------------|     |--------------|             |--------------|
    ||||||||||||||||     ||||||||||||||||             ||||||||||||||||
     --------------       --------------               --------------

The allocation algorithm

  1. If the current slot has free blocks, allocate from the current slot first.
  2. If the current slot has no free blocks left, move it to the full list.
  3. Pick a slot from the partially-free list, allocate from it, and make it the new current slot.

The release algorithm

  1. If the slot becomes completely free after the release and it is not the slot currently being allocated from, free the whole slot. This keeps one allocatable slot around at all times, greatly reduces memory usage, and avoids frequently freeing and re-allocating slots in certain workloads.
  2. If the freed block's slot was on the full list and has become partially free, move it to the partially-free slot list.

One more thing worth mentioning:

When large_pool allocates the space for a slot, the leftover tail of the last page (< 1 * page) is handed to the slot as well, so the slot can make full use of it and carve out a few more memory blocks.

For example:

Suppose a fixed_pool slot grows by 256 memory blocks of 32B each; it needs 8192B plus 16B of internal bookkeeping data, i.e. 8208B. When allocating this from large_pool, the size is rounded up to the page size (4KB), so the actual allocation occupies 8192 + 4096 = 12288B of space.

But large_pool reports the whole usable space back to the caller, so the slot actually receives 12288B and also knows its real size. It can therefore split itself into (12288 - the slot's internal bookkeeping data) / 32 = 383 memory blocks.

That is 127 extra memory blocks maintained for free; making full use of large_pool's internal fragmentation further increases memory utilization.
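
The slot-growth arithmetic above can be checked in code (hypothetical helper names, 4KB page assumed):

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE 4096u /* assumed 4KB page */

/* The request is page-aligned, and the whole returned space is
 * re-split into fixed-size blocks. */
static size_t page_align(size_t size)
{
    return (size + PAGE_SIZE - 1) & ~(size_t)(PAGE_SIZE - 1);
}

static size_t slot_block_count(size_t total, size_t head, size_t block)
{
    return (total - head) / block;
}
```

256 * 32 + 16 = 8208B rounds up to 12288B, and (12288 - 16) / 32 = 383 blocks: 127 more than the 256 requested.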

The slot inside fixed_pool

Although we draw the analogy with the Linux slab, its data structure is actually quite different: it does not maintain a free list for each small block the way slab does, but tracks the free information directly in a bitmap. This saves more memory, and with an optimized algorithm its allocation efficiency is almost the same as slab's.

At the head of each slot in fixed_pool there is a small dedicated data region that maintains the free information for every block: each block needs only a single bit to record whether it is free. Since all blocks have the same fixed size, a bit's position yields the block's index by simple calculation.

On every free and allocation, the bit information is cached one 32-bit word at a time for prediction; with 32 bits per word, each cached word can predict up to 32 adjacent memory blocks for the next allocation. In most cases the prediction hit rate exceeds 98% and allocation efficiency stays at O(1); this is even higher than large_pool's prediction rate, so diverting traffic from large_pool through small_pool also further improves overall allocation efficiency.

And even with bad luck, when prediction fails, the sequential scan of the slot for a free block is a thoroughly optimized and quite efficient algorithm, described in detail below.

Optimizing the slot's sequential scan

Here we mainly use a few GCC built-in functions:

  1. __builtin_clz: count the leading zeros of a 32-bit integer
  2. __builtin_ctz: count the trailing zeros of a 32-bit integer
  3. __builtin_clzll: count the leading zeros of a 64-bit integer
  4. __builtin_ctzll: count the trailing zeros of a 64-bit integer

All four are similar, so let's take __builtin_clz first and explain why we want it: within a 32-bit word we need to quickly find the index of a free bit, which immediately gives the position of a free block.

For a 32-bit bitmap word x in which a 0 bit marks a free block, the index of the first free bit, counting from the most significant bit, is simply: __builtin_clz(~x)

This is fast because GCC implements these built-ins with highly optimized code for each platform. But what if the compiler is not GCC?

No problem: we can implement an optimized version in C ourselves (it could of course be tuned further per platform). Here is a C implementation:

    static __tb_inline__ tb_size_t tb_bits_cl0_u32_be_inline(tb_uint32_t x)
    {
        // check
        tb_check_return_val(x, 32);

        // done
        tb_size_t n = 31;
        if (x & 0xffff0000) { n -= 16;  x >>= 16;   }
        if (x & 0xff00)     { n -= 8;   x >>= 8;    }
        if (x & 0xf0)       { n -= 4;   x >>= 4;    }
        if (x & 0xc)        { n -= 2;   x >>= 2;    }
        if (x & 0x2)        { n--;                  }
        return n;
    }

Put plainly, it halves the search range at every step to reduce the number of comparisons, which is already far more efficient than enumerating bit by bit, let alone when __builtin_clz is available.
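
For reference, here is the same halving idea in plain C without the tbox macros, so it can be checked directly against the built-in:

```c
#include <assert.h>
#include <stdint.h>

/* Count leading zeros of a 32-bit integer by halving the search range. */
static unsigned clz32(uint32_t x)
{
    if (!x) return 32;               /* zero has 32 leading zeros */
    unsigned n = 31;
    if (x & 0xffff0000u) { n -= 16; x >>= 16; } /* top half set? */
    if (x & 0xff00u)     { n -= 8;  x >>= 8;  }
    if (x & 0xf0u)       { n -= 4;  x >>= 4;  }
    if (x & 0xcu)        { n -= 2;  x >>= 2;  }
    if (x & 0x2u)        { n -= 1; }
    return n;
}
```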

Now the actual traversal:

  1. Align the start address of the bitmap to 4/8 bytes.
  2. Walk the bitmap 4/8 bytes at a time, with targeted loop unrolling sized to the CPU cache to optimize performance.
  3. Quickly skip words that are already fully allocated (0xffffffff) by testing !(x + 1), further improving traversal efficiency.
  4. If a word is not 0xffffffff, compute the actual free block index with __builtin_clz(~x) and perform the allocation.
  5. Finally, if the 32-bit word is still not fully allocated, cache it to predict the next allocation.
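
Steps 3 and 4 can be sketched as a small self-contained scan (hypothetical helper name; the real code also does the alignment and unrolling described above):

```c
#include <stdint.h>

/* Find the index of the first free (0) bit in a bitmap of `words` 32-bit
 * words, where a 1 bit marks an allocated block. Returns -1 if full. */
static int bitmap_find_free(const uint32_t* bits, int words)
{
    int i;
    for (i = 0; i < words; i++)
    {
        uint32_t x = bits[i];
        /* skip fully allocated words: x == 0xffffffff <=> !(x + 1) */
        if (!(x + 1)) continue;
        /* index of the first 0 bit, counting from the most significant bit */
        return i * 32 + __builtin_clz(~x);
    }
    return -1;
}
```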

String memory pool: string_pool

With that, the TBOX memory pool management model is basically covered. Let's briefly mention string_pool, the string pool.

string_pool mainly serves upper-layer modules that frequently use many small strings with a high repetition rate; it can further reduce memory usage by keeping only one copy of each identical string, maintained internally with reference counting plus a hash table.

For example, it can be used to maintain cookie strings, strings in HTTP header fields, and so on.
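
The idea can be illustrated with a toy interning pool (hypothetical names throughout; the real string_pool uses an internal hash table and is far more complete, while this sketch scans a small array):

```c
#include <stdlib.h>
#include <string.h>

/* Identical strings share one reference-counted copy. */
typedef struct { char* cstr; size_t refn; } pool_item_t;
static pool_item_t g_pool[64];

static char const* pool_insert(char const* s)
{
    size_t i;
    /* existing copy? just bump its reference count */
    for (i = 0; i < 64; i++)
    {
        if (g_pool[i].cstr && !strcmp(g_pool[i].cstr, s))
        {
            g_pool[i].refn++;
            return g_pool[i].cstr;
        }
    }
    /* first occurrence: store one copy */
    for (i = 0; i < 64; i++)
    {
        if (!g_pool[i].cstr)
        {
            g_pool[i].cstr = (char*)malloc(strlen(s) + 1);
            strcpy(g_pool[i].cstr, s);
            g_pool[i].refn = 1;
            return g_pool[i].cstr;
        }
    }
    return NULL; /* pool full */
}
```

Inserting the same string twice returns the same pointer, so equal strings can even be compared by pointer identity.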

Switching the global memory allocator

tbox's default memory allocation is built entirely on its own memory pool architecture, which supports fast allocation and fragmentation optimization, and supports various kinds of memory leak and overflow detection.

If you do not want tbox's built-in default memory management, you can flexibly switch to another allocation mode, because tbox now fully supports an allocator architecture:
simply pass a different allocator at the init stage to switch allocation modes quickly. For example:

The default memory allocator

Initializing tbox the default way uses tbox's default memory management, which by default enables all the features: memory pool maintenance, fragmentation optimization, and memory leak and overflow detection.

tb_init(tb_null, tb_null);

The initialization above is equivalent to:

tb_init(tb_null, tb_default_allocator(tb_null, 0));

The default allocator usually calls the system's native malloc directly for raw memory, but adds a layer of memory management and memory checking on top. If you want to host it entirely on one contiguous region of memory instead, you can do it this way:

tb_init(tb_null, tb_default_allocator((tb_byte_t*)malloc(300 * 1024 * 1024), 300 * 1024 * 1024));

Static memory allocator

We can also maintain everything directly on a single static buffer, with all the memory leak and overflow detection features enabled. The difference from tb_default_allocator is that
this allocator is relatively lightweight, with simple internal data structures and a small footprint, suitable for low-resource environments such as some embedded systems, where it achieves higher resource utilization.

!> However, this allocator does not support fragmentation optimization and is prone to fragmentation.

tb_init(tb_null, tb_static_allocator((tb_byte_t*)malloc(300 * 1024 * 1024), 300 * 1024 * 1024));

Native memory allocator

This uses the system's native memory allocation entirely, with no internal data management or maintenance; everything depends on the system environment, and the memory pool and memory checking features are not supported. It amounts to passing calls straight through to system allocation interfaces like malloc.

If you do not want tbox's built-in memory pool maintenance, you can use this allocator according to your needs.

tb_init(tb_null, tb_native_allocator());

Virtual memory allocator

Since version v1.6.4, tbox provides a new type of allocator: the virtual memory allocator, mainly used to allocate very large blocks of memory.

Normally users do not need it, because tbox's default memory allocator automatically switches very large blocks to virtual memory allocation internally. But if you want to force all allocation through virtual memory, you can switch like this:

tb_init(tb_null, tb_virtual_allocator());

Custom memory allocator

If these allocators are still not enough, you can implement a custom memory allocator for tbox to use. Customizing one is also very simple; here we take the tb_native_allocator implementation code as an example:

static tb_pointer_t tb_native_allocator_malloc(tb_allocator_ref_t allocator, tb_size_t size __tb_debug_decl__)
{
    // trace
    tb_trace_d("malloc(%lu) at %s(): %lu, %s", size, func_, line_, file_);

    // malloc it
    return malloc(size);
}
static tb_pointer_t tb_native_allocator_ralloc(tb_allocator_ref_t allocator, tb_pointer_t data, tb_size_t size __tb_debug_decl__)
{
    // trace
    tb_trace_d("realloc(%p, %lu) at %s(): %lu, %s", data, size, func_, line_, file_);

    // realloc it
    return realloc(data, size);
}
static tb_bool_t tb_native_allocator_free(tb_allocator_ref_t allocator, tb_pointer_t data __tb_debug_decl__)
{
    // trace
    tb_trace_d("free(%p) at %s(): %lu, %s", data, func_, line_, file_);

    // free it (free() returns void, so report success explicitly)
    if (data) free(data);
    return tb_true;
}

Then we initialize our own allocator with this native implementation:

tb_allocator_t myallocator    = {0};
myallocator.type              = TB_ALLOCATOR_NATIVE;
myallocator.malloc            = tb_native_allocator_malloc;
myallocator.ralloc            = tb_native_allocator_ralloc;
myallocator.free              = tb_native_allocator_free;

Quite simple, isn't it? Note that the __tb_debug_decl__ macro above declares some debug fields, such as the func_, line_, and file_ of the call site, recorded at allocation time.
You can print them during debugging, or use them yourself for some advanced memory-checking work, but in release builds this information is not available,
so code that uses it must be wrapped with the __tb_debug__ macro and handled separately.

Pass myallocator to the tb_init interface, and afterwards tb_malloc/tb_ralloc/tb_free and all other tbox memory allocation interfaces will go through the new allocator.

tb_init(tb_null, &myallocator);

Of course, if you want to allocate directly from a specific allocator, you can also call the allocator interfaces directly:

tb_allocator_malloc(&myallocator, 10);
tb_allocator_ralloc(&myallocator, data, 100);
tb_allocator_free(&myallocator, data);

Memory allocation interfaces

Data allocation interfaces

These interfaces allocate raw memory data and return the tb_pointer_t type; the user must cast it to the desired type before accessing it.

!> For the interfaces whose names carry the 0 suffix, such as malloc0, the allocated memory is automatically zeroed.

tb_free(data)                               
tb_malloc(size)                             
tb_malloc0(size)                            
tb_nalloc(item, size)                       
tb_nalloc0(item, size)                      
tb_ralloc(data, size)                       

String allocation interfaces

tbox also provides convenient allocation for strings; the returned type is tb_char_t* directly, so no extra cast is needed.

tb_malloc_cstr(size)                        
tb_malloc0_cstr(size)                       
tb_nalloc_cstr(item, size)                  
tb_nalloc0_cstr(item, size)                 
tb_ralloc_cstr(data, size)                  

Byte allocation interfaces

These are the same as the data allocation interfaces, except that they cast the result to tb_byte_t* by default, for byte-wise read/write access.

tb_malloc_bytes(size)                       
tb_malloc0_bytes(size)                      
tb_nalloc_bytes(item, size)                 
tb_nalloc0_bytes(item, size)                
tb_ralloc_bytes(data, size)                 

Struct allocation interfaces

If you want to allocate struct data, these interfaces come with the cast to the struct type built in.

tb_malloc_type(type)                        
tb_malloc0_type(type)                       
tb_nalloc_type(item, type)                  
tb_nalloc0_type(item, type)                 
tb_ralloc_type(data, item, type)      

Used as follows:

typedef struct __xxx_t
{
    tb_int_t dummy;

}xxx_t;

xxx_t* data = tb_malloc0_type(xxx_t);
if (data)
{
    data->dummy = 0;
    tb_free(data);
}

As you can see, this saves us the type conversion step; these are convenience helper interfaces.

Aligned allocation interfaces

If the address of the allocated memory must be aligned to a specified size, you can use these interfaces:

tb_align_free(data)                         
tb_align_malloc(size, align)                
tb_align_malloc0(size, align)               
tb_align_nalloc(item, size, align)          
tb_align_nalloc0(item, size, align)         
tb_align_ralloc(data, size, align) 

For example:

tb_pointer_t data = tb_align_malloc(1234, 16);

The address of data is then actually 16-byte aligned.

For 8-byte-aligned allocation, the following interfaces can also be used; on 64-bit systems they are already satisfied by the normal allocator and do nothing special:

#if TB_CPU_BIT64
#   define tb_align8_free(data)                     tb_free((tb_pointer_t)data)
#   define tb_align8_malloc(size)                   tb_malloc(size)
#   define tb_align8_malloc0(size)                  tb_malloc0(size)
#   define tb_align8_nalloc(item, size)             tb_nalloc(item, size)
#   define tb_align8_nalloc0(item, size)            tb_nalloc0(item, size)
#   define tb_align8_ralloc(data, size)             tb_ralloc((tb_pointer_t)data, size)
#else
#   define tb_align8_free(data)                     tb_align_free((tb_pointer_t)data)
#   define tb_align8_malloc(size)                   tb_align_malloc(size, 8)
#   define tb_align8_malloc0(size)                  tb_align_malloc0(size, 8)
#   define tb_align8_nalloc(item, size)             tb_align_nalloc(item, size, 8)
#   define tb_align8_nalloc0(item, size)            tb_align_nalloc0(item, size, 8)
#   define tb_align8_ralloc(data, size)             tb_align_ralloc((tb_pointer_t)data, size, 8)
#endif

Memory checking

In debug mode, TBOX's memory allocation can detect memory leaks and out-of-bounds accesses, and can also pinpoint the exact allocation site of the offending memory block along with its function call stack.

To use tbox's memory checking, just switch to a debug build:

$ xmake f -m debug
$ xmake

Memory leak detection

!> Leak detection is triggered only after a complete program exit; be sure to call the tb_exit() interface so that the check runs.

Memory leak detection runs just before the program exits, inside the call to tb_exit(); if there is a leak, detailed output is printed to the terminal:

    tb_void_t tb_demo_leak()
    {
        tb_pointer_t data = tb_malloc0(10);
    }

Output:

    [tbox]: [error]: leak: 0x7f9d5b058908 at tb_static_fixed_pool_dump(): 735, memory/impl/static_fixed_pool.c
    [tbox]: [error]: data: from: tb_demo_leak(): 43, memory/check.c
    [tbox]: [error]:     [0x000001050e742a]: 0   demo.b                              0x00000001050e742a tb_fixed_pool_malloc0_ + 186
    [tbox]: [error]:     [0x000001050f972b]: 1   demo.b                              0x00000001050f972b tb_small_pool_malloc0_ + 507
    [tbox]: [error]:     [0x000001050f593c]: 2   demo.b                              0x00000001050f593c tb_pool_malloc0_ + 540
    [tbox]: [error]:     [0x00000105063cd7]: 3   demo.b                              0x0000000105063cd7 tb_demo_leak + 55
    [tbox]: [error]:     [0x00000105063e44]: 4   demo.b                              0x0000000105063e44 tb_demo_memory_check_main + 20
    [tbox]: [error]:     [0x0000010505b08e]: 5   demo.b                              0x000000010505b08e main + 878
    [tbox]: [error]:     [0x007fff8c95a5fd]: 6   libdyld.dylib                       0x00007fff8c95a5fd start + 1
    [tbox]: [error]:     [0x00000000000002]: 7   ???                                 0x0000000000000002 0x0 + 2
    [tbox]: [error]: data: 0x7f9d5b058908, size: 10, patch: cc

Out-of-bounds detection

Overflow detection is done in real time; libc is also instrumented, so common calls such as strcpy and memset all go through the checks:

    tb_void_t tb_demo_overflow()
    {
        tb_pointer_t data = tb_malloc0(10);
        if (data)
        {
            tb_memset(data, 0, 11);
            tb_free(data);
        }
    }

Output:

    [tbox]: [memset]: [overflow]: [0x0 x 11] => [0x7f950b044508, 10]
    [tbox]: [memset]: [overflow]: [0x0000010991a1c7]: 0   demo.b                              0x000000010991a1c7 tb_memset + 151
    [tbox]: [memset]: [overflow]: [0x000001098a2d01]: 1   demo.b                              0x00000001098a2d01 tb_demo_overflow + 97
    [tbox]: [memset]: [overflow]: [0x000001098a3044]: 2   demo.b                              0x00000001098a3044 tb_demo_memory_check_main + 20
    [tbox]: [memset]: [overflow]: [0x0000010989a28e]: 3   demo.b                              0x000000010989a28e main + 878
    [tbox]: [memset]: [overflow]: [0x007fff8c95a5fd]: 4   libdyld.dylib                       0x00007fff8c95a5fd start + 1
    [tbox]: [memset]: [overflow]: [0x00000000000002]: 5   ???                                 0x0000000000000002 0x0 + 2
    [tbox]:     [malloc]: [from]: data: from: tb_demo_overflow(): 12, memory/check.c
    [tbox]:     [malloc]: [from]:     [0x0000010992662a]: 0   demo.b                              0x000000010992662a tb_fixed_pool_malloc0_ + 186
    [tbox]:     [malloc]: [from]:     [0x0000010993892b]: 1   demo.b                              0x000000010993892b tb_small_pool_malloc0_ + 507
    [tbox]:     [malloc]: [from]:     [0x00000109934b3c]: 2   demo.b                              0x0000000109934b3c tb_pool_malloc0_ + 540
    [tbox]:     [malloc]: [from]:     [0x000001098a2cd7]: 3   demo.b                              0x00000001098a2cd7 tb_demo_overflow + 55
    [tbox]:     [malloc]: [from]:     [0x000001098a3044]: 4   demo.b                              0x00000001098a3044 tb_demo_memory_check_main + 20
    [tbox]:     [malloc]: [from]:     [0x0000010989a28e]: 5   demo.b                              0x000000010989a28e main + 878
    [tbox]:     [malloc]: [from]:     [0x007fff8c95a5fd]: 6   libdyld.dylib                       0x00007fff8c95a5fd start + 1
    [tbox]:     [malloc]: [from]:     [0x00000000000002]: 7   ???                                 0x0000000000000002 0x0 + 2
    [tbox]:     [malloc]: [from]: data: 0x7f950b044508, size: 10, patch: cc
    [tbox]:     [malloc]: [from]: data: first 10-bytes:
    [tbox]: ===================================================================================================================================================
    [tbox]: 00000000   00 00 00 00  00 00 00 00  00 00                                                                         ..........
    [tbox]: [error]: abort at tb_memset(): 255, libc/string/memset.c

Overlap detection

If two overlapping memory regions are copied, data may be partially overwritten, causing bugs, so TBOX also does some detection for this:

    tb_void_t tb_demo_overlap()
    {
        tb_pointer_t data = tb_malloc(10);
        if (data)
        {
            tb_memcpy(data, (tb_byte_t const*)data + 1, 5);
            tb_free(data);
        }
    }

Output:

    [tbox]: [memcpy]: [overlap]: [0x7fe9b5042509, 5] => [0x7fe9b5042508, 5]
    [tbox]: [memcpy]: [overlap]: [0x000001094403b8]: 0   demo.b                              0x00000001094403b8 tb_memcpy + 632
    [tbox]: [memcpy]: [overlap]: [0x000001093c99f9]: 1   demo.b                              0x00000001093c99f9 tb_demo_overlap + 105
    [tbox]: [memcpy]: [overlap]: [0x000001093c9a44]: 2   demo.b                              0x00000001093c9a44 tb_demo_memory_check_main + 20
    [tbox]: [memcpy]: [overlap]: [0x000001093c0c8e]: 3   demo.b                              0x00000001093c0c8e main + 878
    [tbox]: [memcpy]: [overlap]: [0x007fff8c95a5fd]: 4   libdyld.dylib                       0x00007fff8c95a5fd start + 1
    [tbox]: [memcpy]: [overlap]: [0x00000000000002]: 5   ???                                 0x0000000000000002 0x0 + 2
    [tbox]:     [malloc]: [from]: data: from: tb_demo_overlap(): 58, memory/check.c
    [tbox]:     [malloc]: [from]:     [0x0000010945eadb]: 0   demo.b                              0x000000010945eadb tb_small_pool_malloc_ + 507
    [tbox]:     [malloc]: [from]:     [0x0000010945b23c]: 1   demo.b                              0x000000010945b23c tb_pool_malloc_ + 540
    [tbox]:     [malloc]: [from]:     [0x000001093c99c7]: 2   demo.b                              0x00000001093c99c7 tb_demo_overlap + 55
    [tbox]:     [malloc]: [from]:     [0x000001093c9a44]: 3   demo.b                              0x00000001093c9a44 tb_demo_memory_check_main + 20
    [tbox]:     [malloc]: [from]:     [0x000001093c0c8e]: 4   demo.b                              0x00000001093c0c8e main + 878
    [tbox]:     [malloc]: [from]:     [0x007fff8c95a5fd]: 5   libdyld.dylib                       0x00007fff8c95a5fd start + 1
    [tbox]:     [malloc]: [from]:     [0x00000000000002]: 6   ???                                 0x0000000000000002 0x0 + 2
    [tbox]:     [malloc]: [from]: data: 0x7fe9b5042508, size: 10, patch: cc
    [tbox]:     [malloc]: [from]: data: first 10-bytes:
    [tbox]: ===================================================================================================================================================
    [tbox]: 00000000   CC CC CC CC  CC CC CC CC  CC CC                                                                         ..........
    [tbox]: [error]: abort at tb_memcpy(): 125, libc/string/memcpy.c

Double-free detection

    tb_void_t tb_demo_free2()
    {
        tb_pointer_t data = tb_malloc0(10);
        if (data)
        {
            tb_free(data);
            tb_free(data);
        }
    }

Output

    [tbox]: [assert]: expr[((impl->used_info)[(index) >> 3] & (0x1 << ((index) & 7)))]: double free data: 0x7fd93386c708 at tb_static_fixed_pool_free(): 612, memory/impl/static_fixed_pool.c
    [tbox]:     [0x0000010c9f553c]: 0   demo.b                              0x000000010c9f553c tb_static_fixed_pool_free + 972
    [tbox]:     [0x0000010c9ee7a9]: 1   demo.b                              0x000000010c9ee7a9 tb_fixed_pool_free_ + 713
    [tbox]:     [0x0000010ca01ff5]: 2   demo.b                              0x000000010ca01ff5 tb_small_pool_free_ + 885
    [tbox]:     [0x0000010c9fdb4f]: 3   demo.b                              0x000000010c9fdb4f tb_pool_free_ + 751
    [tbox]:     [0x0000010c96ac8e]: 4   demo.b                              0x000000010c96ac8e tb_demo_free2 + 158
    [tbox]:     [0x0000010c96ae44]: 5   demo.b                              0x000000010c96ae44 tb_demo_memory_check_main + 20
    [tbox]:     [0x0000010c96208e]: 6   demo.b                              0x000000010c96208e main + 878
    [tbox]:     [0x007fff8c95a5fd]: 7   libdyld.dylib                       0x00007fff8c95a5fd start + 1
    [tbox]:     [0x00000000000002]: 8   ???                                 0x0000000000000002 0x0 + 2
    [tbox]: [error]: free(0x7fd93386c708) failed! at tb_demo_free2(): 37, memory/check.c at tb_static_fixed_pool_free(): 649, memory/impl/static_fixed_pool.c
    [tbox]: [error]: data: from: tb_demo_free2(): 33, memory/check.c
    [tbox]: [error]:     [0x0000010c9ee42a]: 0   demo.b                              0x000000010c9ee42a tb_fixed_pool_malloc0_ + 186
    [tbox]: [error]:     [0x0000010ca0072b]: 1   demo.b                              0x000000010ca0072b tb_small_pool_malloc0_ + 507
    [tbox]: [error]:     [0x0000010c9fc93c]: 2   demo.b                              0x000000010c9fc93c tb_pool_malloc0_ + 540
    [tbox]: [error]:     [0x0000010c96ac27]: 3   demo.b                              0x000000010c96ac27 tb_demo_free2 + 55
    [tbox]: [error]:     [0x0000010c96ae44]: 4   demo.b                              0x000000010c96ae44 tb_demo_memory_check_main + 20
    [tbox]: [error]:     [0x0000010c96208e]: 5   demo.b                              0x000000010c96208e main + 878
    [tbox]: [error]:     [0x007fff8c95a5fd]: 6   libdyld.dylib                       0x00007fff8c95a5fd start + 1
    [tbox]: [error]:     [0x00000000000002]: 7   ???                                 0x0000000000000002 0x0 + 2
    [tbox]: [error]: data: 0x7fd93386c708, size: 10, patch: cc
    [tbox]: [error]: data: first 10-bytes:
    [tbox]: ===================================================================================================================================================
    [tbox]: 00000000   00 00 00 00  00 00 00 00  00 00                                                                         ..........
    [tbox]: [error]: abort at tb_static_fixed_pool_free(): 655, memory/impl/static_fixed_pool.c

Source: www.cnblogs.com/tboox/p/11994427.html