In the blog [Linux Kernel Memory Management] Physical Page Allocation ② ( __alloc_pages_nodemask function parameter analysis | __alloc_pages_nodemask function physical page allocation process ), the workflow of the __alloc_pages_nodemask function was summarized as follows:
First, according to the gfp_t gfp_mask allocation-flag parameter, the preferred "zone type" and "migration type" of the memory node are determined;
Then, the "fast path" is tried: the first allocation attempt uses the low watermark;
If the "fast path" allocation fails, the "slow path" allocation is performed;
"Fast path" and "slow path" are the two physical-page allocation methods referred to above;
In the blog [Linux Kernel Memory Management] Physical Page Allocation ④ ( __alloc_pages_nodemask function source code analysis | fast path | slow path | get_page_from_freelist source code ), it was introduced that the fast path mainly calls the get_page_from_freelist function, defined in the Linux kernel source at linux-4.12\mm\page_alloc.c #3017 , to allocate physical pages;
Then, the blog [Linux Kernel Memory Management] Physical Page Allocation ⑤ ( get_page_from_freelist fast-path function source code analysis | traverse the zone list | cpuset check | determine the number of dirty pages ) analyzed part of the source code of the get_page_from_freelist function;
1. Check the memory zone watermark
In the fast-path function get_page_from_freelist , the following operations have already been analyzed:
Traverse the list of candidate memory zones
Perform the cpuset check, if enabled
Determine the number of dirty pages
Next, the memory zone watermark is checked: if the zone's "number of free pages minus number of requested pages" is below the zone watermark, the corresponding handling is performed;
First, the zone watermark is fetched:
mark = zone->watermark[alloc_flags & ALLOC_WMARK_MASK];
Source code path: linux-4.12\mm\page_alloc.c #3067
Then it is determined whether the zone's "number of free pages minus number of requested pages" is below the zone watermark; if it is, the branch is taken and the corresponding handling is executed:
if (!zone_watermark_fast(zone, order, mark,
ac_classzone_idx(ac), alloc_flags))
Source code path: linux-4.12\mm\page_alloc.c #3068
2. Determine whether node reclaim is enabled and whether the reclaim distance is acceptable
If node reclaim is not enabled on the current memory node, or the distance between the current memory node and the preferred node is greater than the "reclaim distance",
then physical pages cannot be allocated from this memory zone; continue
skips this iteration and the loop moves on to the next memory zone;
if (node_reclaim_mode == 0 ||
!zone_allows_reclaim(ac->preferred_zoneref->zone, zone))
continue;
Source code path: linux-4.12\mm\page_alloc.c #3077
3. Reclaim unmapped pages and check the zone watermark again
Pages that are not mapped into any process's virtual address space are reclaimed from the memory node,
and then the memory zone watermark is checked again;
If the zone's "number of free pages minus number of requested pages" is still below the zone watermark,
then physical pages cannot be allocated from this memory zone; continue
skips this iteration and the loop moves on to the next memory zone;
ret = node_reclaim(zone->zone_pgdat, gfp_mask, order);
switch (ret) {
case NODE_RECLAIM_NOSCAN:
/* did not scan */
continue;
case NODE_RECLAIM_FULL:
/* scanned but unreclaimable */
continue;
default:
/* did we reclaim enough */
if (zone_watermark_ok(zone, order, mark,
ac_classzone_idx(ac), alloc_flags))
goto try_this_zone;
continue;
}
Source code path: linux-4.12\mm\page_alloc.c #3081
4. Allocate physical pages
If all of the above checks pass,
the rmqueue
function is called to allocate physical pages from the current memory zone;
If the allocation succeeds and page
is not NULL, the if (page)
branch is taken and the prep_new_page
function is called to initialize the physical pages;
try_this_zone:
page = rmqueue(ac->preferred_zoneref->zone, zone, order,
gfp_mask, alloc_flags, ac->migratetype);
if (page) {
prep_new_page(page, order, gfp_mask, alloc_flags);
/*
* If this is a high-order atomic allocation then check
* if the pageblock should be reserved for the future
*/
if (unlikely(order && (alloc_flags & ALLOC_HARDER)))
reserve_highatomic_pageblock(page, zone, order);
return page;
}
}
Source code path: linux-4.12\mm\page_alloc.c #3099
5. Source code of the process covered in this blog
The complete source code of the process analyzed in this blog:
mark = zone->watermark[alloc_flags & ALLOC_WMARK_MASK];
if (!zone_watermark_fast(zone, order, mark,
ac_classzone_idx(ac), alloc_flags)) {
int ret;
/* Checked here to keep the fast path fast */
BUILD_BUG_ON(ALLOC_NO_WATERMARKS < NR_WMARK);
if (alloc_flags & ALLOC_NO_WATERMARKS)
goto try_this_zone;
if (node_reclaim_mode == 0 ||
!zone_allows_reclaim(ac->preferred_zoneref->zone, zone))
continue;
ret = node_reclaim(zone->zone_pgdat, gfp_mask, order);
switch (ret) {
case NODE_RECLAIM_NOSCAN:
/* did not scan */
continue;
case NODE_RECLAIM_FULL:
/* scanned but unreclaimable */
continue;
default:
/* did we reclaim enough */
if (zone_watermark_ok(zone, order, mark,
ac_classzone_idx(ac), alloc_flags))
goto try_this_zone;
continue;
}
}
Source code path: linux-4.12\mm\page_alloc.c #3067