Analyzing memory usage under Linux

I. Introduction

Memory is one of the most important system resources. Memory problems can cause increased latency, memory leaks, processes being killed, and other serious issues, so it is worth analyzing how processes use memory. This article focuses on how programs dynamically allocate memory.

Note that all tests were run on a 5.13.0-52 kernel; on different kernels, the memory classification may differ considerably.

II. Program memory structure

The default virtual memory layout on a 32-bit Linux system is as follows (figure: 32-bit virtual memory layout):

Notes:

  1. Under Linux, each process has its own virtual address space. Its upper limit is determined by the CPU word width: on a 32-bit system the hardware can address at most 4GB, and even that 4GB is not fully available to applications.

  2. The address space is split in two: the kernel occupies the 1GB from 0xC0000000 to 0xFFFFFFFF, and the remaining 3GB from 0x00000000 to 0xBFFFFFFF belongs to user space;

  3. The ELF executable divides the virtual address space into multiple segments;

  4. The operating system manages the process's virtual address space through VMAs.

VMA is short for Virtual Memory Area. The VMAs of a simple program look like this:

root@ubuntu-lab:/sys/kernel/debug/tracing# cat /proc/9776/maps
5655a000-5655b000 r--p 00000000 fd:00 1193979                            /home/miao/c-test/mm-test/a.out
5655b000-5655c000 r-xp 00001000 fd:00 1193979                            /home/miao/c-test/mm-test/a.out
5655c000-5655d000 r--p 00002000 fd:00 1193979                            /home/miao/c-test/mm-test/a.out
5655d000-5655e000 r--p 00002000 fd:00 1193979                            /home/miao/c-test/mm-test/a.out
5655e000-5655f000 rw-p 00003000 fd:00 1193979                            /home/miao/c-test/mm-test/a.out
5746c000-5748e000 rw-p 00000000 00:00 0                                  [heap]
f7d83000-f7da3000 r--p 00000000 fd:00 546008                             /usr/lib32/libc.so.6
f7da3000-f7f1f000 r-xp 00020000 fd:00 546008                             /usr/lib32/libc.so.6
f7f1f000-f7fa4000 r--p 0019c000 fd:00 546008                             /usr/lib32/libc.so.6
f7fa4000-f7fa5000 ---p 00221000 fd:00 546008                             /usr/lib32/libc.so.6
f7fa5000-f7fa7000 r--p 00221000 fd:00 546008                             /usr/lib32/libc.so.6
f7fa7000-f7fa8000 rw-p 00223000 fd:00 546008                             /usr/lib32/libc.so.6
f7fa8000-f7fb2000 rw-p 00000000 00:00 0 
f7fbc000-f7fbe000 rw-p 00000000 00:00 0 
f7fbe000-f7fc2000 r--p 00000000 00:00 0                                  [vvar]
f7fc2000-f7fc4000 r-xp 00000000 00:00 0                                  [vdso]
f7fc4000-f7fc5000 r--p 00000000 fd:00 546004                             /usr/lib32/ld-linux.so.2
f7fc5000-f7fe8000 r-xp 00001000 fd:00 546004                             /usr/lib32/ld-linux.so.2
f7fe8000-f7ff5000 r--p 00024000 fd:00 546004                             /usr/lib32/ld-linux.so.2
f7ff6000-f7ff8000 r--p 00031000 fd:00 546004                             /usr/lib32/ld-linux.so.2
f7ff8000-f7ff9000 rw-p 00033000 fd:00 546004                             /usr/lib32/ld-linux.so.2
ffe18000-ffe39000 rw-p 00000000 00:00 0                                  [stack]

Notes:

  1. The first column is the VMA's address range (virtual addresses);

  2. The second column is the VMA's permissions: r readable, w writable, x executable, p private, s shared;

  3. The third column is the offset of the VMA's corresponding segment within the mapped file;

  4. The fourth column is the major:minor device number of the device holding the mapped file, shown in hexadecimal (which is why the major number appears as fd here). For non-file-backed memory such as the heap and stack, these show as 00:00;

  5. The fifth column is the inode number of the mapped file;

  6. The sixth column is the mapped file itself. Besides the program file, you can see the libraries it uses; vdso is a special VMA used to interact with the kernel.

The code used is as follows:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>


int g_int = 123;

static int g_static_int2 = 456;
static int g_static_int_not_init;


int main(void)
{
  int l_int = 3;
  int l_int2 = 4;

  static int l_static_int = 6;
  static int l_static_int2;


  int * pint = (int*)malloc(sizeof(int));
  *pint = 12;

  printf("g_int:%d,\tg_static_int2:%d \tg_static_int_not_init:%d \n",g_int,g_static_int2,g_static_int_not_init);
  printf("g_int:%p,\tg_static_int2:%p \tg_static_int_not_init:%p \n",&g_int,&g_static_int2,&g_static_int_not_init);
  
  printf("l_int:%d \tl_int2:%d \tl_static_int:%d,\tl_static_int2:%d,\tpint:%d\n",l_int,l_int2,l_static_int,l_static_int2,*pint);
  printf("l_int:%p \tl_int2:%p \tl_static_int:%p,\tl_static_int2:%p,\tpint:%p\n",&l_int,&l_int2,&l_static_int,&l_static_int2,pint);
  while(1) {
    sleep(3);
    printf("PID:%d\n",getpid());
  }
  free(pint);
  return 0;
}

After running it several times, we can see that each variable's address falls in its expected region, and that the heap and stack addresses differ from run to run: this is a random offset (ASLR) applied for security.

Incidentally, the stack shown is the main thread's stack. Its maximum size defaults to `ulimit -s`, generally 8MB. Stacks created by pthread_create are generally around 2MB; the default varies by architecture and by the ulimit setting, and the size can also be set explicitly.

III. Calculating program memory size

As for how much memory a program uses, the simplest method is to look directly at top -p pid:

PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                            
  12375 miao      20   0    2764    852    788 S   0.0   0.0   0:00.00 a.out

VIRT is the virtual memory size and RES is the resident (physical) memory size; these two usually matter most and are generally sufficient. To compute the total by summing each VMA, use the /proc/pid/smaps file, which is more detailed than maps and worth careful study. Calculating and verifying:

cat /proc/12375/smaps|grep Size|grep -v Page|awk -F: '{print $2}'|awk '{sum += $1}; END {print sum}'
2764

That matches the virtual memory figure; now compute the resident memory:

# cat /proc/12375/smaps|grep Rss|grep -v Page|awk -F: '{print $2}'|awk '{sum += $1}; END {print sum}'
1388

This does not match the resident figure, and even summing by Pss (which divides shared memory evenly among its users) leaves a gap. The gap exists because memory is requested through the C library, which allocates somewhat more than asked for (alignment and allocator bookkeeping), so some difference is normal.

There is also a simpler way to calculate program memory:

root@ubuntu-lab:/home/miao/c-test/mm-test# cat /proc/24546/status
Name:   a.out
Umask:  0002
State:  S (sleeping)
Tgid:   24546
Ngid:   0
Pid:    24546
PPid:   5359
TracerPid:      0
Uid:    1000    1000    1000    1000
Gid:    1000    1000    1000    1000
FDSize: 256
Groups: 4 24 27 30 46 110 1000 
NStgid: 24546
NSpid:  24546
NSpgid: 24546
NSsid:  5359
VmPeak:  1051332 kB
VmSize:  1051332 kB
VmLck:         0 kB
VmPin:         0 kB
VmHWM:   1049776 kB
VmRSS:   1049776 kB
RssAnon:         1048672 kB
RssFile:            1104 kB
RssShmem:              0 kB
....

VmRSS is the resident memory the program occupies; generally VmRSS = RssAnon + RssFile + RssShmem.

IV. System memory analysis

In practice, when we hit a system performance problem and suspect memory, we typically start with the free command, then top, sort processes by memory usage under top to find the suspicious one, and then analyze that process's memory as described above. (The memory occupied by the whole system has two parts: memory used by the kernel and memory used by applications.)

The two kernel-exported files you should look at most are /proc/meminfo and /proc/vmstat. The former classifies memory usage; the latter records more detailed memory dynamics such as allocation, reclaim, and dirty-page writeback, through which problems can be spotted. Since the latter is not our focus, let's concentrate on the former. On my machine the statistics are as follows:

miao@ubuntu-lab:~$ cat /proc/meminfo 
MemTotal:        4926744 kB    // All usable memory: physical memory minus firmware/BIOS reservations and the kernel's own footprint. What remains for the kernel to manage after boot is MemTotal; it generally stays fixed while the system runs and only changes across reboots.
MemFree:         3663620 kB  // Memory the system has not used yet.
MemAvailable:    4209668 kB // The memory truly available: some used memory (parts of cache/buffers and slab) is reclaimable, so MemFree plus that reclaimable portion is what the system can actually use.
Buffers:           78416 kB   // Memory used to cache block devices (filesystem metadata and pages).
Cached:           661976 kB     // Memory allocated to the file page cache; e.g. editing a file in vi keeps unsaved content in this cache.
SwapCached:            0 kB   // Anonymous memory that was swapped out to disk and has been pulled back into memory again.
Active:           325864 kB    // Recently used pages (page cache and anonymous).
Inactive:         618264 kB   // Pages not recently used.
Active(anon):       4564 kB   // Active anonymous memory.
Inactive(anon):   215464 kB   // Inactive anonymous memory.
Active(file):     321300 kB    // Active file-backed memory.
Inactive(file):   402800 kB  // Inactive file-backed memory.
Unevictable:       19372 kB    // Memory pages that cannot be reclaimed.
Mlocked:           19372 kB   // The mlock family of system calls lets a program lock part or all of its address space in physical memory, preventing Linux from swapping those pages out even if the program has not touched them for a while.
SwapTotal:       4194300 kB // Total swap space.
SwapFree:        4194300 kB    // Free swap space.
Dirty:               148 kB              // Dirty memory waiting to be written back to disk.
Writeback:             0 kB            // Dirty memory currently being written back.
AnonPages:        223144 kB    // Non-file-backed pages mapped into user space (anonymous pages).
Mapped:           210380 kB      // Memory for mapped files.
Shmem:             13168 kB      // Allocated shared memory; space used by all tmpfs filesystems counts here.
KReclaimable:      60332 kB // Kernel memory that can be reclaimed under pressure (includes reclaimable slab).
Slab:             137076 kB         // Kernel data-structure cache.
SReclaimable:      60332 kB   // Reclaimable slab memory.
SUnreclaim:        76744 kB    // Unreclaimable slab memory.
KernelStack:        7568 kB    // Every user thread gets a kernel stack. It belongs to the thread but user-mode code cannot access it; it is used only on entry to the kernel via a syscall, trap, or exception, i.e. it serves kernel code. On x86 the Linux kernel stack is a fixed 8K or 16K.
PageTables:         5876 kB    // Size of the page tables (the index mapping virtual to physical memory).
NFS_Unstable:          0 kB // The amount, in kibibytes, of NFS pages sent to the server but not yet committed to stable storage.
Bounce:                0 kB // Some old devices can only address low memory (e.g. below 16MB). When an application issues an I/O request whose DMA target is in high memory, the kernel allocates a temporary low-memory buffer as a relay and copies the high-memory data into it. This extra copy, "bounce buffering", hurts I/O performance, and large numbers of bounce buffers also consume extra memory.
WritebackTmp:          0 kB    // Memory used by FUSE for temporary writeback buffers.
CommitLimit:     6657672 kB       // Total memory the system can actually commit.
Committed_AS:    1742228 kB   // Total memory currently committed.
VmallocTotal:   34359738367 kB // Total size of the vmalloc address space.
VmallocUsed:       57524 kB       // vmalloc space in use.
VmallocChunk:          0 kB          // Largest contiguous free chunk of vmalloc space.
Percpu:            89600 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB // Counts Transparent Huge Pages (THP). THP is not the same thing as hugepages and overlaps with process RSS/PSS: if a process uses THP, its RSS/PSS grows accordingly.
ShmemHugePages:        0 kB
ShmemPmdMapped:        0 kB
FileHugePages:         0 kB
FilePmdMapped:         0 kB
HugePages_Total:       0   // Total number of huge pages. If a process uses hugepages, its RSS/PSS does not grow.
HugePages_Free:        0  // Free huge pages.
HugePages_Rsvd:        0 // Reserved huge pages.
HugePages_Surp:        0 // Surplus huge pages.
Hugepagesize:       2048 kB  // Huge page size, 2MB here.
Hugetlb:               0 kB
DirectMap4k:      198464 kB // The DirectMap counters reflect TLB efficiency rather than memory use: this is the amount of memory mapped with 4kB pages. The TLB (Translation Lookaside Buffer) is an on-CPU cache that translates virtual addresses to physical ones; addresses it cannot hold require a much slower page-table walk in memory.
DirectMap2M:     3913728 kB // Memory mapped with 2MB pages.
DirectMap1G:     1048576 kB // Memory mapped with 1GB pages.

V. Application memory analysis

The system-level analysis above gives the size of each memory category; we still need to connect those categories to what the application actually requested. In this chapter, test programs request memory in different ways while we watch how the categories in meminfo change, so that when you encounter a problem you can use the meminfo data to infer which kind of allocation is responsible.

On the left of the figure below, the program requests memory through the glibc library. Note that the program may be written in C, Java, or many other languages whose runtimes ultimately allocate through the C library. The C library allocates in two main ways: via mmap, which corresponds to the mapped region of virtual memory, and via brk/sbrk for small allocations (generally below 128KB). This virtual memory is not actually backed by physical memory until first use, when a page fault triggers the real allocation.

(figure: how a program requests memory through glibc - picture from Geek Time)

Classified by memory type, the memory a program needs at runtime forms the following mind map (figure: memory-type mind map):

Key points to pay attention to:

  1. Private anonymous memory, such as memory requested through malloc, calloc, or new.

  2. Shared anonymous memory; if your program writes temporary files in tmpfs, you need to delete them yourself.

  3. Private file mappings, such as reading files through mmap.

  4. Shared file mappings; if you request one yourself, you must release it yourself.

5.1 Requesting memory with malloc - anonymous memory test

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define SIZE (1024*1024*1024)

int main(void)
{
  char *p = (char *) malloc(SIZE);
  memset(p, 0x0, SIZE);  /* touch the pages so physical memory is really allocated */
  while (1) {
    printf("PID:%d\n", getpid());
    sleep(50);
  }
  free(p);  /* unreachable because of the loop above, kept for symmetry */
  return 0;
}

Clean up the memory first, then run this program to check the changes in /proc/meminfo.

root@ubuntu-lab:/home/miao/c-test/mm-test# diff meminfo.old meminfo.new
2,5c2,5
< MemFree:         4217504 kB
< MemAvailable:    4230356 kB
< Buffers:            2040 kB
< Cached:           218572 kB
---
> MemFree:         3165428 kB
> MemAvailable:    3180980 kB
> Buffers:            4396 kB
> Cached:           218776 kB
7,8c7,8
< Active:            37908 kB
< Inactive:         380112 kB
---
> Active:            40424 kB
> Inactive:        1428872 kB
10,12c10,12
< Inactive(anon):   211228 kB
< Active(file):      35272 kB
< Inactive(file):   168884 kB
---
> Inactive(anon):  1259804 kB
> Active(file):      37788 kB
> Inactive(file):   169068 kB
17c17
< Dirty:               204 kB
---
> Dirty:                12 kB
19,20c19,20
< AnonPages:        217032 kB
< Mapped:           213968 kB
---
> AnonPages:       1265628 kB
> Mapped:           213988 kB
22,27c22,27
< KReclaimable:      33828 kB
< Slab:             109880 kB
< SReclaimable:      33828 kB
< SUnreclaim:        76052 kB
< KernelStack:        7472 kB
< PageTables:         5576 kB
---
> KReclaimable:      33832 kB
> Slab:             109808 kB
> SReclaimable:      33832 kB
> SUnreclaim:        75976 kB
> KernelStack:        7456 kB
> PageTables:         7628 kB
32c32
< Committed_AS:    1732340 kB
---
> Committed_AS:    2781300 kB

A few key points:

1. Inactive(anon): inactive anonymous memory grew by 1GB;
2. Committed_AS: committed memory grew by 1GB;
3. Inactive grew by 1GB;
4. AnonPages: anonymous pages grew by 1GB;
5. MemAvailable and MemFree shrank by 1GB.

5.2 Requesting private anonymous memory with mmap

#include <stdlib.h>
#include <stdio.h>
#include <strings.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>
#define MEMSIZE 1024*1024*1024
#define MPFILE "./mmapfile"

int main()
{
    void *ptr;
    int fd;
    fd = open(MPFILE, O_RDWR);
    if (fd < 0) {
        perror("open()");
        exit(1);
    }
// with an anonymous mapping, the fd and offset arguments are ignored
    ptr = mmap(NULL, MEMSIZE, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, fd, 0);
    if (ptr == MAP_FAILED) {
        perror("mmap()");
        exit(1);
    }
    printf("%p\n", ptr);
    bzero(ptr, MEMSIZE);
    printf("pid=%d\n", getpid());
    sleep(50);
    munmap(ptr, MEMSIZE);
    close(fd);
    exit(1);
}

The result is the same as above.

5.3 Requesting anonymous shared memory with mmap

Similar to the above code, only one line of code is different:

ptr = mmap(NULL, MEMSIZE, PROT_READ|PROT_WRITE, MAP_SHARED|MAP_ANONYMOUS, fd, 0);

Main changes in memory:

MemFree: free memory down 1GB.
MemAvailable: available memory down 1GB.
Cached: cache up 1GB.
Inactive: up 1GB.
Inactive(anon): up 1GB.
Mapped: up 1GB.
Shmem: shared memory up 1GB.
Committed_AS: committed memory up 1GB.

5.4 Requesting private file-mapped memory with mmap

Similar to the code above, except:

ptr = mmap(NULL, MEMSIZE, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);

Main changes in memory:

1. MemFree: down 2GB.
2. MemAvailable: down only 1GB, since the cached portion can be released.
3. Cached: up 1GB.
4. Inactive: up 2GB.
5. Inactive(anon): up 1GB.
6. Inactive(file): up 1GB.
7. AnonPages: up 1GB.
8. Committed_AS: up 1GB.

With a private file mapping you can see the process occupying Inactive(file) memory, but it also occupies Inactive(anon), because private file mappings are special: writes are not synchronized back to the underlying file. Copy-on-write is used, so writing copies the page into physical memory as anonymous memory.

5.5 Requesting shared file-mapped memory with mmap

The code is similar to the above, differing only in:

ptr = mmap(NULL, MEMSIZE, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);

The result is as follows:

1. MemFree: down 1GB.
2. Cached: up 1GB.
3. Inactive: up 1GB.
4. Inactive(file): up 1GB.
5. Mapped: up 1GB.

Note that only shared mmap mappings are counted in Mapped. A shared file mapping counts as cache, hence the Cached increase; and because the file is being read for the first time, the pages are Inactive(file), which is why that grew by 1GB.

Note that this approach has two useful points:

  1. The mapped memory is shared, so it can be shared among multiple processes.

  2. After the mapped memory is written or modified, the system automatically synchronizes it to the corresponding file, which is very useful.

Summary:

  1. Any private mmap mapping appears to the system as anonymous pages.

  2. Any shared mmap mapping is counted by the system as Mapped memory.

5.6 shm shared memory

The code is as follows:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>
#include <string.h>
#define MEMSIZE 1024*1024*1024
int
main()
{
    int shmid;
    char *ptr;
    pid_t pid;
    struct shmid_ds buf;
    int ret;
   // create a 1GB shared memory segment with permissions 0600
    shmid = shmget(IPC_PRIVATE, MEMSIZE, 0600);
    if (shmid<0) {
        perror("shmget()");
        exit(1);
    }
   // copy the shared memory info (permissions, size, etc.) into buf
    ret = shmctl(shmid, IPC_STAT, &buf);
    if (ret < 0) {
        perror("shmctl()");
        exit(1);
    }
    printf("shmid: %d\n", shmid);
    printf("shmsize: %zu\n", buf.shm_segsz);
 
    pid = fork();
    if (pid<0) {
        perror("fork()");
        exit(1);
    }
   // child process
    if (pid==0) {
        // map the shared memory into this process's address space
        ptr = shmat(shmid, NULL, 0);
        if (ptr==(void*)-1) {
            perror("shmat()");
            exit(1);
        }
        bzero(ptr, MEMSIZE);
         // copy "Hello!" into it
        strcpy(ptr, "Hello!");
        exit(0);
    } else {
       // wait for the child to finish writing
        wait(NULL);
     // map the shared memory into this process's address space
        ptr = shmat(shmid, NULL, 0);
        if (ptr==(void*)-1) {
            perror("shmat()");
            exit(1);
        }
      // print it and exit
        puts(ptr);
        exit(0);
    }
}

Note:

  1. The code does not call int shmdt(const void *shmaddr); to detach the shared memory from the process;

  2. The code does not call shmctl with IPC_RMID to delete the shared memory, so the segment remains allocated after the program exits, as shown below:

root@ubuntu-lab:/home/miao/c-test/mm-test# ipcs -m
------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status      
0x000631ba 0          postgres   600        56         6                       
0x00000000 3          root       600        1073741824 0

Continue to look at the changes in memory:

1. MemFree and MemAvailable: down 1GB.
2. Cached: up 1GB, showing that shm belongs to the cache.
3. Inactive: up 1GB.
4. Inactive(anon): up 1GB.
5. Shmem: up 1GB.
6. Committed_AS: up 1GB.

shm is treated as memory pages backed by the tmpfs filesystem; since it is file-backed, it is not considered anonymous and is therefore not counted in AnonPages in /proc/meminfo.

Clean up shared memory:

root@ubuntu-lab:/home/miao/c-test/mm-test# ipcrm -m 3
root@ubuntu-lab:/home/miao/c-test/mm-test# ipcs -m

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status      
0x000631ba 0          postgres   600        56         6

5.7 tmpfs

test:

mkdir /tmp/tmpfs
mount -t tmpfs -o size=2G none /tmp/tmpfs/

# consume space
root@ubuntu-lab:/home/miao/c-test/mm-test# dd if=/dev/zero of=/tmp/tmpfs/testfile bs=1G count=1

1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.15495 s, 340 MB/s
root@ubuntu-lab:/home/miao/c-test/mm-test# 
root@ubuntu-lab:/home/miao/c-test/mm-test# df -h
none                               2.0G  1.0G  1.0G  50% /tmp/tmpfs

Memory changes:

1. MemFree and MemAvailable: down 1GB.
2. Cached: up 1GB; like shm, tmpfs space belongs to the cache.
3. Inactive: up 1GB.
4. Inactive(anon): up 1GB.
5. Shmem: up 1GB.
6. Committed_AS: up 1GB.

The memory changes are the same as for shm. Note that echo 3 > /proc/sys/vm/drop_caches will not release this memory: free -h still shows the 1GB in use.

root@ubuntu-lab:/home/miao/c-test/mm-test# free -h
               total        used        free      shared  buff/cache   available
Mem:           4.7Gi       474Mi       3.0Gi       1.0Gi       1.2Gi       3.0Gi

clean up:

rm /tmp/tmpfs/testfile 
umount  /tmp/tmpfs/

References:

[linux内存占用分析之meminfo - SegmentFault](https://segmentfault.com/a/1190000022518282)
[/proc/meminfo之谜 | Linux Performance](http://linuxperf.com/?p=142)
Code from the book 深入浅出Linux内核管理和调试, via [https://www.jianshu.com/p/eece39beee20](https://www.jianshu.com/p/eece39beee20)

Origin blog.csdn.net/mseaspring/article/details/125713656