MySQL memory usage keeps increasing, a little every day, until it reaches 100%

MySQL Server version: 8.0.25 MySQL Community Server - GPL

At present, memory usage on the primary database server has reached about 80%, and observing it day to day shows a slight upward trend.

innodb_buffer_pool_size is set to only 16G, so the suspicion is that memory is not released properly after client connection threads disconnect.

First round of handling:

Reduce the per-thread (per-connection) memory parameters:

sort_buffer_size

read_buffer_size

read_rnd_buffer_size

join_buffer_size

binlog_cache_size

tmp_table_size

Then restart MySQL to release memory and let it run for a while; the problem still persists.
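
For reference, on MySQL 8.0 these per-thread buffers can be lowered with SET PERSIST so the change survives a restart. This is only a sketch; the values mirror the ones visible in the mem.sh output further down and should be adjusted to your own workload:

# /usr/local/mysql/bin/mysql -uroot -p -e "
SET PERSIST sort_buffer_size     = 1*1024*1024;
SET PERSIST read_buffer_size     = 2*1024*1024;
SET PERSIST read_rnd_buffer_size = 2*1024*1024;
SET PERSIST join_buffer_size     = 1*1024*1024;
SET PERSIST binlog_cache_size    = 4*1024*1024;
SET PERSIST tmp_table_size       = 16*1024*1024;
"

Keep in mind that these buffers are allocated per connection (and some of them per operation), so the script below multiplies them by the connection count when estimating total memory.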

Estimate memory usage with a script:


# cat mem.sh 
#!/bin/sh

# you might want to add some user authentication here
/usr/local/mysql/bin/mysql -uroot -pxxxx -e "show variables; show status" | awk '
{
VAR[$1]=$2
}

END {
MAX_CONN = VAR["max_connections"]
MAX_USED_CONN = VAR["Max_used_connections"]
BASE_MEM=VAR["key_buffer_size"] + VAR["query_cache_size"] + VAR["innodb_buffer_pool_size"] + VAR["innodb_additional_mem_pool_size"] + VAR["innodb_log_buffer_size"]
MEM_PER_CONN=VAR["read_buffer_size"] + VAR["read_rnd_buffer_size"] + VAR["sort_buffer_size"] + VAR["join_buffer_size"] + VAR["binlog_cache_size"] + VAR["thread_stack"] + VAR["tmp_table_size"] + VAR["net_buffer_length"]
MEM_TOTAL_MIN=BASE_MEM + MEM_PER_CONN*MAX_USED_CONN
MEM_TOTAL_MAX=BASE_MEM + MEM_PER_CONN*MAX_CONN

printf "+------------------------------------------+--------------------+\n"
printf "| %40s | %15.3f MB |\n", "key_buffer_size", VAR["key_buffer_size"]/1048576
printf "| %40s | %15.3f MB |\n", "query_cache_size", VAR["query_cache_size"]/1048576
printf "| %40s | %15.3f MB |\n", "innodb_buffer_pool_size", VAR["innodb_buffer_pool_size"]/1048576
printf "| %40s | %15.3f MB |\n", "innodb_additional_mem_pool_size", VAR["innodb_additional_mem_pool_size"]/1048576
printf "| %40s | %15.3f MB |\n", "innodb_log_buffer_size", VAR["innodb_log_buffer_size"]/1048576
printf "+------------------------------------------+--------------------+\n"
printf "| %40s | %15.3f MB |\n", "BASE MEMORY", BASE_MEM/1048576
printf "+------------------------------------------+--------------------+\n"
printf "| %40s | %15.3f MB |\n", "sort_buffer_size", VAR["sort_buffer_size"]/1048576
printf "| %40s | %15.3f MB |\n", "read_buffer_size", VAR["read_buffer_size"]/1048576
printf "| %40s | %15.3f MB |\n", "read_rnd_buffer_size", VAR["read_rnd_buffer_size"]/1048576
printf "| %40s | %15.3f MB |\n", "join_buffer_size", VAR["join_buffer_size"]/1048576
printf "| %40s | %15.3f MB |\n", "thread_stack", VAR["thread_stack"]/1048576
printf "| %40s | %15.3f MB |\n", "binlog_cache_size", VAR["binlog_cache_size"]/1048576
printf "| %40s | %15.3f MB |\n", "tmp_table_size", VAR["tmp_table_size"]/1048576
printf "+------------------------------------------+--------------------+\n"
printf "| %40s | %15.3f MB |\n", "MEMORY PER CONNECTION", MEM_PER_CONN/1048576
printf "+------------------------------------------+--------------------+\n"

printf "| %40s | %18d |\n", "Max_used_connections", MAX_USED_CONN
printf "| %40s | %18d |\n", "max_connections", MAX_CONN

printf "+------------------------------------------+--------------------+\n"

printf "| %40s | %15.3f MB |\n", "TOTAL (MIN)", MEM_TOTAL_MIN/1048576
printf "| %40s | %15.3f MB |\n", "TOTAL (MAX)", MEM_TOTAL_MAX/1048576

printf "+------------------------------------------+--------------------+\n"

}'

# ./mem.sh 
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------------------------------------+--------------------+
|                          key_buffer_size |          32.000 MB |
|                         query_cache_size |           0.000 MB |
|                  innodb_buffer_pool_size |       16384.000 MB |
|          innodb_additional_mem_pool_size |           0.000 MB |
|                   innodb_log_buffer_size |          32.000 MB |
+------------------------------------------+--------------------+
|                              BASE MEMORY |       16448.000 MB |
+------------------------------------------+--------------------+
|                         sort_buffer_size |           1.000 MB |
|                         read_buffer_size |           2.000 MB |
|                     read_rnd_buffer_size |           2.000 MB |
|                         join_buffer_size |           1.000 MB |
|                             thread_stack |           0.500 MB |
|                        binlog_cache_size |           4.000 MB |
|                           tmp_table_size |          16.000 MB |
|                        net_buffer_length |           0.016 MB |
+------------------------------------------+--------------------+
|                    MEMORY PER CONNECTION |          26.516 MB |
+------------------------------------------+--------------------+
|                     Max_used_connections |                840 |
|                          max_connections |               2048 |
+------------------------------------------+--------------------+
|                              TOTAL (MIN) |       38721.125 MB |
|                              TOTAL (MAX) |       70752.000 MB |
+------------------------------------------+--------------------+

According to the script, the connection peak so far (Max_used_connections) is 840, so memory usage should top out at about 38 GB (under normal load there are only around 400 connections), yet mysqld is currently occupying 56 GB.
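
For clarity, the estimate comes straight from the table above:

BASE MEMORY + MEMORY PER CONNECTION x Max_used_connections
= 16448 MB + 26.516 MB x 840
≈ 38721 MB ≈ 38 GB (the TOTAL (MIN) row)

So even at the historical connection peak, mysqld should stay around 38 GB, well below the 56 GB actually observed.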

Next, recalculate the memory usage by referring to the official documentation (8.12.3.1 How MySQL Uses Memory):

MySQL :: MySQL 8.0 Reference Manual :: 8.12.3.1 How MySQL Uses Memory

Monitor MySQL memory usage through the Performance Schema and the sys schema:

SELECT SUBSTRING_INDEX(event_name,'/',2) AS code_area, FORMAT_BYTES(SUM(current_alloc)) AS current_alloc FROM sys.x$memory_global_by_current_bytes GROUP BY SUBSTRING_INDEX(event_name,'/',2) ORDER BY SUM(current_alloc) DESC;

mysql> SELECT SUBSTRING_INDEX(event_name,'/',2) AS
    ->        code_area, FORMAT_BYTES(SUM(current_alloc))
    ->        AS current_alloc
    ->        FROM sys.x$memory_global_by_current_bytes
    ->        GROUP BY SUBSTRING_INDEX(event_name,'/',2)
    ->        ORDER BY SUM(current_alloc) DESC;
+---------------------------+---------------+
| code_area                 | current_alloc |
+---------------------------+---------------+
| memory/innodb             | 18.72 GiB     |
| memory/sql                | 3.68 GiB      |
| memory/performance_schema | 1.41 GiB      |
| memory/mysys              | 1.31 GiB      |
| memory/temptable          | 846.00 MiB    |
| memory/myisam             | 6.43 MiB      |
| memory/mysqld_openssl     | 6.26 MiB      |
| memory/csv                | 25.79 KiB     |
| memory/mysqlx             | 3.44 KiB      |
| memory/blackhole          |   88 bytes    |
| memory/vio                |   16 bytes    |
+---------------------------+---------------+
Memory usage in each area looks normal; in total it only adds up to 25.85 GiB.
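
Since the global totals look normal, another angle (a sketch, not part of the original troubleshooting) is to check whether a few individual connections are holding the missing memory, using the sys schema's per-thread view:

# /usr/local/mysql/bin/mysql -uroot -p -e "SELECT thread_id, user, current_allocated FROM sys.memory_by_thread_by_current_bytes LIMIT 10;"

The view is sorted by current allocation, so the top rows show the connections holding the most instrumented memory. Either way, the instrumented totals add up to far less than what the OS reports for mysqld, which points toward memory held outside MySQL's own accounting.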

I suspected a memory leak, but checking the official site turned up no known memory leak issues in this version, which left me stuck for a while. Searching online, I found this article: How to troubleshoot if the memory occupied by the mysqld process is too high?

How to troubleshoot if the memory occupied by the mysqld process is too high? (51CTO Blog)

Its earlier troubleshooting steps are the same as mine, but it adds one more step:

5. Caused by the behavior of glibc's memory allocator itself. In short, memory requested through glibc is not always returned to the OS after it is freed; instead it remains as fragments. As these fragments accumulate, the memory occupied by the mysqld process appears to keep growing. In this case you can call a function to actively reclaim and release the fragments.

[root@mysql#] gdb --batch --pid `pidof mysqld` --ex 'call malloc_trim(0)'

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
45305 mysql     20   0   28.4g   5.2g   8288 S   2.7 17.0  64:56.82 mysqld

This is similar to actively running OPTIMIZE TABLE to rebuild a table after an InnoDB table has accumulated too much fragmentation.

To verify this step, I experimented in a test environment first:

Use sysbench to run a stress test against the database. After the stress test had run for a while, mysqld's memory usage increased; after the stress test was stopped, memory usage did not come back down, meaning the memory was not released when the client threads were closed. Running gdb --batch --pid `pidof mysqld` --ex 'call malloc_trim(0)' then released the memory noticeably. Alternatively, the jemalloc allocator can be installed to manage memory. A sketch of the test commands follows.
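
A sketch of the kind of commands used for the test (host, credentials, table counts, thread counts, and durations are illustrative assumptions, not the exact values used):

# sysbench oltp_read_write --db-driver=mysql --mysql-host=127.0.0.1 --mysql-user=sbtest --mysql-password=xxxx --mysql-db=sbtest --tables=10 --table-size=1000000 prepare

# sysbench oltp_read_write --db-driver=mysql --mysql-host=127.0.0.1 --mysql-user=sbtest --mysql-password=xxxx --mysql-db=sbtest --tables=10 --table-size=1000000 --threads=500 --time=1800 run

While the test runs, and again after it stops, watch mysqld's resident memory with top -p `pidof mysqld`: the RES column stays high after the connections close, and drops only after calling malloc_trim(0) through gdb. If switching allocators instead, jemalloc is usually preloaded at server startup (for example via malloc-lib under [mysqld_safe], or LD_PRELOAD in the systemd unit).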

So I decided to run it in the production environment at night, when business load is at its lowest:

[root@bss-mysql-master logs]# gdb --batch --pid `pidof mysqld` --ex 'call malloc_trim(0)'
[New LWP 32146]
[New LWP 32145]
[New LWP 32144]
[New LWP 32143]
[New LWP 32142]
[New LWP 32141]
[New LWP 32140]
[New LWP 32139]
[New LWP 31937]
[New LWP 31936]
[New LWP 31935]
......
......
......
After running the command, memory usage dropped noticeably:
[root@bss-mysql-master logs]# free -m
              total        used        free      shared  buff/cache   available
Mem:          64258       46199         539         672       17519       16710
Swap:             0           0           0

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                                                                                                                                                                                  
42022 mysql     20   0   57.9g  44.3g   9480 S 535.8 70.6 359805:36 mysqld                                                                                                                                                                                                   
 1194 root      20   0  725044  26364   5176 S   7.0  0.0   7948:05 node_exporter                                                                                                                                                                                            
    1 root      20   0  195944   7560   1324 S   0.3  0.0 330:13.06 systemd         

This issue is settled for now; I will keep following up and observing.

gdb installation method:

# yum -y install gcc wget texinfo

# wget https://mirrors.tuna.tsinghua.edu.cn/gnu/gdb/gdb-8.1.tar.gz --no-check-certificate

# tar -zxf gdb-8.1.tar.gz

# cd gdb-8.1

# mkdir builddir

# cd builddir

# ../configure

# make && make install

# gdb --version

Memory recovery command: gdb --batch --pid `pidof mysqld` --ex 'call malloc_trim(0)'
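
If the trim needs to be repeated (for example from cron during the nightly low point), a minimal wrapper could look like the sketch below; the script and log paths are assumptions:

# cat /usr/local/bin/mysql_malloc_trim.sh
#!/bin/sh
# Ask glibc inside mysqld to return freed heap fragments to the OS.
PID=`pidof mysqld`
if [ -n "$PID" ]; then
    gdb --batch --pid "$PID" --ex 'call malloc_trim(0)' >> /var/log/mysql_malloc_trim.log 2>&1
fi

Schedule it for the low-traffic window, e.g. a crontab entry of 0 3 * * * /usr/local/bin/mysql_malloc_trim.sh. Note that gdb briefly pauses the process while it attaches, so this is best kept to off-peak hours, exactly as done above.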

Original link: When the database is running for a period of time, mysqld takes up more and more memory, reaching 90% - Tencent Cloud Developer Community - Tencent Cloud

Origin blog.csdn.net/weixin_42272246/article/details/127902705