MySQL core parameter optimization

1. Database server configuration

  • CPU: 48 cores
  • Memory: 128 GB
  • Disk: 3.2 TB SSD

2. CPU optimization

  • innodb_thread_concurrency=32
    Caps the number of threads allowed inside the InnoDB engine at the same time (here 32); once parsed SQL exceeds that limit, additional threads must queue. If the value is set too high, contention on hot data and global locks becomes severe and performance suffers.
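  • A quick way to verify or adjust this online (a sketch; the variable is dynamic, so no restart is needed):

    SHOW VARIABLES LIKE 'innodb_thread_concurrency';
    SET GLOBAL innodb_thread_concurrency = 32;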

3. Memory optimization

  • query_cache_type=0
  • query_cache_size=0
    The query cache; disabled by default since MySQL 5.6. Query caching is better implemented at the application layer, e.g. with Memcached or Redis.
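  • A minimal my.cnf fragment for this (note that the query cache was removed entirely in MySQL 8.0, where these variables no longer exist):

    [mysqld]
    query_cache_type = 0
    query_cache_size = 0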

4. IO optimization

  • 1. innodb_buffer_pool_size=50G
    Similar to Oracle's SGA; it determines how much data InnoDB can cache and is generally 60%-70% of physical memory. Here the 128 GB host runs 2 instances at 50 GB each, leaving 28 GB for the OS, MySQL connections, etc.
  • 2. innodb_io_capacity=20000
    The upper limit of IOPS that InnoDB background tasks may consume per second, generally about 75% of the disk's total IOPS capacity. For example, an SSD rated at 30K IOPS gives roughly 20K at 75%; with two instances that is halved to 10K per instance, and in general you divide by the number of instances sharing the disk.
  • 3. innodb_log_files_in_group=4
    The number of files in the InnoDB redo log group.
  • 4. innodb_log_file_size=1000M
    The redo log is written in a circular fashion; in production it should be larger than 1 GB.
    If it is too small, dirty pages from innodb_buffer_pool_size cannot be flushed in time and writes stall waiting for log space. To check whether it is large enough, look at Innodb_log_waits: if the value is greater than 0, increase this parameter or add more files to the log group.

    root@master 12:51:  [(none)]> show global status like '%log_wait%';
    +------------------+-------+
    | Variable_name    | Value |
    +------------------+-------+
    | Innodb_log_waits | 0     |
    +------------------+-------+
    1 row in set (0.00 sec)
    root@master 12:54:  [(none)]> show global status like '%Innodb_os_log_written%';
    +-----------------------+-------+
    | Variable_name         | Value |
    +-----------------------+-------+
    | Innodb_os_log_written | 1024  |
    +-----------------------+-------+
    1 row in set (0.00 sec)
    # The size of this counter can serve as a reference when sizing the log files
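    A common sizing check is to sample Innodb_os_log_written over an interval and make the total redo capacity (innodb_log_file_size * innodb_log_files_in_group) comfortably larger than the redo volume generated during peak activity (a sketch; the 60-second sample interval is arbitrary):

    SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';
    SELECT SLEEP(60);
    SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';
    -- redo written in 60s = second value - first value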
  • 5. innodb_flush_method=O_DIRECT
    On SSD, write data files directly to disk rather than through the hard disk / OS cache, i.e. bypass the page cache instead of double-buffering.
  • 6. innodb_max_dirty_pages_pct=50
    When dirty pages reach 50% of innodb_buffer_pool_size, a checkpoint is triggered and they are flushed to disk.
  • 7. innodb_file_per_table=on
    One file per table avoids the IO contention of a shared tablespace.
  • 8. innodb_page_size=4k
    The default is 16K. This host uses SSD, and SSD cells must be erased before they can be rewritten; the erase unit is an extent of 128 pages, so since 16K x 128 > 4K x 128, the smaller page means less data erased per cycle and higher efficiency.
  • 9. innodb_flush_neighbors=0
    Set to 0 on SSD; on SAS (spinning) disks turn it on so adjacent dirty pages are flushed together, converting random writes into sequential writes.
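  • Taken together, the IO settings above can be sketched as a my.cnf fragment (values assume this 128 GB dual-instance SSD host; adjust to your hardware):

    [mysqld]
    innodb_buffer_pool_size   = 50G     # ~60-70% of RAM, split across 2 instances
    innodb_io_capacity        = 20000   # ~75% of the SSD's rated IOPS
    innodb_log_files_in_group = 4
    innodb_log_file_size      = 1000M   # > 1G in production
    innodb_flush_method       = O_DIRECT  # bypass the OS page cache on SSD
    innodb_max_dirty_pages_pct = 50
    innodb_file_per_table     = ON
    innodb_page_size          = 4k      # only settable when initializing the data directory
    innodb_flush_neighbors    = 0       # SSD: no neighbor-page flushing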

5. Connection optimization

  • 1.back_log=300
  • The default is 50; this is the TCP/IP connection backlog. Each connection occupies 256 KB of memory (64 MB at most), so 256 KB * 300 ≈ 75 MB.
  • Related to the TCP three-way handshake.
  • The SYN queue length is the larger of 64 and tcp_max_syn_backlog (default 1024); it throttles bursts of incoming connections, and setting it too large wastes resources.
  • The accept queue length is the smaller of back_log and somaxconn; it exists to prevent packet loss. When many connections arrive and the queue reaches its limit, connections time out and the retransmission mechanism kicks in.
  • If 3000 connections arrive at once and the application has not yet dequeued them, the accept queue fills and the remaining 2700 connections are rejected; for each request taken off the queue (one connection; MySQL uses one thread per connection), a thread is created.

    net.ipv4.tcp_max_syn_backlog = 8192   # analogous to the capacity of a venue
    # Length of the SYN receive queue, default 1024. Increase it when MySQL receives
    # many requests in a short time; too large wastes resources, and too small causes
    # "unauthenticated user" entries in show processlist.
    net.core.somaxconn = 1024             # analogous to the number of seats in the venue
    # Prevents packet loss as much as possible; beyond this value connections time out
    # or are retransmitted. Bounded by the net.ipv4.ip_local_port_range range.
  • 2.max_connections=3000
  • Creating and destroying connections consume system resources such as memory and file handles.
  • The concurrency a business supports refers to the number of requests per second, i.e. QPS.
  • The number of SQL statements executing in parallel at any instant is capped by innodb_thread_concurrency.
  • If a request needs more than 64 MB (for sorting, say), it allocates temporary space and spills to disk.
  • If 3000 users connect to MySQL at the same time, the minimum memory required is 3000 * 256 KB ≈ 750 MB and the maximum is 3000 * 64 MB ≈ 192 GB. If innodb_buffer_pool_size is 80 GB, less than 48 GB of memory remains available, and 192 GB > 48 GB would cause swapping and hurt performance.
  • A high connection count does not necessarily raise throughput; it may simply consume more system resources.
  • Take a DB handling 3W QPS with 100 web servers in front: each web server needs 300 QPS. Time per query = network round trip + SQL execution time; at 20 ms per query, each server needs 300 * 0.02 = 6 connections.
  • Example 1: with 100 web servers, the PHP/Java connection pool per server can be set to 3000/100 = 30.
  • Example 2: with 30 web servers expanding to 60, how should the pools be resized? Per-server maximum: previously 3000/30 = 100, now 3000/60 = 50.
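  • The sizing arithmetic above as a commented fragment (the numbers are this section's example, not a general recommendation):

    # 100 web servers * 30 pooled connections each = 3000 DB connections
    # memory floor:   3000 * 256 KB ≈ 750 MB
    # memory ceiling: 3000 * 64 MB  ≈ 192 GB  (sorts etc. can spill to disk)
    max_connections = 3000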
  • 3.max_user_connections=2980
  • The remaining 20 connections are reserved for administration.
  • 4.table_open_cache=1024
  • A cache of open table descriptors; it is unrelated to how many tables exist.
  • If 1000 connections access table A, 1000 table instances are opened: MySQL creates 1000 objects for that table (think of the table as a class and each open as an instance of it), so each connection works against its own table object rather than opening the physical table on every access; a new object is created only when none is available in the cache.

    root@master 14:44:  [(none)]> show variables like '%table_open_cache';
    +------------------+-------+
    | Variable_name    | Value |
    +------------------+-------+
    | table_open_cache | 1024  |
    +------------------+-------+
    1 row in set (0.00 sec)
    root@master 14:46:  [(none)]> show global status like 'open%tables%'; 
    +---------------+-------+
    | Variable_name | Value |
    +---------------+-------+
    | Open_tables   | 19    |
    | Opened_tables | 113   |
    +---------------+-------+
    2 rows in set (0.00 sec)
  • Consider setting it to either max_connections or max_connections * (number of tables used simultaneously per query).
  • 5.thread_cache_size=512
  • With many short-lived connections arriving, a short-connection storm is easy to trigger.
  • Session layer: transaction state, authentication session
  • Connection layer: network connection, packet transmission
  • One user corresponds to one session, which corresponds to one connection.
  • Connection to thread: an operating system call.
  • 3000 incoming users share the 512 cached threads, returning them to the cache when finished, which avoids the overhead of creating and destroying threads.
  • 6.wait_timeout=120
  • After an application's (non-interactive) connection to MySQL has been idle for 120 seconds, it is disconnected.
  • 7.interactive_timeout=120
  • After an interactive mysql client connection has been idle for 120 seconds, it is disconnected.
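  • The connection settings above, collected into a single my.cnf fragment (a sketch using exactly the values from this section):

    [mysqld]
    back_log             = 300
    max_connections      = 3000
    max_user_connections = 2980   # leaves 20 connections for administration
    table_open_cache     = 1024
    thread_cache_size    = 512
    wait_timeout         = 120
    interactive_timeout  = 120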

6. Optimization of data consistency

  • 1.innodb_flush_log_at_trx_commit=1
  • 0: the log buffer is written and flushed to the redo log file about once per second, whether or not a commit occurs.
  • 1: at every transaction commit, the log buffer is written to the log file and flushed (fsync) to disk; the safest setting.
  • 2: at every transaction commit, the log buffer is written to the operating system cache and the OS decides when to flush it to disk; better performance, at the risk of losing up to a second of transactions on an OS crash.
  • https://blog.csdn.net/zengxuewen2045/article/details/51476186
  • 2.sync_binlog=1
  • 0: after a transaction commits, MySQL does not fsync the binlog; the file system decides when it reaches disk.
  • n: fsync the binlog to disk after every n commits.
  • Production should use 1.
  • 3. Log writing process
  • (figure: the log write path, corresponding to steps 1-5 below)
  • 1) Three update sessions; each of the three threads generates operation log entries in its own buffer.
  • 2) On commit the entries go into the shared log buffer; before that, the three threads cannot see each other's entries.
  • 3) write() copies them into the standard I/O cache, i.e. the per-file-handle, per-thread cache.
  • 4) To make the content visible to other threads, flush() pushes it into the globally visible file system cache.
  • 5) The last and most important step: sync the in-memory data to disk.
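  • For strict durability the two parameters above are commonly set together, the so-called "double 1" configuration:

    [mysqld]
    innodb_flush_log_at_trx_commit = 1   # fsync the redo log at every commit
    sync_binlog                    = 1   # fsync the binlog at every commit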
