10 ways to tune MySQL performance

 

MySQL is probably the most popular backend database on the web. Web development languages have evolved quickly in recent years, and PHP, Ruby, Python, and Java each have their own strengths. Although NoSQL comes up more and more often, I believe most architects will still choose MySQL for data storage.

 

MySQL is so convenient and stable that we rarely think about it while developing web applications. Even when we do think about optimization, it is at the application level, e.g. avoiding resource-hungry SQL statements. But beyond that, there is still a lot that can be optimized across the whole system.

Essential MySQL tuning and usage tips

1. Choose the appropriate storage engine: InnoDB

Unless your tables are read-only or used for full-text search (and when it comes to full-text search, few people reach for MySQL anymore), you should choose InnoDB by default.

When you benchmark it yourself, you may find MyISAM faster than InnoDB. This is because MyISAM caches only indexes while InnoDB caches both data and indexes, and MyISAM does not pay the cost of supporting transactions. But with innodb_flush_log_at_trx_commit = 2 you can get performance close to MyISAM's; otherwise the gap can be as much as a hundredfold.

 

1.1 How to convert an existing MyISAM database to InnoDB:

mysql -u [USER_NAME] -p -e "SHOW TABLES IN [DATABASE_NAME];" | tail -n +2 | xargs -I '{}' echo "ALTER TABLE {} ENGINE=InnoDB;" > alter_table.sql
# keep full-text search_* tables on MyISAM
perl -p -i -e 's/(search_[a-z_]+ ENGINE=)InnoDB/\1MyISAM/g' alter_table.sql
mysql -u [USER_NAME] -p [DATABASE_NAME] < alter_table.sql

1.2 Create InnoDB FILE for each table separately:

innodb_file_per_table=1

This keeps the shared ibdata1 file from growing out of control, especially when running mysqlcheck -o --all-databases.

2. Make sure data is read from memory: keep the data in memory

2.1 Large enough innodb_buffer_pool_size

It is recommended that innodb_buffer_pool_size be large enough to hold all your data; in other words, plan its capacity according to how much you store. That way data is read entirely from memory and disk operations are minimized.

 

2.1.1 How to tell whether innodb_buffer_pool_size is large enough, i.e. whether data is being read from memory rather than from disk

Method 1

mysql> SHOW GLOBAL STATUS LIKE 'innodb_buffer_pool_pages_%';
+----------------------------------+--------+
| Variable_name                    | Value  |
+----------------------------------+--------+
| Innodb_buffer_pool_pages_data    | 129037 |
| Innodb_buffer_pool_pages_dirty   | 362    |
| Innodb_buffer_pool_pages_flushed | 9998   |
| Innodb_buffer_pool_pages_free    | 0      |  <-- note: no free pages
| Innodb_buffer_pool_pages_misc    | 2035   |
| Innodb_buffer_pool_pages_total   | 131072 |
+----------------------------------+--------+
6 rows in set (0.00 sec)

Innodb_buffer_pool_pages_free is 0 here, which means the buffer pool is completely used up; innodb_buffer_pool_size needs to be increased.
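
As a quick check, the usage ratio can be computed from the status output with a short pipeline. This is a minimal sketch assuming the standard two-column output of SHOW GLOBAL STATUS; the mysql invocation in the comment is illustrative:

```shell
# Compute buffer pool usage from SHOW GLOBAL STATUS output.
# In practice you would pipe in:
#   mysql -NBe "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_%'"
buffer_pool_usage() {
  awk '
    $1 == "Innodb_buffer_pool_pages_free"  { free  = $2 }
    $1 == "Innodb_buffer_pool_pages_total" { total = $2 }
    END { printf "%.1f%% used (%d of %d pages free)\n",
          100 * (total - free) / total, free, total }
  '
}

# Demo with the sample values from the table above:
printf 'Innodb_buffer_pool_pages_free\t0\nInnodb_buffer_pool_pages_total\t131072\n' | buffer_pool_usage
```

A result near 100% used with zero free pages is the signal, as above, that the pool should grow.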

Several other parameters of InnoDB:

innodb_additional_mem_pool_size = 1/200 of innodb_buffer_pool_size (this variable was removed in MySQL 5.7)
innodb_max_dirty_pages_pct = 80

Method 2

Alternatively, use iostat -d -x -k 1 to watch disk activity.

2.1.2 Checking whether the server has enough memory for this plan

Run echo 1 > /proc/sys/vm/drop_caches to clear the operating system's file cache, so that you can see the real memory usage.

2.2 Data warm-up

By default, a piece of data is cached in innodb_buffer_pool only after it has been read once. So right after the database starts, you need to warm it up: load all the data on disk into memory beforehand.

Data warm-up can improve read speed.

For InnoDB databases, the following methods can be used to preheat data:

1. Save the following script as MakeSelectQueriesToLoad.sql

SELECT DISTINCT
    CONCAT('SELECT ',ndxcollist,' FROM ',db,'.',tb,
    ' ORDER BY ',ndxcollist,';') SelectQueryToLoadCache
    FROM
    (
        SELECT
            engine,table_schema db,table_name tb,
            index_name,GROUP_CONCAT(column_name ORDER BY seq_in_index) ndxcollist
        FROM
        (
            SELECT
                B.engine,A.table_schema,A.table_name,
                A.index_name,A.column_name,A.seq_in_index
            FROM
                information_schema.statistics A INNER JOIN
                (
                    SELECT engine,table_schema,table_name
                    FROM information_schema.tables WHERE
                    engine='InnoDB'
                ) B USING (table_schema,table_name)
            WHERE B.table_schema NOT IN ('information_schema','mysql')
            ORDER BY table_schema,table_name,index_name,seq_in_index
        ) A
        GROUP BY table_schema,table_name,index_name
    ) AA
ORDER BY db,tb
;

2. Run

mysql -uroot -AN < /root/MakeSelectQueriesToLoad.sql > /root/SelectQueriesToLoad.sql

3. Every time the database is restarted, or when the entire database needs to be warmed up before backup, run:

mysql -uroot < /root/SelectQueriesToLoad.sql > /dev/null 2>&1

2.3 Don't store data in SWAP

If this is a dedicated MySQL server, SWAP can be disabled. If it is a shared server, make sure innodb_buffer_pool_size is large enough, or lock MySQL's memory in RAM with the memlock directive.

 

3. Regularly optimize and rebuild the database

mysqlcheck -o --all-databases alone will keep ibdata1 growing; the real optimization is rebuilding the table structure:

CREATE TABLE mydb.mytablenew LIKE mydb.mytable;
INSERT INTO mydb.mytablenew SELECT * FROM mydb.mytable;
ALTER TABLE mydb.mytable RENAME mydb.mytablezap;
ALTER TABLE mydb.mytablenew RENAME mydb.mytable;
DROP TABLE mydb.mytablezap;
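
As an alternative to the copy-and-rename approach above, ALTER TABLE ... ENGINE=InnoDB rebuilds a table in place. A short sketch that generates such statements for a list of tables; the mysql command in the comment is an assumption about your setup:

```shell
# Generate in-place rebuild statements for every table in a database.
# With innodb_file_per_table=1, ALTER TABLE ... ENGINE=InnoDB rebuilds
# the table and reclaims space into its own .ibd file.
gen_rebuild_sql() {
  while read -r tbl; do
    printf 'ALTER TABLE %s ENGINE=InnoDB;\n' "$tbl"
  done
}

# In practice: mysql -NBe "SHOW TABLES IN mydb" | gen_rebuild_sql > rebuild.sql
printf 'users\norders\n' | gen_rebuild_sql
```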

4. Reduce disk write operations

4.1 Use a large enough write cache innodb_log_file_size

Note, however, that with a 1 GB innodb_log_file_size, recovery after a server crash can take on the order of 10 minutes.

 

It is recommended to set innodb_log_file_size to 0.25 * innodb_buffer_pool_size.
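
For instance, the rule of thumb can be applied with a little shell arithmetic; the 8 GB pool size below is just an example value:

```shell
# Rule of thumb: innodb_log_file_size = 0.25 * innodb_buffer_pool_size
buffer_pool_mb=8192                      # e.g. an 8 GB buffer pool
log_file_mb=$(( buffer_pool_mb / 4 ))    # 25% of the pool
echo "innodb_log_file_size=${log_file_mb}M"
```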

4.2 innodb_flush_log_at_trx_commit

This option is closely related to write to disk operations:

innodb_flush_log_at_trx_commit = 1: every change is flushed to disk at commit
innodb_flush_log_at_trx_commit = 0 or 2: the log is flushed to disk about once per second

If your application does not demand very high durability (it is not, say, a financial system), or the infrastructure is reliable enough, or the transactions are all small, you can use 0 or 2 to reduce disk operations.

 

4.3 Avoid Double Write Buffering

innodb_flush_method=O_DIRECT
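
Putting sections 1 through 4 together, a my.cnf fragment for a write-heavy dedicated server might look like this; the values are illustrative, not recommendations for your hardware:

```ini
[mysqld]
innodb_file_per_table          = 1
innodb_buffer_pool_size        = 8G       # sized to hold the working set
innodb_log_file_size           = 2G       # roughly 0.25 * buffer pool
innodb_flush_log_at_trx_commit = 2        # flush the log about once per second
innodb_flush_method            = O_DIRECT # avoid double write buffering
```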

5. Improve disk read and write speed

Use RAID 0. Especially when using virtual disks such as EBS on EC2, software RAID 0 is important.

6. Make the most of indexes

6.1 View existing table structure and indexes

SHOW CREATE TABLE db1.tb1\G

6.2 Add the necessary indexes

Indexes are the most important means of improving query speed; the inverted index used by search engines works on the same principle.

Which indexes to add should be determined by your queries, for example by analyzing the slow query log or the general query log, or by inspecting queries with EXPLAIN.

ADD UNIQUE INDEX
ADD INDEX
6.2.1 For example, optimize the user authentication table:

Add the indexes:

ALTER TABLE users ADD UNIQUE INDEX username_ndx (username);
ALTER TABLE users ADD UNIQUE INDEX username_password_ndx (username,password);

Warm up the data every time the server restarts:

echo "SELECT username,password FROM users;" > /var/lib/mysql/upcache.sql

Register the warm-up script as an init file in my.cnf:

[mysqld]
init-file=/var/lib/mysql/upcache.sql

6.2.2 Use a framework that adds indexes or splits tables automatically

For example, Rails is such a framework: it adds indexes automatically. A framework like Drupal automatically splits the table structure for you. Such frameworks point you in the right direction early in development, so insisting on building everything from scratch is actually a bad habit for less experienced developers.

 

7. Analyze query logs and slow query logs

Logging all queries is very useful in ORM-based systems, or in any system that generates query statements.

log=/var/log/mysql.log

Be careful not to enable this in production, or it will fill up your disk.

 

Log queries that take longer than 1 second to run:

long_query_time=1
log-slow-queries=/var/log/mysql/log-slow-queries.log

(In newer MySQL versions, use slow_query_log=1 and slow_query_log_file instead of log-slow-queries.)
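
Once entries accumulate, the log can be summarized with a short script. This is a sketch that assumes the standard "# Query_time:" header lines of the slow log format; the bundled mysqldumpslow tool does a more thorough job of grouping similar queries:

```shell
# Summarize a slow query log: count entries and report the worst query time.
summarize_slow_log() {
  awk '/^# Query_time:/ { n++; if ($3 > max) max = $3 }
       END { printf "%d slow queries, worst: %.2fs\n", n, max }'
}

# Hypothetical sample input; in practice:
#   summarize_slow_log < /var/log/mysql/log-slow-queries.log
printf '# Query_time: 2.5 Lock_time: 0.0\nSELECT 1;\n# Query_time: 1.2 Lock_time: 0.0\nSELECT 2;\n' | summarize_slow_log
```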

8. A radical approach: use a RAM disk

Infrastructure reliability is now very high; on EC2, for example, you hardly need to worry about server hardware failing. And memory is genuinely cheap: it is easy to buy a server with dozens of gigabytes of RAM, so you can use a RAM disk and back it up to persistent disk regularly.

 

Migrate the MySQL data directory to a 4 GB RAM disk:

mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=4000M tmpfs /mnt/ramdisk/
mv /var/lib/mysql /mnt/ramdisk/mysql
ln -s /mnt/ramdisk/mysql /var/lib/mysql
chown -R mysql:mysql /mnt/ramdisk/mysql
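
Because tmpfs contents vanish on reboot, regular backups to persistent disk are essential. A minimal sketch, with assumed paths; in production you would stop writes or use a proper backup tool such as mysqldump rather than copying live InnoDB files:

```shell
# Snapshot the RAM-disk datadir to persistent storage.
backup_ramdisk() {
  src="$1"; dest="$2"
  mkdir -p "$dest"
  cp -a "$src/." "$dest/"   # preserve ownership, modes, and timestamps
}

# Usage (e.g. from cron): backup_ramdisk /mnt/ramdisk/mysql /var/backups/mysql
```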

9. Use MYSQL in a NOSQL way

The B-tree is still one of the most efficient index structures, so MySQL is by no means out of date.

 

Use HandlerSocket to skip MySQL's SQL parsing layer, and MySQL effectively becomes a NoSQL store.

10. Others

  • Add LIMIT 1 to single-row queries so the scan stops at the first match instead of reading the whole table.
  • Store non-indexed data separately, e.g. keep large article bodies in their own table so they do not slow down other queries.
  • Avoid nondeterministic MySQL built-in functions such as NOW() in queries, because such queries are not stored in the query cache.
  • PHP establishes connections very quickly, so connection pooling is usually unnecessary; pooling can even exhaust the connection limit. A PHP program without pooling can also use up all connections, for example when @ignore_user_abort(TRUE) is used.
  • Use an IP address instead of a hostname in the database connection path, to avoid DNS resolution problems.

