Summary of possible reasons for MySQL slow SQL in performance testing

01. The queried table is not indexed

A query SQL is written, but the column in the query condition has no index, so MySQL must scan the whole table to find the rows. This is the cause people run into most often, and it is also the easiest to understand.

Generally speaking, when the table is fairly small, say under roughly 100,000 rows, the query does not feel slow. Once the table reaches or exceeds the 100,000-row level, though, query time becomes noticeably long.
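
A minimal sketch of how this shows up in EXPLAIN, using a hypothetical `orders` table (the table, column, and index names are invented for illustration):

```sql
-- Hypothetical table: orders(id, customer_id, amount), no index on customer_id.
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
-- type: ALL  (full table scan; rows ~= total row count of the table)

-- Add an index so the optimizer can do an index lookup instead:
ALTER TABLE orders ADD INDEX idx_customer_id (customer_id);

EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
-- type: ref, key: idx_customer_id  (only the matching rows are examined)
```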

02. The query's index fails to be used

Everyone knows indexes are important, so some indexes are usually added when a table is created. However, having an index does not mean queries will be faster, because it also depends on whether the index can actually be used.

The following are common causes of index failure (a few of them are illustrated in the SQL sketch after this list):

The query condition contains no indexed column.

The query condition uses OR and one branch of the OR has no index, so the index fails to be used.

The query condition uses LIKE with a leading wildcard (fuzzy match from the head of the string), so the index fails to be used.

The query condition does not satisfy the leftmost-prefix rule of a composite index, so the index fails to be used.

The indexed column in the query condition undergoes an implicit type conversion, so the index fails to be used.

A function is applied to the indexed column in the query condition, so the index fails to be used.

An arithmetic operation (+, -, ...) is applied to the indexed column in the query condition, so the index fails to be used.

The indexed column is tested with negations or NULL checks (!=, <>, IS NULL, IS NOT NULL, ...), so the index fails to be used.

The join columns on the two sides of a join have inconsistent types, so the index fails to be used.
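
A few of these patterns sketched as SQL, against a hypothetical `users` table whose names and indexes are assumptions for illustration only:

```sql
-- Assumed: users(id, name, phone VARCHAR, age) with an index on phone
-- and a composite index idx_name_age (name, age).

-- Leading-wildcard LIKE: the index on phone cannot be used.
SELECT * FROM users WHERE phone LIKE '%1234';         -- full scan
SELECT * FROM users WHERE phone LIKE '138%';          -- index usable

-- Implicit type conversion: phone is VARCHAR, so comparing it to a number
-- converts every row's value and defeats the index.
SELECT * FROM users WHERE phone = 13800000000;        -- index fails
SELECT * FROM users WHERE phone = '13800000000';      -- index used

-- Function or arithmetic on the indexed column.
SELECT * FROM users WHERE LEFT(name, 3) = 'Tom';      -- index fails
SELECT * FROM users WHERE id + 1 = 100;               -- index fails
SELECT * FROM users WHERE id = 99;                    -- rewritten: index used

-- Leftmost-prefix rule: idx_name_age (name, age) cannot serve a
-- condition on age alone.
SELECT * FROM users WHERE age = 20;                   -- index fails
SELECT * FROM users WHERE name = 'Tom' AND age = 20;  -- index used
```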

03. The query uses a temporary table

You may not be familiar with temporary tables, but you have probably heard of a "back to the table" lookup: when one pass cannot satisfy the query, MySQL has to go back and look again, and needing two lookups to produce the result is of course slower.

How are temporary tables generated? When the rows returned by one step of a query still need to be filtered or displayed in a later step, and they lack the filter columns or display columns that step requires, MySQL has to go back to the original table for the missing data, and the intermediate rows are staged in a temporary table. The extra lookup already wastes time, and on top of that the temporary table has a space limit and occupies memory, so it may not be able to hold all the rows. In general, whenever a query resorts to a temporary table, its performance is poor.
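
Whether a query resorts to a temporary table can be seen in the Extra column of EXPLAIN. A rough sketch, again with invented table names (the exact Extra output varies by MySQL version):

```sql
-- GROUP BY on a non-indexed column typically materializes a temporary table.
EXPLAIN SELECT status, COUNT(*) FROM orders GROUP BY status;
-- Extra: Using temporary

-- With an index on status, MySQL can aggregate by scanning the index instead.
ALTER TABLE orders ADD INDEX idx_status (status);
EXPLAIN SELECT status, COUNT(*) FROM orders GROUP BY status;
-- Extra: Using index
```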

04. Too many joins or subqueries

Join queries are very common in real work. The more tables a join involves, the more complicated the row filtering becomes, and the longer it naturally takes. So the usual advice is to join no more than three tables, to put tables with small data volumes on the left (as the driving table), and large tables on the right.
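
A hedged sketch of the small-drives-large idea; STRAIGHT_JOIN is a MySQL hint that forces the written join order, and is only worth using when you are confident the optimizer chose a worse order (the table names are invented):

```sql
-- small_dim: hundreds of rows; big_fact: millions of rows, indexed on dim_id.
-- Letting the small table drive the join keeps the outer loop short.
SELECT f.*
FROM small_dim AS d
STRAIGHT_JOIN big_fact AS f ON f.dim_id = d.id
WHERE d.region = 'EU';
```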

05. The query result set is too large

An oversized result set commonly takes two forms. The first: the table being queried is itself huge, say tens of millions of rows. Even with an index, the index file will be very large and the B-tree deep, so lookups are naturally slow. The second: the Cartesian product of a join is too large.

For the first case, the usual suggestion is to partition the table. For the second, the simple, blunt fix is to split the SQL into smaller queries.
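
As a sketch of the first suggestion, range-partitioning a large table by date might look like the following; the schema is invented, and note that in MySQL the partitioning column must be part of every unique key, including the primary key:

```sql
CREATE TABLE orders_part (
    id         BIGINT NOT NULL,
    created_at DATE   NOT NULL,
    amount     DECIMAL(10, 2),
    PRIMARY KEY (id, created_at)        -- must include the partition column
)
PARTITION BY RANGE (YEAR(created_at)) (
    PARTITION p2022 VALUES LESS THAN (2023),
    PARTITION p2023 VALUES LESS THAN (2024),
    PARTITION pmax  VALUES LESS THAN MAXVALUE
);

-- A query that filters on created_at only has to touch one partition:
SELECT SUM(amount)
FROM orders_part
WHERE created_at BETWEEN '2023-01-01' AND '2023-12-31';
```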

06. Lock contention

Nowadays, MySQL tables generally use the InnoDB storage engine, which locks at row granularity, one row at a time. If a transaction is modifying a row, that row is locked and no other transaction can touch it until the first transaction finishes and commits its changes; only then can a waiting transaction acquire the lock and proceed. So a transaction that makes changes but never ends forces every subsequent transaction on that row to wait. If several transactions are queued up when the current one finally finishes, they all compete for the lock at once, and once this kind of contention pile-up starts, the SQL becomes very slow.
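
When lock contention is suspected, MySQL ships views that show who is blocking whom. A minimal sketch, assuming MySQL 5.7 or later with the sys schema installed:

```sql
-- Current lock waits: which session is blocked and which one holds the lock.
SELECT * FROM sys.innodb_lock_waits;

-- Transactions that have been open a long time without committing
-- (a common source of pile-ups).
SELECT trx_id, trx_state, trx_started, trx_mysql_thread_id
FROM information_schema.INNODB_TRX
ORDER BY trx_started;
```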

07. LIMIT paging with too deep an offset

Sometimes we need to skip over a certain number of rows to fetch the ones we want, and LIMIT with an offset is the obvious tool. But with a large offset, the SQL executes very, very slowly: MySQL still reads all the skipped rows, page by page, into the buffer pool, and when the volume is large they occupy a lot of buffer pool space. That space is fixed by configuration and is usually not very large, hence the slow SQL.

For this problem, the usual optimization is to write a filter condition (for example on the primary key) and combine it with LIMIT, as sketched below.
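
A sketch of that rewrite, often called keyset (or seek) pagination, against a hypothetical `articles` table paged by its primary key:

```sql
-- Deep offset: MySQL still reads and discards the first 1,000,000 rows.
SELECT id, title FROM articles ORDER BY id LIMIT 1000000, 20;

-- Keyset pagination: remember the last id of the previous page and filter
-- on it, so the index jumps straight to the right position.
SELECT id, title
FROM articles
WHERE id > 1000000        -- last id seen on the previous page
ORDER BY id
LIMIT 20;
```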

08. Unreasonable configuration parameters

We often install a database and use it as-is, without paying much attention to its configuration parameters. This article has already mentioned buffers several times; the buffer is a very important database setting. MySQL has many parameters whose names contain buffer, cache, size, length, max, min, limit, and so on, and these directly determine database performance. If your database runs on a high-spec machine but you never adjust these parameters and leave everything at its default value, you can only lament, "why is performance so poor despite such a high hardware configuration?"
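
For instance, innodb_buffer_pool_size is usually the single most impactful of these parameters. A quick sketch of checking and raising it; the 8G figure is an illustrative assumption, not a universal recommendation:

```sql
-- Current buffer pool size in bytes; the default is often far smaller than
-- what a dedicated database server can afford.
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

-- A common rule of thumb for a dedicated server is 50-70% of RAM,
-- set in my.cnf:
--   [mysqld]
--   innodb_buffer_pool_size = 8G
```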

09. Frequent flushing of dirty pages

A dirty page is a data page whose copy in memory no longer matches the copy on disk, which usually happens during update operations. To update data, MySQL first reads the page, modifies it in memory, and writes the change to the redo log; the table's data files are updated later by replaying those logged changes. When the volume of updates is large, the buffer pool fills up, or the redo log fills up, and the whole process stalls while pages are flushed.

The usual recommendation for this kind of problem is to modify data in small batches and commit multiple times.
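
A sketch of what "small batches, multiple commits" can look like in plain SQL; the table and column names are invented, and UPDATE ... LIMIT is MySQL-specific:

```sql
-- Instead of one huge UPDATE that dirties millions of pages at once,
-- update a bounded batch and commit, then repeat (e.g. from a script)
-- until the statement reports 0 rows affected.
UPDATE orders
SET status = 'archived'
WHERE status = 'closed'
  AND created_at < '2023-01-01'
LIMIT 10000;
COMMIT;
```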

10. Insufficient system resources

A database stores data and therefore operates on disk constantly, so we usually choose a machine with good disk I/O performance as the database server. It also exchanges data in and out of memory all the time, so it needs plenty of RAM as well. But hardware is only the baseline for selecting a database server: the database is still software installed on an operating system, so it is also subject to operating-system parameter limits. When hardware resources run short, or an OS limit is hit, operations slow down too.


Origin: blog.csdn.net/spasvo_dr/article/details/132626681