Slow SQL Locating and Analysis

1. Check whether the slow query log is enabled

mysql> show variables like '%slow_query_log';
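The output looks roughly like the following (a sketch; the value depends on the server configuration, and on this server it is OFF):

+----------------+-------+
| Variable_name  | Value |
+----------------+-------+
| slow_query_log | OFF   |
+----------------+-------+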

 

 

2. The output above shows the slow query log is OFF (closed); enable it with the following command:

mysql> set global slow_query_log = 'ON';

3. Run the same query again to confirm the slow query log is now open (ON).
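Re-running the check should now show ON. It is also worth noting the companion variable slow_query_log_file, which tells you where the log is written and is needed later when feeding the log to mysqldumpslow (the trailing % below makes the pattern match it as well):

mysql> show variables like '%slow_query_log%';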

 

 

4. Check the slow query time threshold; the default shown here is 10 seconds.

mysql> show variables like '%long_query_time%';

 

 

5. Set the slow query threshold to 3 seconds.

mysql> set global long_query_time = 3;

Any SQL statement whose execution time exceeds 3 seconds will now be written to the slow query log.
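One way to verify the setup is to deliberately run a statement slower than the threshold. A minimal sketch (SLEEP is a built-in MySQL function, the 5-second value is arbitrary, and the session variable is set as well because SET GLOBAL only affects connections opened afterwards):

mysql> set session long_query_time = 3;
mysql> select sleep(5);

The SELECT SLEEP(5) statement should then appear in the slow query log.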

 

 

6. Use mysqldumpslow, the slow query log analysis tool that ships with MySQL, to summarize the log (the tool is a Perl script, so Perl must be installed).

The main mysqldumpslow options are:

-s: how to sort the output. The available sort keys are c (query count), t (query time), l (lock time), r (rows returned), and their averages at (average query time), al (average lock time) and ar (average rows returned); at is the default.

-t: show only the top N entries in the output.

-g: only include statements that match the given pattern (grep-style, case-insensitive).

For example, to see the two slowest statements sorted by query time:

perl mysqldumpslow.pl -s t -t 2 "C:\ProgramData\MySQL\MySQL Server 8.0\Data\DESKTOP-4BK02RP-slow.log"
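The -g option can be combined with these to narrow the output further. For instance, to see only slow statements that mention the product_comment table used later in this article (the table name here is just an illustration), one might run:

perl mysqldumpslow.pl -s t -t 5 -g "product_comment" "C:\ProgramData\MySQL\MySQL Server 8.0\Data\DESKTOP-4BK02RP-slow.log"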

 

 


Once the slow query log is enabled and a suitable slow query time threshold is set, any SQL statement whose execution time exceeds that threshold is saved to the slow query log, and the mysqldumpslow tool can then be used to pull out the statements we want to examine.

 

7. View the execution plan using EXPLAIN

EXPLAIN SELECT comment_id, product_id, comment_text, product_comment.user_id, user_name
FROM product_comment
JOIN user ON product_comment.user_id = user.user_id;

 

 


EXPLAIN helps us understand the order in which tables are read (id), the type of each SELECT clause (select_type), which table is accessed (table), the access type (type), the indexes that might be used (possible_keys), the index actually used (key), the length of the index used (key_len), which column or constant is matched against the index in the join condition (ref), the estimated number of rows examined (rows), and additional optimizer information such as whether an external sort or a temporary table is used (Extra).

Execution order follows the id column: the larger the id, the earlier that step is executed (descending by id); rows with the same id are executed from top to bottom.
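As an illustration (the query itself is made up from the two tables in the query above), a statement containing a UNION produces several EXPLAIN rows with different id values, one per SELECT plus a row for the union result:

EXPLAIN SELECT user_id FROM user UNION SELECT user_id FROM product_comment;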

The type column, which describes how each table is accessed, is usually the information we care about most. From best to worst, its common values are system, const, eq_ref, ref, range, index and ALL:

 

 

 
Among these, ALL is the worst case, because it means a full table scan. index is similar to ALL except that the full scan is done over the index rather than the table; the benefit is that the rows come back already ordered by the index, so no extra sort is needed, but the cost is still high. If Using index appears in the Extra column, a covering index is being used: the index already contains every column the SELECT needs, so there is no need to go back to the table, which reduces the cost of the lookup.
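As a sketch of how this information can be acted on (the index name is hypothetical, and it assumes product_comment does not yet have a secondary index on user_id), adding an index on the join column typically changes the access type for product_comment in the EXPLAIN above from ALL to ref:

mysql> CREATE INDEX idx_product_comment_user_id ON product_comment (user_id);
mysql> EXPLAIN SELECT comment_id, product_id, comment_text, product_comment.user_id, user_name
    -> FROM product_comment JOIN user ON product_comment.user_id = user.user_id;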

 


Origin www.cnblogs.com/yanpan/p/11490567.html