
How to optimize slow queries?

Check if the slow log is enabled

  • SHOW VARIABLES LIKE '%slow%';
  • Enable slow query log
Method 1: temporary (lost when the server restarts)
/* Enable the slow query log */
mysql> set global slow_query_log=on;
Query OK, 0 rows affected (0.08 sec)

/* Set the slow query threshold (in seconds): queries taking longer than this are logged */
mysql> set global long_query_time=3600;
Query OK, 0 rows affected (0.08 sec)

/* Control whether SQL queries that do not use an index are logged */
mysql> set global log_queries_not_using_indexes=on;
Query OK, 0 rows affected (0.00 sec)

Method 2: permanent
/* Edit the configuration file: add these lines to my.ini (my.cnf on Linux) */
slow_query_log=ON
slow_query_log_file=/usr/local/mysql/var/localhost-slow.log
long_query_time=0
log-queries-not-using-indexes = 1
# After configuring, restart the MySQL service
  • Check the corresponding slow log
show variables like '%quer%'; /* slow_query_log_file is the path of the log file */
  • Find which SQL statements are slow
  • Use EXPLAIN SELECT * FROM table_name; to analyze the SQL
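As an illustration, here is a hypothetical EXPLAIN run (the `orders` table and `customer_id` column are made up for this sketch). The `type`, `key`, and `rows` columns of the output show whether an index is used and roughly how many rows must be examined:

```sql
-- Hypothetical table, for illustration only
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

-- Key columns in the EXPLAIN output:
--   type: access type (ALL = full table scan; ref/range = index used)
--   key:  the index actually chosen, or NULL if none
--   rows: estimated number of rows MySQL must examine
```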

Examine the SQL statement

  • No index, or index failure, leads to slow queries
    • If the table is very large and the fields in WHERE or ORDER BY have no index, the query must scan the whole table, which is very expensive.
    • An existing index can also fail to be used, so index failure is another main cause of slow queries.
    • Index failure is usually caused by how the SQL statement is written.
    • For example: a fuzzy query with a leading %, applying a function to the queried field, arithmetic on the field, implicit type conversion, or joining tables whose columns use different character sets can all prevent the index from being used.
  • Inappropriate SQL statements
    • SELECT *
    • Sorting on non-indexed fields
    • LIMIT 66660,10;
      • MySQL has to read 66670 rows and discard the first 66660, which slows the query down.
      • We can add a WHERE condition (e.g. on an indexed id) to skip the first 66660 rows directly, then use LIMIT 10.
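The index-failure patterns listed above can be sketched against a hypothetical `users` table (assumed to have indexes on `name` and `phone`, with `phone` stored as VARCHAR; all names here are made up):

```sql
-- Assumed for illustration: CREATE INDEX idx_name ON users(name);
SELECT * FROM users WHERE name LIKE '%son';      -- leading %: index cannot be used
SELECT * FROM users WHERE name LIKE 'John%';     -- trailing % only: index can be used
SELECT * FROM users WHERE UPPER(name) = 'JOHN';  -- function on the column: index not used
SELECT * FROM users WHERE id + 1 = 100;          -- arithmetic on the column: index not used
SELECT * FROM users WHERE phone = 13800000000;   -- implicit type conversion (VARCHAR vs number): index not used
```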
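The LIMIT optimization above can be sketched as follows, assuming a table `t` with an auto-increment primary key `id` (an assumption for this sketch; the simple `id >` form also assumes no gaps in the id sequence, while the subquery form does not):

```sql
-- Slow: reads 66670 rows, discards the first 66660
SELECT * FROM t ORDER BY id LIMIT 66660, 10;

-- Faster: seek directly to the boundary via the primary key index
SELECT * FROM t WHERE id > 66660 ORDER BY id LIMIT 10;

-- Alternative: a subquery that only scans the index to find the starting id
SELECT * FROM t
WHERE id >= (SELECT id FROM t ORDER BY id LIMIT 66660, 1)
ORDER BY id LIMIT 10;
```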


Origin blog.csdn.net/rod0320/article/details/123492304