MySQL: slow SQL queries

Querying slow SQL statement records in MySQL

The slow query log (slow_query_log) records SQL statements that run slowly. By querying this log you can find out which statements are slow, so that the slow SQL can be optimized.

1. Log in to the MySQL database:
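
For example, from the shell (assuming a local server and the root account; adjust the user and host to your environment):

mysql -u root -p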

2. Check whether the slow query log is currently enabled and what the slow query time threshold is:

show variables like 'slow_query_log';

show variables like 'long_query_time';

3. If the query shows that slow_query_log is OFF, switch it to ON:

set global slow_query_log='ON';

4. Set the slow query time threshold to 1 second:
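
Using the long_query_time variable shown above, for example:

set global long_query_time=1;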

Note that a value set with set global only applies to new connections, and like other set global changes it is lost when the database is restarted unless it is also written to the configuration file (see the snippet after step 5).

5. Set the location where the slow query log file is saved:

set global slow_query_log_file='/var/lib/mysql/test_1116.log';
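
To make these settings survive a restart, they can also be placed in the MySQL configuration file (a sketch; the file is commonly /etc/mysql/my.cnf or /etc/my.cnf depending on the distribution):

[mysqld]
slow_query_log=ON
long_query_time=1
slow_query_log_file=/var/lib/mysql/test_1116.log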

6. View the log file configured above:

sudo subl /var/lib/mysql/test_1116.log
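
subl opens the file in Sublime Text; any viewer such as less or tail works equally well. If the log grows large, the mysqldumpslow utility bundled with MySQL can summarize it, for example listing the ten slowest queries (a sketch, assuming the tool is on your PATH):

mysqldumpslow -s t -t 10 /var/lib/mysql/test_1116.log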

MySQL database slow query troubleshooting method

How slow queries manifest in MySQL

The obvious symptom is that most application functions slow down, but they do not stop working entirely; requests still get a response after a long wait, so the whole system feels very sluggish.

Querying the number of slow queries

Generally speaking, for a normally running MySQL server, single-digit slow queries per minute is normal, and an occasional spike to double digits is acceptable. If the number approaches 100, the system probably has a problem but can still limp along. In the incidents described here, the number of slow queries reached more than 1,000 per minute.

The slow query records are stored in the slow_log table in the mysql schema.

SELECT * FROM `slow_log` where start_time > '2022/11/01 00:00:00';

This returns the slow queries recorded since that date.
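
Note that slow queries are only written to this table when the log_output variable includes TABLE; by default they go to a file. You can check and change this with:

show variables like 'log_output';

set global log_output='TABLE';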

Checking the status of queries currently in progress

show processlist is a command you will use often to view the queries currently executing in the system. The same data is also exposed in the processlist table of the information_schema schema, so when you want to filter on conditions it is more convenient to query that table directly.

For example, to view all current sessions:

select * from information_schema.processlist;

To view the currently running queries, sorted by execution time in descending order:

select * from information_schema.processlist where info is not null order by time desc;

For a normally running database, individual queries finish so quickly that this select captures very few rows with a non-null info column; even on a heavily loaded database like ours it usually finds only a handful. If you can catch dozens of queries with non-empty info at once, the system can be considered to have a problem.
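
A variant that keeps only active sessions that have been running for more than a few seconds (a sketch; the 5-second threshold is an arbitrary example):

select id, user, host, db, time, state, info from information_schema.processlist where command <> 'Sleep' and time > 5 order by time desc;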

Locating the problem

When we noticed the system slowing down, we immediately checked the slow query log and the processlist, and found that slow queries had soared to more than 1,000 per minute while a large number of queries were executing at the same time.

Because the top priority is to get the system back to normal operation as quickly as possible, the most direct measure is to look in the processlist results for queries that are in a locked state or have been running for a long time, and terminate those sessions with the kill command. By continuously killing the processes that may be blocking the system, service can be temporarily restored, though of course this is only a stopgap measure.
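
One common trick (a sketch; the 60-second threshold is an arbitrary example) is to let MySQL generate the kill statements for you, then review and run them:

select concat('kill ', id, ';') from information_schema.processlist where command <> 'Sleep' and time > 60;

Each generated kill <id>; statement terminates one session; kill query <id>; aborts only the running statement while keeping the connection open.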

Beyond that, the most important thing is of course to analyze which queries are blocking the system. The slow query log is again the tool for this.

There are several important fields in the slow query table:

start_time: the time the query started. Match this against the time the system went wrong to locate which queries are the culprits.

query_time: how long the query took to execute.

rows_sent and rows_examined: the number of rows returned and the number of rows scanned by the query. These are particularly important, especially rows_examined, which basically tells us which queries are the "big" ones to watch out for.

In practice, we went through the queries with a large rows_examined one by one, added indexes and rewrote the query statements, which solved the problem for good.
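
A convenient way to surface those queries (a sketch; adjust the date to your incident window) is to rank the slow log by rows_examined:

select start_time, query_time, rows_sent, rows_examined, db, sql_text from mysql.slow_log where start_time > '2022-11-01 00:00:00' order by rows_examined desc limit 20;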

Reflecting on the causes of the problem, there are several points worth paying attention to:

1. Database problems often do not appear immediately after a release; they are cumulative. Bad query statements pile up and gradually increase the system load, and the last straw that breaks the camel's back often seems inexplicable.

2. That last straw may not even be identifiable at all: it is not a release or a feature launch, but a blow-up as the number of users grows and the amount of data accumulates.

3. Since problems build up cumulatively, a review of query patterns is necessary before each code release.

4. Adding the right indexes is very important (see the sketch after this list).

5. Slow queries also need to be included in the scope of regular monitoring.
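
As a sketch of what that index work looks like in practice (the orders table and created_at column here are hypothetical, not taken from the incident above), use explain to confirm a full scan and then add an index on the filter column:

explain select * from orders where created_at > '2022-11-01';

alter table orders add index idx_created_at (created_at);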


Origin blog.csdn.net/bobocqu/article/details/127626910