Optimizing high CPU usage on a MySQL server

        For the past few days, whenever the system received third-party order status pushes between 7:00 p.m. and 9:00 p.m., responses kept timing out. These days happened to fall during JD.com's 618 sale, and because we partner with JD.com, its orders are pushed to our platform. At first I assumed the order volume was simply too large for the Tomcat server to handle, so I monitored it remotely with top, jmap, and jvisualvm, only to find that the application server's CPU and memory usage were both very low; it clearly was not the bottleneck. I then checked the database server and found that the MySQL server's CPU usage stayed above 300%, so the problem was basically on the MySQL side, and I judged that some statistical SQL statements were missing indexes.

By repeatedly refreshing the SHOW FULL PROCESSLIST command, I could see statements whose state was "Copying to tmp table" or "Sending data", and I started optimizing them one by one, which in practice meant adding an index for each (the data volume used to be small, so nothing ever surfaced; now that it is large, many SQL statements expose problems and need optimizing). After the indexes were added, CPU usage dropped noticeably and basically no longer spiked into the hundreds of percent; I then kept refreshing SHOW FULL PROCESSLIST and continued optimizing. Besides adding indexes, a small number of SQL statements were optimized by changing the code: for example, the data type of a bound parameter must match the column type defined in the database, and in a join the joined columns must match in data type, character set, and length, otherwise the index is very likely not to be used. But the main fix was adding the right indexes.
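The investigation loop described above can be sketched with a few MySQL commands. The table and column names here (`orders`, `status`, `pay_time`, `third_order_no`) are hypothetical placeholders, not from the original system:

```sql
-- Repeatedly inspect running statements; look for states such as
-- "Copying to tmp table" and "Sending data".
SHOW FULL PROCESSLIST;

-- For a suspect statement, check whether an index is used.
-- type = ALL (full table scan) usually means an index is missing.
EXPLAIN SELECT COUNT(*) FROM orders
WHERE status = 3 AND pay_time >= '2018-06-18';

-- Add a matching index; CPU usage should drop once the scan disappears.
ALTER TABLE orders ADD INDEX idx_status_paytime (status, pay_time);

-- For joins, the joined columns must match in type, character set,
-- and length; e.g. joining a VARCHAR(32) third_order_no to a BIGINT
-- column forces an implicit conversion and skips the index.
```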
I also learned one more thing: the MySQL functions NOW() and CURRENT_DATE() change in real time, so MySQL will not put the results of queries that use them into the query cache, which lowers the query cache hit rate.
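A query containing such a non-deterministic function is never stored in the query cache. A common workaround (sketched with a hypothetical `orders.create_date` column; note that the query cache itself was removed in MySQL 8.0) is to have the application substitute a literal:

```sql
-- Not cacheable: CURDATE() makes the statement non-deterministic.
SELECT COUNT(*) FROM orders WHERE create_date = CURDATE();

-- Cacheable: the application substitutes today's date as a literal.
SELECT COUNT(*) FROM orders WHERE create_date = '2018-06-18';
```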

The state values shown by the SHOW FULL PROCESSLIST command are explained clearly at the following URL:

http://www.cnblogs.com/huangye-dream/archive/2013/05/30/3108298.html

The relevant part of the original text is quoted below:

 

State analysis

Sleep state

This usually means a resource has not been released. With a connection pool, the number of sleeping connections should stay roughly constant within a fixed range.

Practical example: the front end did not close its database connections promptly while outputting data (especially when outputting to the user's terminal), so slow network connections left behind a large number of sleeping connections; when the network became abnormal, the database hit too many connections and went down.

Put simply: executing the query usually takes less than 0.01 seconds, while sending the output over the network usually takes about a second or even longer. The connection could have been released after 0.01 seconds, but because the front-end program outputs the result directly without closing the connection first, the connection stays in the Sleep state until the result has finished rendering on the user's desktop!
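Piled-up sleeping connections can be inspected, and in an emergency reaped, from the server side. A sketch; the process id and timeout value are illustrative, and shortening `wait_timeout` is only a stopgap, since the real fix is closing connections in the application:

```sql
-- Count idle connections (Sleep shows up in the Command column).
SELECT COUNT(*) FROM information_schema.PROCESSLIST
WHERE command = 'Sleep';

-- Kill one runaway sleeping connection by its process id.
KILL 12345;

-- Shorten the idle timeout so the server reaps abandoned connections.
SET GLOBAL wait_timeout = 60;
```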

Waiting for net, reading from net, writing to net

An occasional occurrence is fine

If these appear in large numbers, immediately check the state of the network links and the traffic between the database and the front end

Case: a plug-in program read the intranet database in bulk, the intranet switch it ran through quickly saturated, a large number of connections blocked in waiting for net, and the database crashed with too many connections.
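To see how many connections are stuck in these network states, the process list can be aggregated (a sketch; works on MySQL 5.1+, where `information_schema.PROCESSLIST` is available):

```sql
-- Count connections per network-related state.
SELECT state, COUNT(*) AS cnt
FROM information_schema.PROCESSLIST
WHERE state IN ('Waiting for net', 'reading from net', 'writing to net')
GROUP BY state;
```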

Locked state

Some update operation is holding or waiting on a lock

Using InnoDB usually reduces the occurrence of the Locked state, but remember: UPDATE operations must use an index correctly, and even low-frequency updates cannot be neglected, as the affected-result-set example above shows.

In the MyISAM era, Locked was a nightmare for many high-concurrency applications, which is why MySQL officially began leaning toward recommending InnoDB.
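An UPDATE that cannot use an index locks far more rows (or, on MyISAM, the whole table) than one that can. A sketch with a hypothetical `orders` table:

```sql
-- Bad: no index on third_order_no, so InnoDB scans and locks every
-- row it examines (and MyISAM locks the entire table).
UPDATE orders SET status = 5 WHERE third_order_no = 'JD20180618001';

-- Fix: index the column used in the WHERE clause, so the update
-- touches and locks only the matching rows.
ALTER TABLE orders ADD INDEX idx_third_order_no (third_order_no);

-- On MySQL 5.6+ EXPLAIN also works for UPDATE and should now
-- show the index being used.
EXPLAIN UPDATE orders SET status = 5
WHERE third_order_no = 'JD20180618001';
```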

Copy to tmp table

If the indexes and the existing table structure cannot cover the query, MySQL builds a temporary table to satisfy it, which can generate huge, frightening I/O pressure.

Badly written search statements can cause this. For data analysis, or for periodic data-cleaning tasks run in the middle of the night, an occasional occurrence is acceptable; frequent occurrences must be optimized.

Copying to tmp table is usually related to join queries; it is recommended to gradually wean yourself off join queries.
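Whether a query materializes a temporary table is visible in its plan. A sketch, again with a hypothetical `orders` table:

```sql
-- "Using temporary" in the Extra column corresponds to the
-- "Copying to tmp table" state seen in SHOW FULL PROCESSLIST.
EXPLAIN SELECT status, COUNT(*) FROM orders GROUP BY status;

-- If the temporary table cannot fit in memory it spills to disk;
-- these two variables bound the in-memory size.
SHOW VARIABLES LIKE 'tmp_table_size';
SHOW VARIABLES LIKE 'max_heap_table_size';

-- An index on the grouped column lets MySQL read rows in group
-- order and often avoids the temporary table altogether.
ALTER TABLE orders ADD INDEX idx_status (status);
```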

Practical example:

A community database was blocked and they asked for help. Investigation showed multiple database applications and websites on the server; the database of one rarely used small website was running a terrible copy-to-tmp-table operation that overloaded the whole machine's disk I/O and CPU. Killing that operation restored everything.

Sending data

Sending data is not about sending data; don't be misled by the name. It is the phase in which rows are fetched from the physical disk. If the affected result set is large, data has to be pulled from many different parts of the disk.

An occasional connection in this state is fine.

Returning to the affected-result-set issue above: generally, a large number of connections in Sending data means some query's result set is too large, i.e. the query's indexes are not well optimized.

If the SHOW PROCESSLIST output contains many similar SQL statements, all in the Sending data state, optimize that query's indexes, and remember to think in terms of the affected result set.
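When many identical statements sit in Sending data, EXPLAIN usually shows a large rows estimate for them. A sketch with hypothetical table and columns:

```sql
-- type = ALL with rows close to the table size indicates a full scan:
-- every row must be fetched from disk, which surfaces as "Sending data".
EXPLAIN SELECT order_no, amount FROM orders WHERE ship_time IS NULL;

-- Narrow the affected result set with an index on the filter column.
ALTER TABLE orders ADD INDEX idx_ship_time (ship_time);
```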

Storing result to query cache

If this state occurs frequently, analyze it with SET profiling. If its share of the statement's total overhead is too large (even an absolutely small overhead matters when its proportion is high), it means the query cache has become heavily fragmented.

Run FLUSH QUERY CACHE to defragment it immediately, or set that up as a scheduled task

The query cache parameters can be tuned as appropriate.
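The cache's fragmentation and hit rate can be read from status counters (a sketch for MySQL 5.7 and earlier; the query cache no longer exists in 8.0):

```sql
-- A high Qcache_free_blocks relative to Qcache_total_blocks means the
-- cache is fragmented; Qcache_hits vs Qcache_inserts gives the hit rate.
SHOW GLOBAL STATUS LIKE 'Qcache%';

-- Defragment the cache in place (RESET QUERY CACHE empties it instead).
FLUSH QUERY CACHE;

-- Size is tunable; setting it to 0 disables the cache entirely.
SHOW VARIABLES LIKE 'query_cache_size';
```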

Freeing items

In theory this should not appear in large numbers; an occasional occurrence is fine

If it appears in large numbers, the hard disk may have a problem, for example being full or damaged.

When I/O pressure is high, Freeing items can also take a long time to execute.

Sorting for …

Similar to Sending data: the result set is too large and the sort condition is not indexed, so the sort has to happen in memory, and a temporary structure may even have to be created for it.
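Whether a sort can be served by an index is again visible in EXPLAIN (hypothetical table and columns):

```sql
-- "Using filesort" in the Extra column means the sort happens in
-- memory (or via a temporary on-disk structure), not via an index.
EXPLAIN SELECT * FROM orders WHERE status = 3 ORDER BY create_time DESC;

-- A composite index matching both the filter and the sort order lets
-- MySQL read rows already sorted, eliminating the filesort.
ALTER TABLE orders ADD INDEX idx_status_ctime (status, create_time);
```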

Other states

There are many other states; look them up when you encounter them. We rarely run into blocking in the other states, so they are not a major concern.
