1. LIMIT statement
Pagination is one of the most common query scenarios, and usually also one of the most problematic. For a simple statement like the one below, the typical DBA approach is to add a composite index on the type, name, and create_time columns. The sort can then use the index effectively, and performance improves:
SELECT *
FROM operation
WHERE type = 'SQLStats'
AND name = 'SlowLog'
ORDER BY create_time
LIMIT 1000, 10;
That settles the matter for perhaps 90% of DBAs. But when the LIMIT clause becomes "LIMIT 1000000, 10", programmers still complain: I am only fetching 10 records, so why is it slow?
The database has no way of knowing where the 1,000,000th record starts; even with the index, it still has to count through the entries from the beginning and discard them. In most cases this performance problem comes down to programmer laziness. When paging through data in a front end, or exporting data in large batches, the maximum value seen on the previous page can be passed along as a query parameter instead. The SQL can be redesigned as follows:
SELECT *
FROM operation
WHERE type = 'SQLStats'
AND name = 'SlowLog'
AND create_time > '2017-03-16 14:00:00'
ORDER BY create_time
LIMIT 10;
With the new design, query time is essentially constant and no longer grows with the amount of data.
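One caveat with this rewrite: if create_time is not unique, rows that share the boundary timestamp can be skipped or duplicated between pages. A common fix is to page on a composite cursor using MySQL's row-constructor comparison; the sketch below assumes a hypothetical id primary-key column as the tiebreaker, and the literal values are illustrative.

```sql
SELECT *
FROM operation
WHERE type = 'SQLStats'
  AND name = 'SlowLog'
  -- (create_time, id) of the last row on the previous page
  AND (create_time, id) > ('2017-03-16 14:00:00', 12345)
ORDER BY create_time, id
LIMIT 10;
```

For the comparison to stay index-friendly, the composite index would need to end in (create_time, id).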
2. Implicit conversion
A mismatch between the type of a query parameter and the type of the column it is compared against is another common mistake. Take the following statement:
mysql> explain extended SELECT *
    -> FROM my_balance b
    -> WHERE b.bpn = 14000000123
    -> AND b.isverified IS NULL;
mysql> show warnings;
| Warning | 1739 | Cannot use ref access on index 'bpn' due to type or collation conversion on field 'bpn'
Here the bpn column is defined as varchar(20), and MySQL's strategy is to convert the strings to numbers before comparing. The conversion amounts to applying a function to the table column, which defeats the index.
Such parameters may be filled in automatically by the application framework rather than reflecting the programmer's intent. Many of today's frameworks are very complicated: convenient to use, but be careful they don't quietly dig a hole for you.
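The fix is simply to pass the value with the type the column declares. A sketch (the literal value is taken from the example above):

```sql
-- bpn is varchar(20); comparing against a string literal lets MySQL
-- use ref access on the bpn index instead of casting every row.
SELECT *
FROM my_balance b
WHERE b.bpn = '14000000123'
  AND b.isverified IS NULL;
```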
3. UPDATE and DELETE with JOIN
Although MySQL 5.6 introduced subquery materialization, note that it currently optimizes SELECT statements only. UPDATE and DELETE statements that rely on subqueries need to be rewritten by hand as JOINs.
For example, in the following UPDATE statement, MySQL actually executes the subquery as a correlated loop (DEPENDENT SUBQUERY), so the execution time can be imagined.
UPDATE operation o
SET status = 'applying'
WHERE o.id IN (SELECT id
FROM (SELECT o.id,
o.status
FROM operation o
WHERE o.group = 123
AND o.status NOT IN ( 'done' )
ORDER BY o.parent,
o.id
LIMIT 1) t);
Execution plan:
+----+--------------------+-------+-------+---------------+---------+---------+-------+------+-----------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+--------------------+-------+-------+---------------+---------+---------+-------+------+-----------------------------------------------------+
| 1 | PRIMARY | o | index | | PRIMARY | 8 | | 24 | Using where; Using temporary |
| 2 | DEPENDENT SUBQUERY | | | | | | | | Impossible WHERE noticed after reading const tables |
| 3 | DERIVED | o | ref | idx_2,idx_5 | idx_5 | 8 | const | 1 | Using where; Using filesort |
+----+--------------------+-------+-------+---------------+---------+---------+-------+------+-----------------------------------------------------+
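The text says to rewrite such statements as JOINs but never shows the result; one possible rewrite of the UPDATE above is sketched here, keeping the original table and column names. The derived table preserves the LIMIT (MySQL does not allow LIMIT directly inside an IN subquery), while the JOIN avoids the dependent-subquery execution.

```sql
UPDATE operation o
       JOIN (SELECT o.id
             FROM operation o
             WHERE o.group = 123
               AND o.status NOT IN ( 'done' )
             ORDER BY o.parent,
                      o.id
             LIMIT 1) t
         ON o.id = t.id
SET    o.status = 'applying';
```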
6. Condition pushdown
When the query semantics make it clear that an outer filter condition can be pushed down into an aggregating subquery, pushing it down lets the index do the work. For instance, a query that groups the whole table by target in a derived table and only afterwards filters on target = 'rm-xxxx' can be rewritten as follows:
SELECT target,
Count(*)
FROM operation
WHERE target = 'rm-xxxx'
GROUP BY target
The execution plan becomes:
+----+-------------+-----------+------+---------------+-------+---------+-------+------+--------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-----------+------+---------------+-------+---------+-------+------+--------------------+
| 1 | SIMPLE | operation | ref | idx_4 | idx_4 | 514 | const | 1 | Using where; Using index |
+----+-------------+-----------+------+---------------+-------+---------+-------+------+--------------------+
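Note that the pushdown is only safe because the filter is on the grouping column itself. A condition on the aggregate value cannot be pushed into WHERE; it belongs in HAVING. A sketch (the threshold is illustrative):

```sql
SELECT target,
       Count(*) AS cnt
FROM operation
GROUP BY target
HAVING Count(*) > 100;  -- filters groups after aggregation
```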
7. Narrow the range early
Start with the original SQL statement:
SELECT *
FROM my_order o
LEFT JOIN my_userinfo u
ON o.uid = u.uid
LEFT JOIN my_productinfo p
ON o.pid = p.pid
WHERE ( o.display = 0 )
AND ( o.ostaus = 1 )
ORDER BY o.selltime DESC
LIMIT 0, 15
The intent of the statement is: LEFT JOIN everything first, then sort and take the first 15 records. The execution plan confirms it: the final sort step operates on an estimated 900,000 rows and takes about 12 seconds.
+----+-------------+-------+--------+---------------+---------+---------+-----------------+--------+----------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+--------+---------------+---------+---------+-----------------+--------+----------------------------------------------------+
| 1 | SIMPLE | o | ALL | NULL | NULL | NULL | NULL | 909119 | Using where; Using temporary; Using filesort |
| 1 | SIMPLE | u | eq_ref | PRIMARY | PRIMARY | 4 | o.uid | 1 | NULL |
| 1 | SIMPLE | p | ALL | PRIMARY | NULL | NULL | NULL | 6 | Using where; Using join buffer (Block Nested Loop) |
+----+-------------+-------+--------+---------------+---------+---------+-----------------+--------+----------------------------------------------------+
Since both the WHERE conditions and the ORDER BY apply only to the leftmost driving table, my_order can be sorted and limited first to shrink the data set before the LEFT JOINs. With the rewrite below, execution time drops to about 1 millisecond.
SELECT *
FROM (
SELECT *
FROM my_order o
WHERE ( o.display = 0 )
AND ( o.ostaus = 1 )
ORDER BY o.selltime DESC
LIMIT 0, 15
) o
LEFT JOIN my_userinfo u
ON o.uid = u.uid
LEFT JOIN my_productinfo p
ON o.pid = p.pid
ORDER BY o.selltime DESC
LIMIT 0, 15
Re-examining the execution plan: the subquery is materialized (select_type = DERIVED) and then participates in the JOINs. Although the estimate still shows about 900,000 rows scanned, thanks to the index and the LIMIT clause the actual execution time becomes very small.
+----+-------------+------------+--------+---------------+---------+---------+-------+--------+----------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+------------+--------+---------------+---------+---------+-------+--------+----------------------------------------------------+
| 1 | PRIMARY | <derived2> | ALL | NULL | NULL | NULL | NULL | 15 | Using temporary; Using filesort |
| 1 | PRIMARY | u | eq_ref | PRIMARY | PRIMARY | 4 | o.uid | 1 | NULL |
| 1 | PRIMARY | p | ALL | PRIMARY | NULL | NULL | NULL | 6 | Using where; Using join buffer (Block Nested Loop) |
| 2 | DERIVED | o | index | NULL | idx_1 | 5 | NULL | 909112 | Using where |
+----+-------------+------------+--------+---------------+---------+---------+-------+--------+----------------------------------------------------+
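To make the derived query itself cheap, an index matching its filter and sort would help. A hypothetical definition (index name is illustrative; equality-filtered columns first, then the sort column):

```sql
-- Assumed index: with it, MySQL can find display = 0 AND ostaus = 1
-- rows already ordered by selltime, so ORDER BY ... LIMIT 15 in the
-- derived table touches only a handful of index entries.
ALTER TABLE my_order
  ADD INDEX idx_display_ostaus_selltime (display, ostaus, selltime);
```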
8. Push down intermediate result sets
Next, look at the following example, which has already received an initial round of optimization (in the left join, the driving table applies its filter conditions first):
SELECT a.*,
c.allocated
FROM (
SELECT resourceid
FROM my_distribute d
WHERE isdelete = 0
AND cusmanagercode = '1234567'
ORDER BY salecode LIMIT 20) a
LEFT JOIN
(
SELECT resourcesid, sum(ifnull(allocation, 0) * 12345) allocated
FROM my_resources
GROUP BY resourcesid) c
ON a.resourceid = c.resourcesid
So does this statement have any remaining problems? It is easy to see that subquery c aggregates over the whole table; when the table is very large, it drags down the performance of the entire statement.
In fact, for subquery c, the final left-joined result set only cares about rows whose resourcesid matches a resourceid from the driving table. We can therefore rewrite the statement as follows, and execution time drops from the original 2 seconds to 2 milliseconds.
SELECT a.*,
c.allocated
FROM (
SELECT resourceid
FROM my_distribute d
WHERE isdelete = 0
AND cusmanagercode = '1234567'
ORDER BY salecode LIMIT 20) a
LEFT JOIN
(
SELECT resourcesid, sum(ifnull(allocation, 0) * 12345) allocated
FROM my_resources r,
(
SELECT resourceid
FROM my_distribute d
WHERE isdelete = 0
AND cusmanagercode = '1234567'
ORDER BY salecode LIMIT 20) a
WHERE r.resourcesid = a.resourcesid
GROUP BY resourcesid) c
ON a.resourceid = c.resourcesid
But now the same subquery appears twice in our SQL statement. That not only incurs extra cost, it also makes the whole statement noticeably more complex. Rewriting it once more with a WITH clause (supported by MySQL from version 8.0):
WITH a AS
(
SELECT resourceid
FROM my_distribute d
WHERE isdelete = 0
AND cusmanagercode = '1234567'
ORDER BY salecode LIMIT 20)
SELECT a.*,
c.allocated
FROM a
LEFT JOIN
(
SELECT resourcesid, sum(ifnull(allocation, 0) * 12345) allocated
FROM my_resources r,
a
WHERE r.resourcesid = a.resourcesid
GROUP BY resourcesid) c
ON a.resourceid = c.resourcesid
9. Summary
The database's compiler produces the execution plan, which determines how the SQL is actually executed. But a compiler can only do its best, and no database's compiler is perfect.
Most of the scenarios above cause performance problems in other databases as well. Understanding the characteristics and weak points of the database compiler lets you steer around them and write high-performance SQL.
When designing data models and writing SQL statements, programmers should bring their algorithmic instincts and mindset along.
Develop the habit of using WITH clauses when writing complex SQL. Concise and clear SQL statements also lighten the load on the database.