MySQL query sped up 10,000+ times, and it's this simple

Reference articles:

MySQL Performance Tuning (2): EXPLAIN/DESC

MySQL Performance Tuning (1): The Slow Query Log

 

1. The problem

Have you ever seen a join query that takes 3300+ seconds to run? Today our slow query log captured one. Let's dissect it and see where the evil lies.
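As background, a slow query log like the one that caught this statement can be enabled with settings along these lines (a sketch; the 2-second threshold here is an illustrative assumption, not taken from this article):

```sql
-- Enable the slow query log at runtime (illustrative values)
SET GLOBAL slow_query_log = 'ON';
-- Log any statement that runs longer than 2 seconds
SET GLOBAL long_query_time = 2;
-- Check whether logging is on and where the log file is written
SHOW VARIABLES LIKE 'slow_query%';
```

These `SET GLOBAL` changes affect new connections and do not survive a restart; for a permanent setting, the same options go in the server's configuration file.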

 

2. Analysis and optimization

1. First, let's look at the culprit's execution plan

EXPLAIN
SELECT 
        r.id
        ,si.set_id
        ,m.project_id 应添加项目列表
FROM report r 
        INNER JOIN application a ON r.app_id=a.id
        INNER JOIN application_sample s ON a.id=s.app_id
        INNER JOIN application_sample_item si ON s.id=si.sample_id                            
        INNER JOIN set_project_mapping m ON si.set_id=m.set_id
WHERE r.org_id =54 AND r.report_status=2 AND r.del=0 AND r.barcode <> '' 
        AND r.add_date BETWEEN '2020-11-01' AND '2020-11-02' 
        AND a.del=0 AND a.application_status=4;

From the execution plan, we can see that two tables, m and si, use full table scans. Based on the row estimates in the plan, the Cartesian product of those two scans already reaches 60 million+ rows; joining that intermediate result with the remaining tables makes the cost explode.

 

2. At this point, we can be fairly sure the problem lies in the full scans of those two tables.

So we try to optimize them by adding indexes on their join columns:

ALTER TABLE `set_project_mapping` ADD INDEX set_id( `set_id` ) ;
ALTER TABLE `application_sample_item` ADD INDEX index_sample( `sample_id` ) ;
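As a quick sanity check (not part of the original article), we can confirm the new indexes exist before re-running the query:

```sql
-- List the indexes on both tables; the new set_id and index_sample
-- entries should appear in the Key_name column of the output
SHOW INDEX FROM set_project_mapping;
SHOW INDEX FROM application_sample_item;
```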

 

Then we re-execute the query to see whether it improves.

The query time dropped from 3300+ seconds to 2.829 s, already a very significant improvement.
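As an aside, on MySQL 8.0.18 or later (newer than what this article appears to use), EXPLAIN ANALYZE actually executes the statement and reports per-step row counts and timings, which makes this kind of before/after comparison more precise:

```sql
-- MySQL 8.0.18+ only: runs the query and reports actual
-- per-operator row counts and timings, not just estimates
EXPLAIN ANALYZE
SELECT r.id, si.set_id, m.project_id
FROM report r
        INNER JOIN application a ON r.app_id=a.id
        INNER JOIN application_sample s ON a.id=s.app_id
        INNER JOIN application_sample_item si ON s.id=si.sample_id
        INNER JOIN set_project_mapping m ON si.set_id=m.set_id
WHERE r.org_id=54 AND r.report_status=2 AND r.del=0 AND r.barcode <> ''
        AND r.add_date BETWEEN '2020-11-01' AND '2020-11-02'
        AND a.del=0 AND a.application_status=4;
```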

 

3. However, per product requirements and our slow-query threshold, anything over 2 s still counts as a slow query, so we need to keep optimizing.

Look at the execution plan

From the execution plan, we can see that the primary-key lookup on table s used previously is gone. Why did the primary-key index stop being used?

Comparing the two plans carefully, we notice the join order has changed. Before, table si was read first, and table s was then looked up directly by its primary key id via si.sample_id; now table s is read first, so the primary-key lookup no longer applies.

We can run SHOW WARNINGS; right after EXPLAIN to view the statement as rewritten by the MySQL optimizer.

To our surprise, the join columns in the rewritten SQL differ from what we wrote: when reading table s, the optimizer joins it directly to table r, and it is actually app_id that is used.
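The procedure is: run EXPLAIN on the statement, then SHOW WARNINGS in the same session. The Note-level message contains the statement as reconstructed by the optimizer (the exact rewritten text varies by server version):

```sql
-- Run in one session: EXPLAIN first, then SHOW WARNINGS immediately after
EXPLAIN
SELECT r.id, si.set_id, m.project_id
FROM report r
        INNER JOIN application a ON r.app_id=a.id
        INNER JOIN application_sample s ON a.id=s.app_id
        INNER JOIN application_sample_item si ON s.id=si.sample_id
        INNER JOIN set_project_mapping m ON si.set_id=m.set_id
WHERE r.org_id=54 AND r.report_status=2 AND r.del=0 AND r.barcode <> ''
        AND r.add_date BETWEEN '2020-11-01' AND '2020-11-02'
        AND a.del=0 AND a.application_status=4;

-- The Note row holds the optimizer-rewritten statement; in our case
-- it shows s being joined to r on app_id directly
SHOW WARNINGS;
```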

Therefore, we add an ordinary secondary index on app_id of application_sample:

ALTER TABLE `application_sample` ADD INDEX index_app( `app_id` ) ;

 

We also rewrite our SQL to follow the join order the optimizer chose:

SELECT 
        r.id
        ,si.set_id
        ,m.project_id 应添加项目列表
FROM report r 
        INNER JOIN application a ON r.app_id=a.id
        INNER JOIN application_sample s ON r.app_id=s.app_id
        INNER JOIN application_sample_item si ON si.sample_id=s.id                            
        INNER JOIN set_project_mapping m ON si.set_id=m.set_id
WHERE r.org_id =54 AND r.report_status=2 AND r.del=0 AND r.barcode <> '' 
        AND r.add_date BETWEEN '2020-11-01' AND '2020-11-02' 
        AND a.del=0 AND a.application_status=4
;

 

4. Looking at the execution plan again, the joins now use ref lookups on the ordinary indexes.

 

Executing the query again, it now completes in 0.203 s.

 

At this point our goal is achieved: the query time has dropped from 3300+ s to about 0.2 s.

Admittedly, the 10,000× figure owes much to how extreme the starting point was, but it is not hard to see that appropriate indexes still bring a huge performance improvement.

Source: blog.csdn.net/kk_gods/article/details/112276809