Reprint: Eight ways to optimize a MySQL database (a classic worth reading)

1. Select the most applicable field attribute

MySQL handles large volumes of data well, but in general, the smaller the tables in a database, the faster queries against them will run. Therefore, to get better performance, we should make the fields in a table as narrow as possible when creating it.

For example, when defining a postal-code field, setting it to CHAR(255) obviously adds unnecessary space to the database, and even VARCHAR is redundant, because CHAR(6) does the job perfectly well. Similarly, where possible, we should use MEDIUMINT instead of BIGINT to define integer fields.

Another way to improve efficiency is to declare fields NOT NULL whenever possible, so that the database does not have to compare NULL values when executing queries later.
For some text fields, such as "province" or "gender", we can define them as the ENUM type. In MySQL, ENUM values are treated as numeric data, and numeric data is processed much faster than text. In this way, we can improve the performance of the database further.
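A minimal sketch of these choices applied together, using a hypothetical customer table (the table and column names are illustrative, not from the original post):

```sql
-- Narrow fixed-width columns, NOT NULL where possible, MEDIUMINT
-- instead of BIGINT, and ENUM for small closed sets of values.
CREATE TABLE customer (
    CustomerID MEDIUMINT UNSIGNED NOT NULL,       -- MEDIUMINT, not BIGINT
    PostalCode CHAR(6) NOT NULL,                  -- CHAR(6), not CHAR(255)
    Gender     ENUM('male', 'female') NOT NULL,   -- stored internally as a number
    Province   ENUM('Beijing', 'Shanghai', 'Guangdong') NOT NULL,
    PRIMARY KEY (CustomerID)
);
```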

2. Use join (JOIN) instead of sub-queries (Sub-Queries)

MySQL has supported subqueries in SQL since version 4.1. This technique uses a SELECT statement to produce a single-column result set, which can then be used as a filter condition in another query. For example, if we want to delete customers who have no orders from the customer table, we can use a subquery to first extract the IDs of all customers who placed orders from the sales table, and then pass the result to the main query, as shown below:

DELETE FROM customerinfo
WHERE CustomerID NOT IN (SELECT CustomerID FROM salesinfo);

Subqueries let you complete in one step many SQL operations that would logically require several steps, can avoid transactions or table locks, and are easy to write. In some cases, however, a subquery can be replaced by a more efficient join (JOIN). For example, suppose we want to fetch all users who have no order records; this can be done with the following query:

SELECT * FROM customerinfo
WHERE CustomerID NOT IN (SELECT CustomerID FROM salesinfo);

If a join (JOIN) is used for this query instead, it will be much faster, especially when there is an index on CustomerID in the salesinfo table. The query looks like this:

SELECT * FROM customerinfo
LEFT JOIN salesinfo ON customerinfo.CustomerID = salesinfo.CustomerID
WHERE salesinfo.CustomerID IS NULL;

The JOIN is more efficient because MySQL does not need to create a temporary table in memory to perform this logically two-step query.

3. Use UNION instead of manually created temporary tables

MySQL has supported UNION queries since version 4.0; a UNION combines two or more SELECT queries that would otherwise need temporary tables into a single query. At the end of the client's query session, the temporary table is deleted automatically, keeping the database tidy and efficient. To build such a query, we just use the UNION keyword to connect multiple SELECT statements. Note that the number of fields in all the SELECT statements must be the same. The following example demonstrates a UNION query.

SELECT Name, Phone FROM client
UNION
SELECT Name, BirthDate FROM author
UNION
SELECT Name, Supplier FROM product;

4. Transactions

Although we can build a wide variety of queries with subqueries, JOINs, and UNIONs, not every database operation can be completed with only one or a few SQL statements. More often, a series of statements is needed to finish a piece of work. In that case, if one statement in the block fails, the outcome of the whole block becomes indeterminate. Imagine inserting related data into two tables at once: after the first table is updated successfully, the database hits an unexpected problem and the operation on the second table cannot complete. The data is then incomplete, and the database may even be corrupted. To avoid this situation, use a transaction, which guarantees that the statements in the block either all succeed or all fail as a unit. In other words, the consistency and integrity of the data in the database can be maintained. A transaction starts with the BEGIN keyword and ends with the COMMIT keyword. If any SQL statement in between fails, the ROLLBACK command restores the database to the state it was in before BEGIN.

BEGIN;
INSERT INTO salesinfo SET CustomerID = 14;
UPDATE inventory SET Quantity = 11 WHERE item = 'book';
COMMIT;

Another important role of transactions is that when multiple users work on the same data source at the same time, they can lock the database to give each user a safe way to access it, ensuring that one user's operations are not interfered with by other users.
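The text above mentions ROLLBACK, but the example only shows the success path. A sketch of the failure path, using the same tables as the example:

```sql
BEGIN;
INSERT INTO salesinfo SET CustomerID = 14;
-- If the next statement fails (say, the inventory row is missing or a
-- constraint is violated), undo the INSERT as well:
ROLLBACK;
-- The database is now back in the state it was in before BEGIN.
```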

5. Locking tables

Although transactions are a very good way to maintain database integrity, their exclusiveness can sometimes hurt database performance, especially in very large application systems. Because the database is locked while a transaction executes, other user requests can only wait until the transaction ends. If a database system is used by only a few users, the impact of transactions is not a big problem; but suppose thousands of users are accessing the same database system at once, as on an e-commerce website, and severe response delays will result.

In fact, in some cases we can get better performance by locking the tables. The following example uses table locking to accomplish what the transaction in the previous example did.

LOCK TABLES inventory WRITE;
SELECT Quantity FROM inventory WHERE Item = 'book';
UPDATE inventory SET Quantity = 11 WHERE Item = 'book';
UNLOCK TABLES;

Here, a SELECT statement fetches the initial data; after some calculation, an UPDATE statement writes the new value back to the table. A LOCK TABLES statement with the WRITE keyword ensures that no other access can insert, update, or delete rows in inventory before the UNLOCK TABLES command is executed.

6. Use foreign keys

Locking tables can maintain the integrity of the data, but it cannot guarantee the relationships between data. For that we can use foreign keys.

For example, a foreign key can ensure that every sales record points to an existing customer. Here, the foreign key maps the CustomerID column in the salesinfo table to the CustomerID column in the customerinfo table; any record without a valid CustomerID cannot be updated in or inserted into salesinfo.

CREATE TABLE customerinfo (
    CustomerID INT NOT NULL,
    PRIMARY KEY (CustomerID)
) TYPE = INNODB;

CREATE TABLE salesinfo (
    SalesID INT NOT NULL,
    CustomerID INT NOT NULL,
    PRIMARY KEY (CustomerID, SalesID),
    FOREIGN KEY (CustomerID) REFERENCES customerinfo (CustomerID) ON DELETE CASCADE
) TYPE = INNODB;
Note the "ON DELETE CASCADE" clause in the example. It ensures that when a customer record is deleted from the customerinfo table, all of that customer's records in the salesinfo table are deleted automatically as well. To use foreign keys in MySQL, remember to define the tables as the transaction-safe InnoDB type when creating them; this is not the default table type. It is specified by adding TYPE = INNODB to the CREATE TABLE statement, as shown in the example. (In modern MySQL versions this is written ENGINE = InnoDB.)

7. Use indexes

Indexing is a common way to improve database performance; it lets the database server retrieve specific rows much faster than it could without an index, and the improvement is especially noticeable when the query involves MAX(), MIN(), or ORDER BY.

Which fields should be indexed?

In general, indexes should be built on the fields used in JOIN, WHERE, and ORDER BY clauses. Try not to index fields that contain many duplicate values; an ENUM field, for example, is very likely to do so.

Indexing a field like "province" in customerinfo will not help; on the contrary, it may even reduce database performance. We can create appropriate indexes when creating the table, or add them later with ALTER TABLE or CREATE INDEX. In addition, MySQL has supported full-text indexing and searching since version 3.23.23. A full-text index is a FULLTEXT-type index in MySQL, but it can only be used on MyISAM tables. For a large database, loading the data into a table without a FULLTEXT index and then creating the index with ALTER TABLE or CREATE INDEX is very fast; loading the data into a table that already has a FULLTEXT index is very slow.
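A sketch of the ways to create an index mentioned above, using the salesinfo table from the earlier examples (the index name is illustrative; the three statements are alternatives, not steps to run together):

```sql
-- 1. At table-creation time:
CREATE TABLE salesinfo (
    SalesID INT NOT NULL,
    CustomerID INT NOT NULL,
    PRIMARY KEY (SalesID),
    INDEX idx_customer (CustomerID)  -- indexed because it is used in JOINs
);

-- 2. Or added later with ALTER TABLE:
ALTER TABLE salesinfo ADD INDEX idx_customer (CustomerID);

-- 3. Or with CREATE INDEX:
CREATE INDEX idx_customer ON salesinfo (CustomerID);
```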

8. Optimize query statements

In most cases, using an index improves query speed, but if the SQL statement is written carelessly, the index cannot play its part.

Here are a few aspects that should be noted.

First, it's best to compare operations between fields of the same type.

This was even a required condition before MySQL version 3.23. For example, an indexed INT field cannot be compared with a BIGINT field; as a special case, however, CHAR and VARCHAR fields can be compared when their sizes are the same.

Second, try not to use functions to operate on indexed fields.

For example, using the YEAR() function on a DATE field prevents the index from doing its job. So although the following two queries return the same results, the latter is much faster than the former.
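The original post omits the two queries it refers to. A likely reconstruction, assuming a hypothetical orders table with an indexed OrderDate column of type DATE:

```sql
-- Slower: YEAR() must be evaluated for every row, so the index on
-- OrderDate cannot be used:
SELECT * FROM orders WHERE YEAR(OrderDate) = 2001;

-- Faster: a plain range comparison on the bare column lets MySQL use
-- the index on OrderDate:
SELECT * FROM orders
WHERE OrderDate >= '2001-01-01' AND OrderDate < '2002-01-01';
```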

Third, when searching character fields we sometimes use the LIKE keyword and wildcards; this is simple, but it comes at the cost of system performance.
For example, the following query compares every record in the table:

SELECT * FROM books
WHERE name LIKE "MySQL%";

But if you use the following query instead, the result will be the same and the speed much faster:

SELECT * FROM books
WHERE name >= "MySQL" AND name < "MySQM";

Finally, take care to avoid making MySQL perform automatic type conversions in a query, because the conversion process also renders indexes ineffective.
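A sketch of such an implicit conversion, assuming a hypothetical accounts table where account_no is an indexed CHAR(10) column:

```sql
-- Comparing a string column with a numeric literal forces MySQL to
-- convert every row's value to a number, so the index is not used:
SELECT * FROM accounts WHERE account_no = 12345;

-- Quoting the literal keeps the comparison string-to-string, so the
-- index on account_no applies:
SELECT * FROM accounts WHERE account_no = '12345';
```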


Reprinted from:

https://blog.csdn.net/tototuzuoquan/article/details/80089748


