Ten Years of Java Moving Bricks: MySQL Optimization Advanced Knowledge, Part 1

Knowledge you should have for MySQL tuning

  1. MySQL fundamentals: understand the MySQL architecture, data types, indexes, query statements, and other basic concepts, and how to work with them.

  2. SQL language: be fluent in SQL, including writing, optimizing, and debugging query statements.

  3. Database design: understand database design principles and normal forms, and be able to design efficient database structures.

  4. Index optimization: master how indexes work and how to use them, create appropriate indexes for the queries at hand, and tune and adjust existing indexes.

  5. Query optimization: understand how query execution plans are generated, analyze query performance bottlenecks, and optimize query statements and table structures to improve query efficiency.

  6. Database performance tuning: master the methods and tools of performance tuning, and be able to locate and resolve database performance problems such as slow queries and deadlocks.

  7. Database monitoring and management: know the monitoring and management tools, watch the database's running state and performance metrics, and make timely adjustments.
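As a small illustration of points 4 and 5 above, here is a hedged sketch of creating an index for a query and checking with EXPLAIN whether the optimizer uses it. The table and column names (`orders`, `customer_id`, `created_at`) are hypothetical, not from the article:

```sql
-- Hypothetical table used only for illustration.
CREATE TABLE orders (
  id          BIGINT PRIMARY KEY AUTO_INCREMENT,
  customer_id BIGINT NOT NULL,
  created_at  DATETIME NOT NULL,
  amount      DECIMAL(10, 2)
);

-- A composite index matching the query's equality + range pattern.
CREATE INDEX idx_orders_customer_created
  ON orders (customer_id, created_at);

-- EXPLAIN shows the plan without executing the query; look for
-- idx_orders_customer_created in the `key` column of the output.
EXPLAIN
SELECT id, amount
FROM   orders
WHERE  customer_id = 42
  AND  created_at >= '2023-01-01';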

MySQL architecture

  1. Client: A client is an application or tool that communicates with a MySQL server. It can be a command line interface, a graphical user interface or a web application.

  2. Connection Manager: The connection manager is responsible for handling the connection between the client and the server. It accepts the client's connection request and assigns the request to the server's thread pool.

  3. Query parser: The query parser parses and validates the SQL statements sent by the client, converting each statement into an internal data structure for subsequent processing.
    (1) Syntax analysis: MySQL parses the SQL statement and checks that it conforms to the SQL grammar.
    (2) Semantic analysis: MySQL then checks the statement against the database schema, for example whether the referenced tables exist and whether the columns match.

  4. Optimizer: The optimizer analyzes the query and generates an execution plan. It weighs factors such as available indexes, join relationships, and filter conditions to improve query performance.
    (3) Query optimization: MySQL optimizes the SQL statement and produces what it estimates to be the best query plan; this includes index selection, join ordering, and placement of filter conditions.

  5. Execution engine: The execution engine runs the plan produced by the optimizer. It reads data through the storage engine, processes it, and returns the result to the client.
    (4) Query execution: MySQL executes the statement according to the chosen plan, reading the data and returning the result set.

  6. Storage engine: The storage engine manages how data is stored and retrieved. MySQL supports multiple storage engines, such as InnoDB, MyISAM, and Memory, each with different characteristics and suitable scenarios.
    (5) Locking and transactions: while executing a statement, MySQL locks the relevant data rows as needed to ensure data consistency and integrity; if transactions are involved, MySQL also manages and controls them.

  7. Log system: The log system records changes to the database, including the transaction logs (such as the redo log and binary log) and the error log. It is used for database recovery and troubleshooting.
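The parse → optimize → execute pipeline above can be observed from the client. A hedged sketch, reusing the hypothetical `orders` table name (any table of yours works the same way):

```sql
-- EXPLAIN shows the plan the optimizer chose, without executing it.
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

-- MySQL 8.0.16+ can print the plan as a tree, which is closer to
-- what the execution engine actually runs.
EXPLAIN FORMAT=TREE SELECT * FROM orders WHERE customer_id = 42;

-- A malformed statement is rejected by syntax analysis before it
-- ever reaches the optimizer:
--   SELEC * FROM orders;          -- ERROR 1064 (syntax error)
-- A well-formed statement on a missing table fails semantic analysis:
--   SELECT * FROM no_such_table;  -- ERROR 1146 (table doesn't exist)
```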

MySQL supports multiple isolation levels to control how concurrent transactions access and modify data. The isolation levels are:

1. Read Uncommitted: The lowest isolation level. It allows a transaction to read data that other transactions have not yet committed, and can therefore cause dirty reads, non-repeatable reads, and phantom reads.

2. Read Committed: A transaction can only read committed data; uncommitted modifications made by other transactions are not visible to the current transaction. Non-repeatable reads and phantom reads are still possible. (In a lock-based implementation, read locks at this level are instantaneous, also called short-lived read locks: a transaction acquires a read lock when it reads data and releases it as soon as the read completes, so other transactions can read the same data concurrently but must acquire a write lock to modify it. Because the read lock is released immediately, other transactions are free to modify data that has already been read, which is what makes non-repeatable reads and phantom reads possible. Note that InnoDB actually implements this level with MVCC: each consistent read sees a fresh snapshot taken at the start of that statement rather than holding read locks.)
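A two-session sketch of the non-repeatable read this level permits. The `accounts` table and its values are hypothetical:

```sql
-- Session A
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
START TRANSACTION;
SELECT balance FROM accounts WHERE id = 1;   -- returns, say, 100

-- Session B (meanwhile)
UPDATE accounts SET balance = 50 WHERE id = 1;
COMMIT;

-- Session A, same transaction, same query:
SELECT balance FROM accounts WHERE id = 1;   -- now returns 50
COMMIT;   -- the two reads disagreed: a non-repeatable read
```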

3. Repeatable Read: The data a transaction reads stays consistent for the entire transaction; modifications committed by other transactions in the meantime are not visible to the current transaction. This solves non-repeatable reads, but phantom reads may still occur. (In a lock-based implementation, read locks at this level are persistent, also called long-lived read locks: a transaction acquires a shared read lock when it reads data and holds it until the transaction ends, so other transactions cannot acquire a write lock on, or modify, the data that has been read. Holding the lock preserves consistency and repeatability, which is what eliminates non-repeatable reads. Note that InnoDB instead implements this level with MVCC: plain SELECTs read from a snapshot established at the transaction's first read and do not block writers; only locking reads such as SELECT ... FOR SHARE or FOR UPDATE actually take row locks.)

4. Serializable: The highest isolation level. Transactions execute as if serially, avoiding all concurrency anomalies: read and write operations in different transactions block each other, which prevents dirty reads, non-repeatable reads, and phantom reads. (The implementation takes shared read locks and exclusive write locks on the data touched; in InnoDB, plain SELECTs are implicitly converted to locking reads at this level.)
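How to inspect and change the isolation level in practice, as a hedged sketch (the system variable is named `transaction_isolation` in MySQL 5.7.20+ and 8.0; older versions use `tx_isolation`):

```sql
-- Inspect the current session's isolation level.
SELECT @@transaction_isolation;

-- Change it for the current session only...
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;

-- ...or for just the next transaction:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
START TRANSACTION;
-- statements here run under SERIALIZABLE
COMMIT;
```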

What is a dirty read: a transaction reads another transaction's uncommitted data; if that transaction is then rolled back, the data that was read never officially existed, i.e. it was dirty data.
What is a non-repeatable read: the same transaction reads the same data twice and gets inconsistent results. The cause is that no read lock (or equivalent) prevented another transaction from modifying the data between the two reads.
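A two-session sketch of a dirty read, using the same hypothetical `accounts` table:

```sql
-- Session B
START TRANSACTION;
UPDATE accounts SET balance = 0 WHERE id = 1;   -- not committed yet

-- Session A
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT balance FROM accounts WHERE id = 1;      -- sees 0: a dirty read

-- Session B
ROLLBACK;   -- the 0 that session A read never officially existed
```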

**Problem:** Under repeatable read, while a transaction that has read some data is still open, can other concurrent transactions read that same data? Yes.
**Problem:** Under repeatable read, while that transaction is still open, can other concurrent transactions modify the data it has read? Under a lock-based implementation, no: the long-lived shared read lock blocks writers until the transaction ends. Under InnoDB's MVCC, however, a plain SELECT takes no locks, so other transactions can modify the data; the open transaction simply continues to see its own snapshot. Only a locking read (SELECT ... FOR SHARE / FOR UPDATE) actually blocks the modification.
**Problem:** Under repeatable read, while that transaction is still open, can other concurrent transactions insert rows into the same table? Yes, which is why phantom reads remain possible; InnoDB's next-key locks block such inserts only for locking reads, not for plain snapshot reads.
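The second answer can be seen directly in two sessions. A hedged sketch against the hypothetical `accounts` table, under InnoDB's repeatable read:

```sql
-- Session A
START TRANSACTION;
SELECT * FROM accounts WHERE id = 1 FOR SHARE;
-- FOR SHARE is MySQL 8.0 syntax; older versions use
-- SELECT ... LOCK IN SHARE MODE.

-- Session B
UPDATE accounts SET balance = 99 WHERE id = 1;
-- blocks until session A ends (a plain SELECT in A would not block it)

-- Session A
COMMIT;   -- releases the shared lock; session B's UPDATE proceeds
```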


Origin: blog.csdn.net/weixin_43485737/article/details/132413678