Detailed SQL execution process

1. When a user executes an SQL statement on the client, the client sends the statement to the server, where a server process handles it.

2. After the server process receives the SQL information, it allocates the required memory in the Process Global Area (PGA) and stores the related login information there.

3. After the client transmits the SQL statement to the server, the server process parses it. Parsing is carried out on the server side and can be broken down into the following steps.


(1) Query cache

When the server process receives the SQL statement sent by the client, it does not query the database directly. First, the server process converts the characters of the SQL statement into their ASCII-equivalent numeric codes and passes them to a hash function, which returns a hash value. The server process then checks whether the same hash value already exists in the shared pool cache. If it does, the server process executes the statement using the parsed version already cached in the shared pool; this is a soft parse. If it does not exist in the cache, the remaining parsing steps below must be performed; this is a hard parse. Hard parsing usually accounts for about 60% of total SQL execution time, and it is hard parsing that generates the parse tree, the execution plan, and so on. Using the cache therefore improves the query efficiency of SQL statements for two reasons: on the one hand, reading data from memory is much faster than reading it from data files on disk; on the other, a great deal of time is saved by avoiding repeated statement parsing.
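The soft-parse/hard-parse decision can be sketched as a hash lookup over the statement text. This toy Python model is only an illustration: the `shared_pool` dict and `parse` helper are invented names, and SHA-256 stands in for Oracle's internal hash function.

```python
import hashlib

# Toy shared-pool cache illustrating soft vs. hard parse.
shared_pool = {}

def parse(sql: str) -> str:
    """Return 'soft' if a cached parsed version is reused, else 'hard'."""
    # Hash the statement text, as the server process does before probing the cache.
    key = hashlib.sha256(sql.encode()).hexdigest()
    if key in shared_pool:
        return "soft"                                   # reuse the cached plan
    shared_pool[key] = "parse tree + execution plan"    # the expensive hard-parse work
    return "hard"

print(parse("SELECT * FROM emp WHERE id = :1"))   # hard: first time seen
print(parse("SELECT * FROM emp WHERE id = :1"))   # soft: found in the cache
print(parse("select * from emp where id = :1"))   # hard: the text differs byte-for-byte
```

Note that the third call hard-parses even though the statement is logically identical: the hash is computed over the raw text, so casing and spacing matter.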

(2) Syntax check

When no matching SQL statement is found in the cache, the server process checks the validity of the statement, i.e. whether it conforms to SQL syntax rules. If the server process finds that the statement does not conform to the syntax rules, it returns the error to the client. During the syntax check, the table names, column names, and so on contained in the statement are not verified; only the syntax itself is checked.

(3) Semantic check

If the SQL statement passes the syntax check, the server process then analyzes the objects involved in the statement, such as tables, indexes, and views, checking their names and related structures against the data dictionary to confirm that these objects exist in the database. If a table name or column name is inaccurate, the database returns an error message to the client.
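A toy semantic check against a "data dictionary" might look like the following. The dictionary contents and the `semantic_check` helper are invented for illustration; the error texts mirror Oracle's well-known messages but are simplified.

```python
# Invented data dictionary: table name -> set of its column names.
data_dictionary = {"emp": {"id", "name"}, "dept": {"id", "dname"}}

def semantic_check(table: str, columns: set) -> str:
    """Syntax may be valid while the named objects do not exist."""
    if table not in data_dictionary:
        return "error: table or view does not exist"
    if not columns <= data_dictionary[table]:
        return "error: invalid identifier"
    return "ok"

print(semantic_check("emp", {"id"}))       # ok
print(semantic_check("emp", {"salary"}))   # error: invalid identifier
print(semantic_check("emps", {"id"}))      # error: table or view does not exist
```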

(4) Obtain object analysis lock

In order to ensure data consistency and prevent other users from modifying an object while it is being queried, the system locks the objects the query needs.

(5) Confirmation of data access rights

Even after the syntax and semantics are checked, the client may still be unable to obtain the data: the server process also checks whether the connected user has permission to access it. If the user lacks access rights, the client cannot obtain the data. Note that the database server process checks syntax and semantics before it checks access permissions.

(6) Generate optimal execution plan

When the syntax, semantic, and permission checks all pass, the server process optimizes the statement according to certain rules (such as cost-based optimization) and settles on the execution plan with the lowest estimated cost.
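Cost-based optimization reduces, in essence, to estimating a cost per candidate plan and keeping the cheapest. The plans and cost numbers below are invented for illustration and bear no relation to Oracle's actual cost model.

```python
# Invented candidate plans with made-up cost estimates.
candidate_plans = {
    "full table scan": 1200,    # read every block of the table
    "index range scan": 35,     # probe the index, then fetch matching rows
}

# The optimizer keeps the plan with the lowest estimated cost.
best_plan = min(candidate_plans, key=candidate_plans.get)
print(best_plan)   # index range scan
```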

4. Bind variable assignment

If bind variables are used in the SQL statement, the server scans the bind-variable declarations, assigns values to the bind variables, and substitutes those values into the execution plan.
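Bind variables matter for the soft-parse step described earlier: with literals, every distinct value produces distinct statement text and therefore a distinct hash, forcing repeated hard parses. A toy demonstration (the `statement_key` helper is invented, and SHA-256 stands in for Oracle's hash):

```python
import hashlib

def statement_key(sql: str) -> str:
    # Different statement text => different hash => no cached plan to reuse.
    return hashlib.sha256(sql.encode()).hexdigest()

# With literals, every id value yields a distinct statement and a distinct key:
literal_keys = {statement_key(f"SELECT * FROM emp WHERE id = {i}") for i in range(100)}

# With a bind variable, the text is constant; the value is supplied at execution:
bind_keys = {statement_key("SELECT * FROM emp WHERE id = :id") for _ in range(100)}

print(len(literal_keys))  # 100 distinct statements -> 100 hard parses
print(len(bind_keys))     # 1 statement -> 1 hard parse, then soft parses
```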

5. Statement execution

Statement parsing only interprets the syntax of the SQL statement, so that the server knows exactly what the statement means. The database server process does not actually execute the SQL statement until parsing is complete.

(1) For the SELECT statement:

First, the server process determines whether the required data already exists in the db buffer; if it exists and is usable, it is fetched from there directly.

Second, if the data is not in the buffer, the server process reads the relevant data from the database files and places it into the db buffer (buffer cache).

(2) For insert, delete, update statements:

First, check whether the required data has already been read into the buffer cache. If it is already in the buffer cache, step A below can be skipped and the remaining steps proceed directly.

A If the required data is not in the buffer cache, the server reads the data blocks from the data files into the buffer cache;

B Obtains an exclusive lock on the rows of the table that are to be modified;

C Copies the redo records for the data into the redo log buffer;

D Generates undo data for the modification;

E Modifies the blocks in the db buffer;

F dbwr writes the modifications to the data files;
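The ordering of steps A–F is the important part: redo and undo are generated before the buffer is modified, and dbwr's write to the data files comes last. A minimal sketch (the `update_row` helper and event strings are invented):

```python
def update_row() -> list:
    """Record the A-F sequence for one modification; no real I/O happens."""
    events = []
    events.append("A read block into buffer cache")
    events.append("B take exclusive row lock (TX)")
    events.append("C copy redo record into redo log buffer")
    events.append("D generate undo data")
    events.append("E modify block in db buffer")
    events.append("F dbwr writes dirty block to data file")
    return events

steps = update_row()
# The write-ahead property: redo (C) and undo (D) precede the buffer change (E).
assert steps.index("C copy redo record into redo log buffer") < steps.index("E modify block in db buffer")
assert steps.index("D generate undo data") < steps.index("E modify block in db buffer")
```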

Second, the server reads data from the data file to the db buffer through the following steps:

A First, the server process requests a TM lock on the table (to ensure that other users cannot modify the table's structure during the execution of this transaction). If the TM lock is obtained successfully, it then requests the needed row-level locks (TX locks). Once those are obtained, it begins reading data from the data files.

B Before reading the data, buffer space must be prepared for it. The server process scans the LRU list looking for free db buffers; during the scan, it registers any modified db buffers it finds on the dirty list. If free db buffers and non-dirty buffers are insufficient, dbwr is triggered to write the blocks pointed to by the dirty list out to the data files and free those buffers, making room to cache the newly read data.

C Once enough free buffers have been found, the server process reads from the data files each data block (db block) in which the requested rows reside. (The db block is Oracle's smallest unit of I/O: even if the data you want is only one or a few of the rows in a db block, Oracle still reads all the rows of that block into the db buffer.) The blocks are read into the free areas of the db buffer, or overwrite non-dirty buffers that have been evicted from the LRU list, and are placed at the head of the LRU list. Before a data block can be placed into the db buffer, the latch on the db buffer must first be acquired; the block can be read in only after the latch is obtained. If the data block already exists in the db buffer cache (sometimes called the db buffer or db cache), then even if the server process finds a non-dirty buffered copy of the block with no active transaction and an SCN smaller than its own, it must still request a lock on the record; only if the lock succeeds can the subsequent steps proceed. If it fails, it must wait for the earlier process to release its lock before proceeding (the block at this point is under a TX lock).
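The LRU scan in step B above can be sketched as follows. The `BufferCache` class and its names are invented for illustration and greatly simplify Oracle's real LRU and dirty lists: here, a "dbwr flush" simply marks every dirty buffer clean.

```python
from collections import OrderedDict

class BufferCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.buffers = OrderedDict()   # block_id -> (contents, dirty); LRU end first
        self.flushes = 0               # times "dbwr" was asked to write dirty blocks

    def _make_room(self) -> None:
        # Scan from the LRU end and evict the first non-dirty buffer found.
        for block_id, (_, dirty) in self.buffers.items():
            if not dirty:
                del self.buffers[block_id]
                return
        # No clean buffer: "dbwr" writes the dirty blocks out, making them clean.
        self.flushes += 1
        for block_id, (contents, _) in self.buffers.items():
            self.buffers[block_id] = (contents, False)
        self.buffers.popitem(last=False)   # now safe to evict the LRU block

    def put(self, block_id, contents, dirty=False) -> None:
        if block_id not in self.buffers and len(self.buffers) >= self.capacity:
            self._make_room()
        self.buffers[block_id] = (contents, dirty)
        self.buffers.move_to_end(block_id)   # newly read blocks go to the MRU end

cache = BufferCache(capacity=2)
cache.put(1, "block-1", dirty=True)
cache.put(2, "block-2", dirty=True)
cache.put(3, "block-3")   # no clean buffer available: triggers a flush first
print(cache.flushes)      # 1
```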

Third, record the redo log

A After the data has been read into the db buffer, the server process writes, one by one from the PGA into the redo log buffer, the rowids of the rows affected by the statement, the original and new values to be updated, the SCN, and other information. Before writing to the redo log buffer, it must first request the redo log buffer latch; writing begins only after the latch is obtained.

B When the amount written reaches one third of the redo log buffer's size, or the volume written reaches 1 MB, or more than three seconds have passed, or a checkpoint occurs, or just before dbwr writes, the lgwr process is triggered to write the data in the redo log buffer to the redo log file on disk (a log file sync wait event is generated at this point).

C Once a portion of the redo log buffer has been written to the redo file, the latch held on it is released and that space can be overwritten by subsequent writes; the redo log buffer is used cyclically. Redo files are likewise used cyclically: when one redo file fills up, the lgwr process automatically switches to the next redo file (a log file switch (checkpoint incomplete) wait event may occur at this point). In archivelog mode, the archiver process also writes the contents of the filled redo file to an archive log file (a log file switch (archiving needed) wait event may appear at this point).
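The lgwr trigger conditions in step B above can be collected into a single predicate. The threshold values (one third full, 1 MB, three seconds) come from the text; the function and parameter names are invented for illustration.

```python
def lgwr_should_flush(used_bytes: int, buffer_size: int, seconds_since_flush: float,
                      checkpoint: bool = False, dbwr_about_to_write: bool = False) -> bool:
    """True if any of the documented lgwr triggers has fired."""
    return (used_bytes >= buffer_size / 3        # one third of the buffer used
            or used_bytes >= 1024 * 1024         # 1 MB written
            or seconds_since_flush >= 3          # more than three seconds elapsed
            or checkpoint                        # a checkpoint occurs
            or dbwr_about_to_write)              # dbwr is about to write

print(lgwr_should_flush(100, 3000, 0))    # False: no trigger fired yet
print(lgwr_should_flush(1100, 3000, 0))   # True: over one third of the buffer
print(lgwr_should_flush(100, 3000, 5))    # True: more than three seconds elapsed
```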

Fourth, create undo information for the transaction

A After completing all redo log buffer entries for this transaction, the server process begins to rewrite the transaction list in the header of this db buffer block and writes the SCN into it (initially, the SCN is written into the redo log buffer, not into the db buffer).

B It then copies a snapshot of the block, including its header transaction list and SCN information, into the rollback segment; the information in the rollback segment at this point is called the "pre-image" of the data block. This pre-image is used later for rollback, recovery, and consistent reads. (Rollback segments can be stored in a dedicated rollback tablespace, made up of one or more physical files reserved for rollback, or they can be created in data files of other tablespaces.)

Fifth, write the modifications into the data file

A Rewrites the data content of the db buffer block and writes the rollback segment's address into the block header.

B Places a pointer to the db buffer block on the dirty list. If a row of data is updated multiple times without a commit, there will be multiple "pre-images" in the rollback segment: apart from the first pre-image, which contains the SCN information, the header of every other pre-image holds both its SCN information and the rollback segment address of the pre-image before it (the "before-before-image"). Each update corresponds to one SCN. The server process then creates a pointer to this db buffer block in the dirty list (so that the dbwr process can find the dirty db buffer blocks and write them into the data files). After that, the server process reads the next data block from the data file and repeats the actions performed on the previous block: read in the block, record the redo, create the rollback entry, modify the block, and place it on the dirty list.

C When the length of the dirty list reaches its threshold (usually 25%), the server process notifies dbwr to write out the dirty data, which releases the latches on those db buffers and frees up more free db buffers. The description so far reads as if Oracle reads one data block at a time; in fact, Oracle can read multiple data blocks at once (db_file_multiblock_read_count sets the number of blocks read per I/O).

Sixth, when executing commit

1) commit triggers the lgwr process, but does not force dbwr to release all corresponding db buffer block locks immediately. In other words, even though the transaction has committed, dbwr may still be writing the data blocks touched by this SQL statement for some time afterward. The row locks are not released immediately at commit; they are released only after the dbwr process finishes. This can mean that one user fails to acquire resources that another user has already committed.

2) The interval between the commit and the end of dbwr's write is very short. If power is lost after the commit but before dbwr finishes, the committed data already logically belongs to the data files, yet part of it has not actually been written into them, so a roll-forward is needed. Because the commit has already triggered lgwr, after the instance restarts the smon process will, based on the redo log files, roll forward all the changes that never made it to the data files, completing the work the commit left unfinished (that is, writing the changes into the data files).

3) If power is lost before a commit, the data changed in the db buffer does not belong in the data files, since there was no commit. Because lgwr is triggered before dbwr (there must be a log entry before any data change), every modification dbwr makes to the data files is recorded in the redo log files first. After the instance restarts, the smon process rolls back according to the redo log files.

In fact, smon's roll-forward and rollback are driven by checkpoints. When a full checkpoint occurs, the lgwr process first writes everything in the redo log buffer (including uncommitted redo information) to the redo log files; then the dbwr process writes the committed buffers of the db buffer to the data files (the uncommitted ones are not forced out). The SCN in the headers of the control file and the data files is then updated, indicating that the database is currently consistent. Between two adjacent checkpoints there are many transactions, both committed and uncommitted.
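A minimal sketch of this recovery rule, assuming a simplified redo log in which each record notes whether its transaction committed (the record layout, the `recover` helper, and the literal "pre-image" placeholder are all invented for illustration):

```python
def recover(redo_log: list) -> dict:
    """Replay redo after a crash: roll forward everything, then undo uncommitted work."""
    data_file = {}
    to_roll_back = set()
    for record in redo_log:
        data_file[record["row"]] = record["new_value"]   # roll forward (smon + redo)
        if not record["committed"]:
            to_roll_back.add(record["row"])
    for row in to_roll_back:                             # roll back via undo
        data_file[row] = "pre-image"
    return data_file

log = [
    {"row": "r1", "new_value": "v1", "committed": True},
    {"row": "r2", "new_value": "v2", "committed": False},
]
print(recover(log))   # {'r1': 'v1', 'r2': 'pre-image'}
```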

Seventh, if rollback is executed

The server process uses the transaction list and SCN in the block headers (in the data files and the db buffer) together with the rollback segment addresses to locate the corresponding pre-modification copies in the rollback segment, and uses these original values to undo the modified but uncommitted changes in the current data files. If there are multiple "pre-images", the server process follows the rollback segment address of the "before-before-image" stored in each pre-image's header, until it reaches the earliest pre-image of the same transaction. Once a commit has been issued, the user can no longer roll back; this guarantees that the dbwr work still in flight after the commit will be completed.
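The pre-image chain walk described above can be sketched as a linked list, where each node points at the version before it (the `PreImage` class and its field names are invented for illustration):

```python
class PreImage:
    """One pre-modification copy in the rollback segment."""
    def __init__(self, value, scn, prev=None):
        self.value = value   # the data before the change
        self.scn = scn       # SCN information kept in the pre-image header
        self.prev = prev     # rollback segment address of the "before-before-image"

def rollback(latest: "PreImage") -> "PreImage":
    """Walk the chain back to the earliest pre-image of the transaction."""
    node = latest
    while node.prev is not None:
        node = node.prev
    return node

# Three uncommitted updates to one row produce a chain v2 -> v1 -> v0:
p0 = PreImage("v0", scn=100)
p1 = PreImage("v1", scn=100, prev=p0)
p2 = PreImage("v2", scn=100, prev=p1)
print(rollback(p2).value)   # v0
```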


Origin blog.csdn.net/u014212540/article/details/129421731