MySQL Advanced Notes

Advanced SQL

storage engine

The storage engine is the implementation of how data is stored, indexes are built, and data is updated and queried. A storage engine is set per table, not per database, so a storage engine is also called a table type.

Architecture: connection layer, service layer, engine layer, storage layer

  1. When creating a table, specify the storage engine
create table table_name(
	field1 field1_type,
	field2 field2_type,
	...
	fieldn fieldn_type
)ENGINE = INNODB;
  2. View the storage engines supported by the current database
show engines;
  3. Specify the storage engine
-- Create table my_myisam and specify the MyISAM storage engine
create table my_myisam(
	id int,
	name char(10)
)engine = MyISAM;

Storage Engine Features


  • InnoDB

    • introduce

      InnoDB is a general-purpose storage engine that balances high reliability and high performance. Since MySQL 5.5, InnoDB has been the default storage engine

    • features

      • DML operations follow the ACID model and support transactions
      • Row-level locks to improve concurrent access performance
      • Supports FOREIGN KEY constraints to ensure data integrity and correctness
    • document

      • xxx.ibd: xxx is the table name. Each InnoDB table has such a tablespace file, which stores the table's structure (frm in older versions, sdi in MySQL 8), data, and indexes.
      • Parameter: innodb_file_per_table controls whether each table gets its own tablespace file
  • logical storage structure

  • MyISAM

    • introduce
      • MyISAM was the early default storage engine of MySQL
    • features
      • Does not support transactions, does not support foreign keys
      • Table locks are supported, but row locks are not supported
      • fast access
    • document
      • xxx.sdi: store table structure information
      • xxx.MYD: storage data
      • xxx.MYI: storage index
  • Memory

    • introduce
      • The Memory engine stores table data in memory. Because the data is lost on hardware failure or power loss, these tables are only suitable as temporary tables or caches
    • features
      • memory storage
      • hash index (default)
    • document
      • xxx.sdi: store table structure information
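The differences above can be compared hands-on; a minimal sketch (the t_* table names are made up for the demo):

```sql
-- List the engines this server supports and which one is the default
show engines;

-- One demo table per engine (hypothetical names)
create table t_innodb (id int primary key, name varchar(20)) engine = InnoDB;  -- transactions, row locks, foreign keys
create table t_myisam (id int, name varchar(20)) engine = MyISAM;              -- table locks, no transactions
create table t_memory (id int, name varchar(20)) engine = Memory;              -- data held in RAM, hash index by default

-- Verify which engine a table actually uses
show table status like 't_innodb';
```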

Choice of storage engine

When choosing a storage engine, you should choose a suitable storage engine according to the characteristics of the application system. For complex application systems, multiple storage engines can also be selected for combination according to the actual situation

  • InnoDB : MySQL's default storage engine; supports transactions and foreign keys. If the application has high requirements on transactional integrity, needs data consistency under concurrency, and its data operations include many updates and deletes in addition to inserts and queries, then InnoDB is the more suitable choice

  • MyISAM : If the application is mainly read- and insert-oriented, with only a few updates and deletes, and does not require high transactional integrity or concurrency, then this storage engine is very suitable

  • Memory : stores all data in memory, giving fast access; usually used for temporary tables and caches. Its drawbacks are a limit on table size (a table that is too large cannot be kept in memory) and no guarantee of data safety

storage engine application

InnoDB: stores the business system's core data, which requires high transactional and data integrity

MyISAM: stores the business system's non-core data

index

An index is a data structure (ordered) that helps MySQL obtain data efficiently. Besides the data itself, the database system maintains data structures that satisfy specific search algorithms; these structures reference (point to) the data so that advanced search algorithms can run on them. Such a data structure is an index.

Advantages:
  • Improves the efficiency of data retrieval and reduces the database's IO cost
  • Sorting through index columns lowers the cost of sorting and reduces CPU consumption
Disadvantages:
  • Index columns also take up space
  • Indexes greatly improve query efficiency but slow down table updates: INSERT, UPDATE, and DELETE become less efficient

index structure

MySQL indexes are implemented at the storage engine layer; different storage engines have different index structures, mainly the following:

  • B+Tree index: the most common index type; most engines support it
  • Hash index: the underlying data structure is a hash table; only exact matches on index columns are efficient, and range queries are not supported
  • R-Tree (spatial) index: a special index type of the MyISAM engine, mainly for geospatial data types; rarely used
  • Full-text index: quickly matches documents by building an inverted index

Note: The index we usually refer to, if not specified, refers to the index organized by the B+ tree structure.

  • B-Tree (multi-way balanced search tree)

Take a b-tree with a maximum degree (max-degree) of 5 as an example (each node stores up to 4 keys and 5 pointers):


  • B+Tree

Take a b+tree with a maximum degree of 4 as an example:

The difference compared to B-Tree:

  1. All data will appear in the leaf nodes
  2. Leaf nodes form a singly linked list

The MySQL index structure optimizes the classic B+Tree: on top of the original B+Tree, it adds a linked-list pointer to the adjacent leaf node, forming a B+Tree with sequential pointers, which improves interval (range) access performance.

  • Hash

    The hash index is to use a certain hash algorithm to convert the key value into a new hash value, map it to the corresponding slot, and then store it in the hash table

    If two or more key values ​​are mapped to the same slot, they will have a hash conflict, which can be resolved through a linked list

    • features
      • Hash indexes can only be used for equality comparisons (=, in); range queries (between, <, >, ...) are not supported
      • Unable to complete the sort operation using the index
      • Query efficiency is high; usually only one lookup is needed (absent hash collisions), which is faster than a B+Tree index
    • Storage engine support
      • In MySQL, the Memory engine supports hash indexes, while InnoDB has an adaptive hash index: under certain conditions the storage engine automatically builds a hash index on top of the B+Tree index.
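Whether the adaptive hash index is enabled can be checked directly; the Memory-engine table below is a made-up example showing its default hash index:

```sql
-- InnoDB's adaptive hash index is governed by a server variable
show variables like 'adaptive_hash_index';

-- In the Memory engine an index is hash by default; btree can be requested explicitly
create table t_mem (
    id int,
    name varchar(20),
    index idx_name (name)            -- hash index by default in Memory
) engine = Memory;
show index from t_mem;               -- the Index_type column shows HASH
```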

think

Why InnoDB storage engine chooses to use B+Tree index structure

  1. Compared with the binary tree, it has fewer levels and higher search efficiency
  2. For B-Tree, whether it is a leaf node or a non-leaf node, data will be saved, which will reduce the key value stored in a page, and the pointer will decrease accordingly. To save a large amount of data, the height of the tree can only be increased, resulting in performance degradation
  3. Compared with hash index, B+Tree supports range matching and sorting operations

index

An index is a data structure that efficiently retrieves data

Classification

  • Primary key index: an index created on the table's primary key; created automatically by default, only one per table; keyword PRIMARY
  • Unique index: avoids duplicate values in a column of the table; a table can have multiple; keyword UNIQUE
  • Regular index: quickly locates specific data; a table can have multiple
  • Full-text index: searches for keywords in text rather than comparing index values; a table can have multiple; keyword FULLTEXT

In the InnoDB engine, according to the storage form of the index, it can be divided into the following two types:

  • Clustered index: the data and the index are stored together; the leaf nodes of the index structure hold the row data; a table must have exactly one
  • Secondary index: the data and the index are stored separately; the leaf nodes of the index structure hold the corresponding primary key value; a table can have multiple

Clustered Index Selection Rules

  • If there is a primary key, the primary key index is a clustered index
  • If no primary key exists, the first UNIQUE index will be used as the clustered index
  • If the table has no primary key, or no suitable unique index, InnoDB will automatically generate a rowid as a hidden clustered index

think

-- 1. Which of the following SQL statements executes more efficiently? Why?
select * from user where id = 10;
select * from user where name = 'Arm';
-- Note: id is the primary key; an index exists on the name field
/* The first one: id is the clustered index, so the row is found directly; the name query goes through the secondary index and then needs a back-to-table lookup by primary key. */

How high is the B+Tree of an InnoDB primary key index usually? Assuming a 16KB page, a bigint (8-byte) primary key, and 6-byte page pointers, one non-leaf page holds about 1170 keys; with roughly 1KB rows a leaf page holds about 16 rows, so a tree of height 2 can index about 1171 × 16 ≈ 18,000 rows, and height 3 about 1171 × 1171 × 16 ≈ 21,900,000 rows.

Index syntax

  • create index
create [unique|fulltext] index index_name on table_name(index_col_name,...);
  • view index
show index from table_name;
  • delete index
drop index index_name on table_name;
  • the case
-- 1. name is the user's name field; its values may repeat. Create an index named idx_user_name on it in table tb_user
create index idx_user_name on tb_user(name);
-- 2. The phone field is non-null and unique; create a unique index on it
create unique index idx_user_phone on tb_user(phone);
-- 3. Create a joint index on profession, age, and status
create index idx_user_pro_age_sta on tb_user(profession,age,status);
-- 4. Create a suitable index on email to improve query efficiency
create index idx_user_email on tb_user(email);

SQL performance analysis

  • SQL execution frequency

    After the MySQL client connects successfully, server status information can be viewed with the show [session | global] status command. The following command shows the access frequency of INSERT, UPDATE, DELETE, and SELECT in the current database (seven underscores, one per character of the command name):

    show global status like 'Com_______';
    


  • slow query log

    The slow query log records all SQL statements whose execution time exceeds the specified threshold (long_query_time, unit: seconds, default 10 seconds).

    The slow query log is disabled by default; to enable it, add the following to the MySQL configuration file (/etc/my.cnf):

    # Enable the MySQL slow query log
    slow_query_log = 1
    # Set the slow-query threshold to 2 seconds; any SQL statement running longer is recorded in the slow query log
    long_query_time = 2
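Assuming a standard MySQL server, the same switches can be inspected and changed at runtime without editing my.cnf:

```sql
-- Check whether the slow query log is on and where it is written
show variables like 'slow_query_log%';
select @@long_query_time;

-- Enable at runtime (lost on restart; put it in my.cnf to persist)
set global slow_query_log = 1;
set global long_query_time = 2;
```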
    
  • profile details

show profiles helps us see where time is spent when doing SQL optimization. The have_profiling parameter shows whether the current MySQL build supports profile operations:

select @@have_profiling;

Profiling is disabled by default; it can be enabled at the session or global level with the set statement:

set profiling = 1;

Execute a series of business SQL operations, and then view the execution time of the instructions through the following instructions:

-- View the basic time cost of every SQL statement
show profiles;
-- View the time spent in each stage of the SQL statement with the given query_id
show profile for query query_id;
-- View CPU usage of the SQL statement with the given query_id
show profile cpu for query query_id;
  • explain execution plan

    The explain or desc command obtains information on how MySQL executes a select statement, including how tables are joined and in what order during execution

    # Add the keyword explain (or desc) directly before the select statement
    explain select field_list from table_name where condition;
    

    Explain the meaning of each field in the execution plan

    • id: The sequence number obtained by the select query, indicating the order in which the select statement or table is executed in the query (the same id, the execution order is from top to bottom, and the id is different, the larger the value, the earlier the execution)
    • type: indicates the access/join type. From best to worst performance: NULL, system, const, eq_ref, ref, range, index, all
    • possible_keys: displays the indexes that might be applied to this table, one or more
    • key: the index actually used, if it is null, the index is not used
    • key_len: Indicates the number of bytes used in the index. This value is the maximum possible length of the index field, not the actual length used. On the premise of not losing accuracy, the shorter the length, the better.
    • filtered: Indicates the percentage of the number of rows returned to the number of rows to be read, the larger the value of filtered, the better

index use

  • Leftmost prefix rule
    If multiple columns are indexed (joint index), follow the leftmost prefix rule. The leftmost prefix rule means that the query starts from the leftmost column of the index and does not skip columns in the index. If a column is skipped, the index will be partially invalidated.

  • Range query
    In the joint index, a range query (<,>) appears, and the column index on the right side of the range query is invalid

  • Index column operations

    Do not perform operations on indexed columns, otherwise the index will fail

  • String without quotation marks
    If a string-typed column is compared without quotation marks, implicit type conversion occurs and the index becomes invalid

  • fuzzy query

    If only the suffix is fuzzy-matched (e.g. like 'abc%'), the index remains valid; if the prefix is fuzzy-matched (e.g. like '%abc'), the index is invalidated

  • or join condition

    For conditions separated by or, the indexes are used only if the columns on both sides of the or are indexed; if either side lacks an index, none of the involved indexes will be used

  • Data distribution impact
    If MySQL estimates that using the index would be slower than a full table scan, it will not use the index

  • SQL prompt

    SQL prompt is an important means of optimizing the database. Simply put, it is to add some artificial prompts to the SQL statement to achieve the purpose of optimizing the operation.

    # use index: suggest an index to the database
    explain select * from tb_user use index(idx_user_pro) where condition;
    # ignore index: tell the database not to use a given index
    explain select * from tb_user ignore index(idx_user_pro) where condition;
    # force index: force the database to use a given index
    explain select * from tb_user force index(idx_user_pro) where condition;
    
  • Covering index
    Try to use covering indexes (the query uses an index, and all columns that need to be returned can be found in that index) and reduce select *

  • prefix index

    When the field type is a string, sometimes a very long string needs to be indexed, which will make the index larger and waste a lot of disk IO during query, affecting query efficiency. At this time, only part of the prefix of the string can be indexed, which can greatly save the index space and improve the index efficiency

    create index idx_xxxx on table_name(column(n));
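The rules above can be verified with explain; this sketch assumes the tb_user table and the idx_user_pro_age_sta(profession, age, status) and idx_user_phone indexes created in the earlier case:

```sql
-- Leftmost prefix: starts at profession, index used
explain select * from tb_user where profession = 'IT' and age = 20;
-- profession skipped: the joint index cannot be used
explain select * from tb_user where age = 20 and status = '0';

-- Range query on age: the status column to its right does not use the index
explain select * from tb_user where profession = 'IT' and age > 20 and status = '0';

-- Operating on an indexed column invalidates the index
explain select * from tb_user where substring(phone, 10, 2) = '15';

-- Fuzzy matching: trailing wildcard keeps the index, leading wildcard loses it
explain select * from tb_user where profession like 'IT%';
explain select * from tb_user where profession like '%IT';

-- Choosing a prefix length: pick the shortest n whose selectivity approaches the full column's
select count(distinct email) / count(*) from tb_user;                    -- full-column selectivity
select count(distinct substring(email, 1, 5)) / count(*) from tb_user;   -- 5-character prefix
create index idx_user_email_5 on tb_user(email(5));
```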
    

Index Design Principles

  1. Create indexes for tables with large amounts of data and frequent queries
  2. Create indexes for fields that are often used as query conditions (where), sorting (order by), and grouping (group by) operations
  3. Try to choose a column with a high degree of discrimination as the index, and try to choose to build a unique index. The higher the degree of discrimination, the higher the efficiency of using the index
  4. If it is a field of string type, the length of the field is long, and a prefix index can be established according to the characteristics of the field
  5. Use joint indexes as much as possible to reduce single-column indexes. When querying, joint indexes can cover indexes in many cases, saving storage space, avoiding returning tables, and improving query efficiency
  6. Control the number of indexes; more is not always better. The more indexes there are, the greater the cost of maintaining the index structures, which hurts the efficiency of inserts, updates, and deletes
  7. If an indexed column cannot contain null values, constrain it with NOT NULL when creating the table. When the optimizer knows whether each column can contain nulls, it can better decide which index is most efficient for a query

SQL optimization

insert data

  • insert optimization

    • batch insert
    insert into tb_test values (1,'tom'),(2,'cat');
    
    • Commit the transaction manually
    start transaction;
    insert into tb_test values (1,'tom'),(2,'cat');
    insert into tb_test values (3,'cer'),(4,'pig');
    commit;
    
    • primary key order insertion
    Out-of-order primary key insertion: 8,1,9,50,44
    Sequential primary key insertion: 1,2,3,4,5,6
    
  • Insert data in bulk

    If you need to insert a large amount of data at one time, the performance of using the insert statement is low. At this time, you can use the load command of the MYSQL database to insert. The operation is as follows:

    # When the client connects to the server, add the --local-infile option
    mysql --local-infile -u root -p
    # Set the global parameter: set global local_infile = 1;
    # Run the load command to load the prepared data into the table
    load data local infile '/root/sql.log' into table tb_user fields terminated by ',' lines terminated by '\n';
    

primary key optimization

  • data organization

    In the InnoDB storage engine, table data is organized and stored in primary key order. This storage method is called an index-organized table.

  • page split

    A page can be empty, half filled, or 100% filled. Each page holds 2 to N rows of data (a row that is too large causes row overflow), arranged by primary key

  • page merge

    When a row is deleted, the record is not physically removed; it is only flagged for deletion, and its space becomes available to be claimed by other records.

    When the deleted records on the page reach MERGE_THRESHOLD (the default is 50% of the page), InnoDB will start looking for the closest page (before or after) to see if the two pages can be merged to optimize space usage

  • Primary Key Design Principles

    • In the case of meeting business needs, try to reduce the length of the primary key
    • When inserting data, try to choose sequential insertion, choose to use AUTO_INCREMENT auto-increment primary key
    • Try not to use UUID as the primary key or other natural primary keys, such as ID number
    • During business operations, avoid modifying the primary key

order by optimization

  1. Create a suitable index based on the sorting field. When sorting by multiple fields, it also follows the leftmost prefix rule
  2. Try to use covering indexes
  3. Multi-field sorting, one ascending and one descending, at this time you need to pay attention to the rules of the joint index when it is created
  4. If filesort is unavoidable when sorting a large amount of data, you can appropriately increase the sort buffer size sort_buffer_size (default 256KB)
using index: data is returned directly through the index; high performance
using filesort: the returned results must be sorted in the sort buffer
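A sketch of both outcomes, assuming a joint index on (age, phone) of tb_user:

```sql
create index idx_user_age_phone on tb_user(age, phone);

-- Matches the index order: Extra shows "Using index"
explain select id, age, phone from tb_user order by age, phone;

-- Mixed directions defeat a plain ascending index: Extra shows "Using filesort"
explain select id, age, phone from tb_user order by age asc, phone desc;

-- MySQL 8.0+: a descending index column restores "Using index" for that ordering
create index idx_user_age_phone_ad on tb_user(age asc, phone desc);
```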

group by optimization

  1. In grouping operations, you can use indexes to improve efficiency
  2. When grouping operations, the use of indexes also satisfies the leftmost prefix rule
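With the joint index idx_user_pro_age_sta(profession, age, status) from the earlier case, the leftmost prefix rule shows up in explain's Extra column:

```sql
-- Groups by the leftmost column: "Using index"
explain select profession, count(*) from tb_user group by profession;

-- Skips profession: falls back to "Using temporary"
explain select age, count(*) from tb_user group by age;

-- Filtering on profession first still satisfies the leftmost prefix
explain select age, count(*) from tb_user where profession = 'IT' group by age;
```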

limit optimization

Covering index + subquery: for deep pagination (e.g. limit 2000000,10), MySQL has to sort and discard the first 2,000,000 rows, so page through a covering index on the primary key first and join back for the full rows
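A sketch on tb_user (assumed to be large): the subquery walks only the primary key index, and the join fetches just the 10 needed rows:

```sql
-- Slow: the server sorts and discards the first 2,000,000 rows
select * from tb_user order by id limit 2000000, 10;

-- Faster: the subquery is covered by the primary key index
select u.*
from tb_user u,
     (select id from tb_user order by id limit 2000000, 10) t
where u.id = t.id;
```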

count optimization

Performance: count(field) < count(primary key id) < count(1) ≈ count(*), so prefer count(*)

update optimization

Try to update data by the primary key or an indexed field; otherwise InnoDB cannot lock by index, the row lock escalates to a table lock, and concurrency drops

view

A view is a virtual table. The data in a view does not physically exist in the database; its rows and columns come from the tables used in the view-defining query and are generated dynamically when the view is used. In plain terms, a view stores only the SQL logic of the query, not the query result, so the main work in creating a view is writing that SQL query.

  • create
create [or replace] view view_name[(column_list)] as select_statement [with [cascaded | local] check option];
-- Example
create or replace view stu_1 as select id,name from emo where id < 10;
  • Query
-- View the view's definition: show create view view_name;
-- Query the view's data: select * from view_name ...;
  • Modify view
Method 1: create [or replace] view view_name[(column_list)] as select_statement [with [cascaded | local] check option];
Method 2: alter view view_name[(column_list)] as select_statement [with [cascaded | local] check option];
  • Delete view
drop view view_name;
  • View's inspection options

    When a view is created with the with check option clause, MySQL checks every row being changed through the view (insert, update, delete) to ensure it conforms to the view's definition. MySQL allows a view to be created on top of another view, and it also checks the rules of dependent views for consistency. To determine the scope of the check, MySQL provides two options, CASCADED and LOCAL; the default is CASCADED

  • View update and function

    For a view to be updatable, there must be a one-to-one relationship between the view's rows and the rows in the underlying table. A view is not updatable if it contains any of the following:

    • aggregate functions or window functions (sum(), min(), max(), count(), etc.)
    • DISTINCT
    • GROUP BY
    • HAVING
    • UNION or UNION ALL
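A sketch of the cascaded check, assuming a student table: with cascaded, rows changed through v2 must also satisfy the underlying v1's condition, even though v1 declares no check of its own.

```sql
create view v1 as select id, name from student where id <= 20;
create view v2 as select id, name from v1 where id >= 10 with cascaded check option;

insert into v2 values (15, 'Tom');  -- accepted: 10 <= 15 <= 20
insert into v2 values (25, 'Tom');  -- rejected: cascaded also enforces v1's id <= 20
insert into v2 values (5, 'Tom');   -- rejected: violates v2's own id >= 10
```

With local instead of cascaded, only views that themselves declare a check option are enforced, so the second insert would succeed.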

The role of the view

  1. Simple

    Views not only simplify users' understanding of data, but also their operations. Queries that are frequently used can be defined as views, so that users do not have to specify all the conditions for subsequent operations every time

  2. Safety

    The database can grant privileges, but not down to specific rows and columns of a table. Through views, users can only query and modify the data they are allowed to see

  3. data independent

    Views can help users shield the impact of real table structure changes.

View case

-- 1. To keep table data secure, developers operating on tb_user should only see basic user fields; hide the phone and email columns
create view tb_user_view as select id,name,profession,age,gender,status,createtime from tb_user;
select * from tb_user_view;

-- 2. Query the courses each student takes (a three-table join); this is used by many business functions, so define a view to simplify the operation
select s.name,s.no,c.name from student s, student_course sc, course c where s.id = sc.studentid and sc.courseid = c.id;

create view tb_stu_course_view as select s.name student_name, s.no student_no, c.name course_name from student s, student_course sc, course c where s.id = sc.studentid and sc.courseid = c.id;

select * from tb_stu_course_view;

stored procedure

Introduction: A stored procedure is a collection of SQL statements that has been pre-compiled and stored in the database. Calling stored procedures can simplify much of the application developer's work, reduce data transfer between the database and the application server, and improve data-processing efficiency.

The idea behind stored procedures is simple: encapsulate and reuse code at the database's SQL level.

  • features

    • Encapsulation and reuse
    • Can accept parameters and return data
    • Reduce network interaction and improve efficiency
  • create

create procedure procedure_name([parameter_list])
begin
	-- SQL statements
end;
  • Call
call procedure_name([parameters]);
  • Query
select * from information_schema.routines where routine_schema = 'xxx'; -- query the stored procedures and their status in a given database
show create procedure procedure_name; -- view the definition of a stored procedure
  • delete
drop procedure procedure_name;
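A sketch of a procedure with IN and OUT parameters (the name and grading rules are made up). In the mysql client, wrap the definition in a delimiter change so the semicolons inside the body are not executed early:

```sql
delimiter $$
create procedure p_grade(in score int, out result varchar(10))
begin
    if score >= 85 then
        set result := 'excellent';
    elseif score >= 60 then
        set result := 'pass';
    else
        set result := 'fail';
    end if;
end$$
delimiter ;

-- Receive the OUT parameter through a user variable
call p_grade(68, @result);
select @result;
```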

variable

  • System variable: provided by the MySQL server rather than defined by users; belongs to the server level. Divided into global variables (global) and session variables (session).
-- View system variables
show session variables;
-- Find variables by like pattern matching
show session variables like 'auto%';
-- Set system variables
set [session|global] variable_name = value;
set @@[session|global].variable_name = value;
  • user-defined variable

A variable defined by the user as needed. User variables need no prior declaration; just reference them as '@variable_name'. Their scope is the current connection

-- Assignment
set @myname = 'itcast';
set @age := 10;
-- Use
select @myname;
  • local variable

A variable defined as needed that takes effect locally. It must be declared with a declare statement before being accessed, and can be used as a local variable or input parameter in a stored procedure. Its scope is the begin...end block in which it is declared

-- Declaration
declare variable_name variable_type [default default_value];
# The variable type is any database field type: int, char, varchar, etc.
-- Assignment
set variable_name = value;
set variable_name := value;
select field_name into variable_name from table_name ...;
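A sketch combining the three statements inside a stored procedure (tb_user is the table from the earlier index examples):

```sql
delimiter $$
create procedure p_count()
begin
    declare user_count int default 0;                -- local to this begin...end block
    select count(*) into user_count from tb_user;    -- assignment via select ... into
    select user_count;                               -- return the value as a result set
end$$
delimiter ;

call p_count();
```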

trigger

Introduction: A trigger is a database object related to a table, which triggers and executes a set of SQL statements defined in the trigger before or after insert/update/delete. This feature of the trigger can assist the application to ensure data integrity, log records, data verification and other operations on the database side

Inside a trigger, the aliases OLD and NEW refer to the row content changed by the triggering statement, similar to other databases. MySQL triggers currently support only row-level triggers, not statement-level triggers

  • insert trigger: NEW represents the row that will be or has been inserted
  • update trigger: OLD represents the row before modification; NEW represents the row that will be or has been modified
  • delete trigger: OLD represents the row that will be or has been deleted
 -- Create a trigger
 create trigger trigger_name
 before/after insert/update/delete
 on table_name for each row
 begin
 	trigger_stmt;
 end;
 
 -- View
 show triggers;
 -- Delete
 drop trigger [schema_name.]trigger_name;
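A sketch of an insert trigger that records every row added to tb_user; the user_logs table and its columns are made up for the demo:

```sql
create table user_logs (
    id int primary key auto_increment,
    operation varchar(20),
    operate_time datetime,
    operate_id int
);

delimiter $$
create trigger tb_user_insert_trigger
after insert on tb_user for each row
begin
    -- NEW holds the row that was just inserted
    insert into user_logs (operation, operate_time, operate_id)
    values ('insert', now(), new.id);
end$$
delimiter ;

show triggers;
```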

Lock

A lock is a mechanism by which a computer coordinates concurrent access to a resource by multiple processes or threads. In a database, besides contention for traditional computing resources (CPU, RAM, I/O), data itself is a resource shared by many users. Ensuring the consistency and validity of concurrent data access is a problem every database must solve, and lock conflicts are an important factor affecting concurrent database performance. From this perspective, locks are especially important, and especially complex, for databases.

  • Classification
    • Global lock: locks all tables in the database
    • Table-level lock: each operation locks the entire table
    • Row-level lock: Each operation locks the corresponding row data

global lock

The global lock locks the entire database instance. Once acquired, the whole instance is read-only: subsequent DML write statements, DDL statements, and commit statements of transactions that have performed updates are all blocked.

Its typical use case is a logical backup of the whole database: locking all tables yields a consistent view and guarantees data integrity

-- Lock
flush tables with read lock;
-- Perform the logical backup
mysqldump -uroot -p1234 itcast > itcast.sql  # run from the OS command line
-- Unlock
unlock tables;
  • features

    Adding a global lock to the database is a relatively heavy operation, and there are the following problems

    1. If it is backed up on the main library, updates cannot be performed during the backup period, and the business basically has to be shut down
    2. If it is backed up on the slave library, the slave library cannot execute the binary log synchronized from the master library during the backup period, which will cause master-slave delay

table lock

Table-level locks lock the entire table on each operation. The locking granularity is large, the probability of lock conflicts is the highest, and concurrency is the lowest. They are used in storage engines such as MyISAM and InnoDB

Table-level locks are mainly divided into the following three categories:

  1. table lock
  2. metadata lock
  3. intent lock
  • table lock

    For table locks, there are two categories:

    • Table shared read lock (read lock)
    • Table exclusive write lock (write lock)
-- 1. Lock
lock tables table_name ... read/write;
-- 2. Release the lock
unlock tables;  -- or disconnect the client
  • metadata lock

    The MDL locking process is automatically controlled by the system and does not need to be used explicitly. It will be added automatically when accessing a table. The main function of the MDL lock is to maintain the data consistency of the table metadata. When there are active transactions on the table, the metadata cannot be written. In order to avoid conflicts between DML and DDL, ensure the correctness of reading and writing.

  • intent lock

In order to avoid conflicts between row locks and table locks added during DML execution, intent locks are introduced in InnoDB, so that table locks do not need to check whether each row of data is locked, and use intent locks to reduce table lock checks.

  1. Intent shared lock (IS): Compatible with table lock shared lock (read), mutually exclusive with table lock exclusive lock (write).
  2. Intent exclusive lock (IX): mutually exclusive with both the table shared (read) lock and the table exclusive (write) lock. Intent locks are compatible with one another

row level lock

Row-level lock, each operation locks the corresponding row data. The locking granularity is the smallest, the probability of lock conflicts is the lowest, and the concurrency is the highest.

  1. Shared lock (S): Allows a transaction to read a row, preventing other transactions from obtaining exclusive locks on the same data set.
  2. Exclusive lock (X): Allow transactions that acquire exclusive locks to update data, and prevent other transactions from obtaining shared and exclusive locks on the same data set.
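A sketch of taking row locks explicitly (tb_user assumed; run in two sessions to observe the blocking):

```sql
-- Session A: shared (S) lock on one row; other sessions can still read or S-lock it
begin;
select * from tb_user where id = 1 lock in share mode;  -- MySQL 8.0 also accepts FOR SHARE

-- Session B: an exclusive (X) lock on the same row blocks until A commits
begin;
select * from tb_user where id = 1 for update;

-- update/delete acquire X row locks automatically; commit to release them
update tb_user set name = 'Tom' where id = 1;
commit;
```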



Origin blog.csdn.net/m0_51353633/article/details/129759637