Locks in MySQL (table locks, row locks)

    A lock is a mechanism by which a computer coordinates concurrent access to a resource by multiple processes or threads. In a database, besides contention for traditional computing resources (CPU, RAM, I/O), data itself is a resource shared by many users. Ensuring the consistency and validity of concurrent data access is a problem every database must solve, and lock conflicts are an important factor affecting the performance of concurrent database access. From this perspective, locks are particularly important, and particularly complex, for databases.

 

Overview

    Compared with other databases, MySQL's lock mechanism is relatively simple, and its most notable feature is that different storage engines support different lock mechanisms.

MySQL locks can be roughly classified into the following three types:

  • Table-level locks: low overhead, fast locking; no deadlocks; large locking granularity, the highest probability of lock conflicts, and the lowest concurrency.
  • Row-level locks: high overhead, slow locking; deadlocks can occur; the smallest locking granularity, the lowest probability of lock conflicts, and the highest concurrency.
  • Page-level locks: overhead and locking time between table locks and row locks; deadlocks can occur; locking granularity between table locks and row locks, and moderate concurrency.

 

----------------------------------------------------------------------

 


 

The lock modes of MySQL table-level locks (MyISAM)

    MySQL table locks have two modes: table shared read lock (Table Read Lock) and table exclusive write lock (Table Write Lock). The compatibility of lock modes is as follows:

Table lock compatibility in MySQL

Requested lock \ Current lock |  None  |  Read lock  |  Write lock
------------------------------+--------+-------------+------------
Read lock                     |  Yes   |  Yes        |  No
Write lock                    |  Yes   |  No         |  No

    As the table shows, a read operation on a MyISAM table does not block other users' read requests to the same table, but does block write requests to it; a write operation on a MyISAM table blocks other users' read and write requests to the same table. In other words, reads and writes, and writes and writes, are serial! (Once a thread acquires the write lock on a table, only that thread can update the table; read and write operations from other threads wait until the lock is released.)

 

 

How to add table lock

    Before executing a query statement (SELECT), MyISAM automatically acquires read locks on all tables involved; before executing update operations (UPDATE, DELETE, INSERT, etc.), it automatically acquires write locks on the tables involved. This process requires no user intervention, so users generally do not need to lock MyISAM tables explicitly with the LOCK TABLES command. In the examples in this book, explicit locking is mostly for demonstration, not a requirement.

    Explicitly locking MyISAM tables is generally done to simulate transactional behavior to some extent, or to obtain a consistent read of multiple tables at a single point in time. For example, suppose there is an order table orders that records the total amount of each order, and an order detail table order_detail that records the amount subtotal of each product in an order. If we need to check that the totals in the two tables match, we may need to execute the following two SQL statements:

SELECT SUM(total) FROM orders;

SELECT SUM(subtotal) FROM order_detail;

At this point, if the two tables are not locked first, an incorrect result may be produced, because the order_detail table may change while the first statement is executing. So the correct way is:

LOCK TABLES orders READ LOCAL, order_detail READ LOCAL;

SELECT SUM(total) FROM orders;

SELECT SUM(subtotal) FROM order_detail;

UNLOCK TABLES;

In particular, the following two points should be noted.

  • The example above adds the LOCAL option to LOCK TABLES; its role is to allow other users to insert records at the end of the table while the lock is held, provided MyISAM's concurrent-insert conditions are met.
  • When explicitly locking tables with LOCK TABLES, all locks involving the tables used must be acquired at the same time, and MySQL does not support lock escalation. That is, after executing LOCK TABLES you can only access the tables that were explicitly locked, not any unlocked table; likewise, if you acquired a read lock you can only perform queries, not updates. Automatic locking behaves essentially the same way: MySQL acquires all the locks a SQL statement needs in one go. This is why MyISAM tables are deadlock free (Deadlock Free).

If a session uses LOCK TABLES to place a read lock on the table film_text, that session can query records in the locked table, but updating it or accessing other tables raises an error; meanwhile, another session can still query records in the table, but its updates will wait on the lock.
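A minimal sketch of this behavior (film_text is the example table from the text; the session markers and paraphrased error messages are illustrative):

```sql
-- Session 1: take an explicit read lock
LOCK TABLES film_text READ;
SELECT COUNT(*) FROM film_text;   -- OK
-- UPDATE film_text SET title = 'x' WHERE film_id = 1;
--   => error: table was locked with a READ lock and can't be updated
-- SELECT * FROM film;
--   => error: table 'film' was not locked with LOCK TABLES

-- Session 2 (concurrently):
SELECT COUNT(*) FROM film_text;   -- OK, read locks are compatible
-- UPDATE film_text SET title = 'y' WHERE film_id = 2;  -- blocks until:

-- Session 1:
UNLOCK TABLES;   -- lock released; session 2's update now proceeds
```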

When using LOCK TABLES, not only must all the tables used be locked in one statement; moreover, if the same table appears multiple times in a SQL statement under different aliases, it must be locked once per alias, using the same alias as in the SQL statement, otherwise an error will occur!

Concurrent inserts

    Under certain conditions, MyISAM also supports concurrent queries and inserts.

    The MyISAM storage engine has a system variable, concurrent_insert, specifically used to control its concurrent-insert behavior; its value can be 0, 1, or 2.

  • When concurrent_insert is set to 0, concurrent inserts are not allowed.
  • When concurrent_insert is set to 1, if there are no holes in the middle of a MyISAM table (i.e., no rows deleted from the middle), MyISAM allows one process to read the table while another process inserts records at the end of the table. This is MySQL's default setting.
  • When concurrent_insert is set to 2, records are allowed to be inserted concurrently at the end of the table regardless of whether the table has holes.

The concurrent-insert feature of the MyISAM storage engine can be used to resolve query/insert lock contention on the same table in an application. For example, setting the concurrent_insert system variable to 2 always allows concurrent inserts, while periodically executing the OPTIMIZE TABLE statement during idle periods defragments the table and reclaims the holes left by deleted records.
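A minimal sketch of this configuration (orders is the example table from earlier; the value and schedule are illustrative):

```sql
-- Always allow concurrent inserts at the end of the table,
-- even if the table contains holes left by DELETEs
SET GLOBAL concurrent_insert = 2;

-- During an idle period, reclaim the holes so the table
-- stays compact and reads stay fast
OPTIMIZE TABLE orders;
```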

 

MyISAM lock scheduling

As mentioned earlier, the read and write locks of the MyISAM storage engine are mutually exclusive, and read and write operations are serial. So if one process requests a read lock on a MyISAM table while another process requests a write lock on the same table, how does MySQL arbitrate? The answer: the writer acquires the lock first. Moreover, even if the read request reaches the lock wait queue first and the write request arrives later, the write lock is still inserted ahead of the read request! This is because MySQL considers write requests generally more important than read requests. It is also why MyISAM tables are unsuitable for applications with a heavy mix of update and query operations: a large number of updates can make it hard for queries to ever obtain a read lock, potentially blocking them forever. Fortunately, we can adjust MyISAM's scheduling behavior through some settings.

  • Specifying the startup option low-priority-updates makes the MyISAM engine give read requests priority by default.
  • Executing the command SET LOW_PRIORITY_UPDATES=1 lowers the priority of update requests issued by the current connection.
  • Specifying the LOW_PRIORITY attribute on an individual INSERT, UPDATE, or DELETE statement lowers that statement's priority.

Although the three methods above all choose either update-first or query-first scheduling wholesale, they can still be used to solve serious read-lock-wait problems in applications where queries are relatively important (such as a user login system).

In addition, MySQL provides a compromise for tuning read/write conflicts: setting the system parameter max_write_lock_count to an appropriate value. When the number of write locks granted on a table reaches this value, MySQL temporarily lowers the priority of pending write requests, giving the reading processes a chance to acquire the lock.
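The adjustments above can be sketched as follows (the table name t and the threshold value are illustrative):

```sql
-- Per-connection: lower the priority of this connection's updates
SET LOW_PRIORITY_UPDATES = 1;

-- Per-statement: this update yields to pending read requests
UPDATE LOW_PRIORITY t SET counter = counter + 1 WHERE id = 1;

-- Compromise: after 16 consecutive write locks on a table,
-- let waiting readers through
SET GLOBAL max_write_lock_count = 16;
```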

    The write-first scheduling mechanism and its workarounds have been discussed above. One more point deserves emphasis: long-running queries can also "starve" the writer process! Applications should therefore try to avoid long-running queries, and not always try to solve every problem with a single SELECT, because such seemingly ingenious SQL statements are often complicated and slow. Where possible, "decompose" the SQL using intermediate tables and similar measures, so that each step of the query completes in a shorter time and lock conflicts are reduced. If complex queries cannot be avoided, schedule them for the database's idle periods; periodic statistics jobs, for example, can run at night.

 

 

----------------------------------------------------------------------

 

InnoDB lock problem

    The two biggest differences between InnoDB and MyISAM are: first, InnoDB supports transactions (TRANSACTION); second, it uses row-level locks.

There are many differences between row-level locks and table-level locks. In addition, the introduction of transactions also brings some new problems.

 

1. Transaction and its ACID properties

    A transaction is a logical unit of processing consisting of a group of SQL statements, and it has four properties, commonly referred to as the transaction's ACID properties.

  • Atomicity: a transaction is an atomic unit of work; either all of its modifications to the data are performed, or none of them are.
  • Consistency: data must remain in a consistent state at both the start and the end of a transaction. This means all relevant data rules must be applied to the transaction's modifications to maintain integrity; at the end of the transaction, all internal data structures (such as B-tree indexes or doubly linked lists) must also be correct.
  • Isolation: the database system provides an isolation mechanism to ensure that transactions execute in an "independent" environment unaffected by external concurrent operations. This means the intermediate states of a transaction are invisible to the outside world, and vice versa.
  • Durability: after a transaction completes, its modifications to the data are permanent, even in the event of a system failure.

2. Problems caused by concurrent transactions

    Compared with serial processing, concurrent transaction processing can greatly increase the utilization of database resources and improve the transaction throughput of the database system, so that it can support more users. However, concurrent transaction processing will also bring some problems, mainly including the following situations.

  • Lost Update: when two or more transactions select the same row and then update it based on the originally selected value, a lost update occurs because each transaction is unaware of the others: the last update overwrites updates made by the other transactions. For example, two editors make electronic copies of the same document; each independently changes their copy and then saves it, overwriting the original. The editor who saves last overwrites the other editor's changes. This problem can be avoided if one editor cannot access the file until the other has finished and committed the transaction.
  • Dirty Read: a transaction is modifying a record, and before it commits, that record's data is in an inconsistent state; at this moment another transaction reads the same record. Without any control, the second transaction reads this "dirty" data and does further processing based on it, producing uncommitted data dependencies. This phenomenon is vividly called a "dirty read".
  • Non-Repeatable Read: a transaction re-reads data it has previously read, only to find that the data has been changed, or some records deleted, by another committed transaction! This phenomenon is called a "non-repeatable read".
  • Phantom Read: a transaction re-reads previously retrieved data using the same query conditions and finds that another transaction has inserted new rows satisfying its conditions. This phenomenon is called a "phantom read".

 

3. Transaction isolation level

Among the problems caused by concurrent transactions, "lost updates" should generally be avoided entirely. Preventing lost updates, however, cannot be solved by the database's transaction controller alone; the application must add the necessary locks to the data being updated. Therefore, preventing lost updates is the application's responsibility.

"Dirty read", "non-repeatable read" and "phantom read" are actually database read consistency issues, which must be resolved by the database providing a certain transaction isolation mechanism. There are basically two ways to implement transaction isolation in a database.

One is to lock the data before reading it, preventing other transactions from modifying the data.

The other is to generate a consistent data snapshot (Snapshot) at the point in time of the data request through a certain mechanism without adding any locks, and use this snapshot to provide consistent reading at a certain level (statement level or transaction level). From the user's point of view, it seems that the database can provide multiple versions of the same data. Therefore, this technology is called MultiVersion Concurrency Control (MVCC or MCC), also often referred to as a multiversion database.

    The stricter a database's transaction isolation, the smaller the concurrency side effects, but the higher the cost, because transaction isolation essentially "serializes" transactions to some degree, which obviously contradicts "concurrency". At the same time, different applications have different requirements for read consistency and transaction isolation; for example, many applications are not sensitive to "non-repeatable reads" and "phantom reads" and care more about the ability to access data concurrently.

    To resolve the contradiction between "isolation" and "concurrency", ISO/ANSI SQL92 defines four transaction isolation levels. Each level isolates to a different degree and allows different side effects, so applications can choose the isolation level that balances "isolation" against "concurrency" for their own business logic.

Comparison of four isolation levels for transactions

Isolation level   | Read data consistency                                        | Dirty read | Non-repeatable read | Phantom read
------------------+--------------------------------------------------------------+------------+---------------------+-------------
Read uncommitted  | lowest; only guarantees not reading physically corrupted data | Yes        | Yes                 | Yes
Read committed    | statement level                                              | No         | Yes                 | Yes
Repeatable read   | transaction level                                            | No         | No                  | Yes
Serializable      | highest; transaction level                                   | No         | No                  | No

    Finally, note that each specific database does not necessarily implement all four isolation levels. Oracle, for example, provides only the two standard levels Read committed and Serializable, plus its own Read only level. SQL Server supports the four levels defined by ISO/ANSI SQL92 and additionally an isolation level called "snapshot", which strictly speaking is a Serializable level implemented with MVCC. MySQL supports all four isolation levels, but with some peculiarities in the implementation: for example, MVCC consistent reads are used at some isolation levels but not in all cases.
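A minimal sketch of choosing one of the four levels per session in MySQL (the system variable is named transaction_isolation in recent versions and tx_isolation in older ones):

```sql
-- Inspect the current session's isolation level
SELECT @@SESSION.transaction_isolation;

-- Lower the level for this session only, e.g. to trade
-- repeatable reads for better concurrency
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
```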

 

 

Analyzing InnoDB row lock contention

Row lock contention on the system can be analyzed by examining the Innodb_row_lock status variables:

mysql> show status like 'innodb_row_lock%';
+-------------------------------+-------+
| Variable_name                 | Value |
+-------------------------------+-------+
| Innodb_row_lock_current_waits | 0     |
| Innodb_row_lock_time          | 0     |
| Innodb_row_lock_time_avg      | 0     |
| Innodb_row_lock_time_max      | 0     |
| Innodb_row_lock_waits         | 0     |
+-------------------------------+-------+
5 rows in set (0.00 sec)

    If contention is found to be serious, for example if the values of Innodb_row_lock_waits and Innodb_row_lock_time_avg are relatively high, you can also set up the InnoDB Monitors to further observe which tables, data rows, and so on are involved in the lock conflicts, and analyze the cause of the contention.
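A sketch of one way to look further (in older MySQL versions the monitors were enabled by creating specially named tables; the SHOW command below prints the monitor output, including recent lock waits and the latest detected deadlock):

```sql
SHOW ENGINE INNODB STATUS\G
```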

    

    

InnoDB row lock modes and locking methods

InnoDB implements the following two types of row locks.

  • Shared lock (S): allows the transaction holding it to read a row, and prevents other transactions from obtaining an exclusive lock on the same data set.
  • Exclusive lock (X): allows the transaction holding it to update the data, and prevents other transactions from obtaining shared read locks or exclusive write locks on the same data set.

In addition, to allow row locks and table locks to coexist and implement multi-granularity locking, InnoDB also has two kinds of internally used intention locks (Intention Locks), both of which are table-level locks.

Intention shared lock (IS): the transaction intends to place shared locks on data rows; before placing a shared lock on a row, it must first obtain an IS lock on the table.

Intention exclusive lock (IX): the transaction intends to place exclusive locks on data rows; before placing an exclusive lock on a row, it must first obtain an IX lock on the table.

InnoDB row lock mode compatibility list

Current lock \ Requested lock |  X         |  IX         |  S          |  IS
X                             |  conflict  |  conflict   |  conflict   |  conflict
IX                            |  conflict  |  compatible |  conflict   |  compatible
S                             |  conflict  |  conflict   |  compatible |  compatible
IS                            |  conflict  |  compatible |  compatible |  compatible

 

    If the lock mode requested by a transaction is compatible with the current lock, InnoDB grants the requested lock to that transaction; otherwise, if the two are incompatible, the transaction must wait for the lock to be released.

    Intention locks are added automatically by InnoDB without user intervention. For UPDATE, DELETE, and INSERT statements, InnoDB automatically places exclusive locks (X) on the data sets involved; for ordinary SELECT statements, InnoDB does not place any locks. A transaction can explicitly place shared or exclusive locks on a record set with the following statements.

Shared lock (S): SELECT * FROM table_name WHERE ... LOCK IN SHARE MODE

Exclusive lock (X): SELECT * FROM table_name WHERE ... FOR UPDATE

    Obtaining a shared lock with SELECT ... LOCK IN SHARE MODE is mainly used, when data dependencies are involved, to confirm that a row exists and to ensure that nobody performs an UPDATE or DELETE on it. However, if the current transaction also needs to update the record, a deadlock is quite likely; applications that need to update a row after locking it should acquire an exclusive lock with SELECT ... FOR UPDATE instead.
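A minimal sketch of the two explicit locking forms (accounts is an illustrative table):

```sql
-- Shared lock: verify the row exists and pin it against
-- UPDATE/DELETE by others; other transactions may still read-lock it
SELECT * FROM accounts WHERE id = 1 LOCK IN SHARE MODE;

-- Exclusive lock: lock the row with the intent to modify it,
-- avoiding the share-lock-then-upgrade deadlock pattern
BEGIN;
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
COMMIT;
```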

    

 

How InnoDB row locks are implemented

    InnoDB row locks are implemented by locking index entries; here MySQL differs from Oracle, which locks the corresponding data rows in the data itself. This implementation detail means that InnoDB uses row-level locks only when data is retrieved through an index condition; otherwise, InnoDB uses table locks!

    In practice, pay special attention to this characteristic of InnoDB row locks; otherwise, large numbers of lock conflicts may result, hurting concurrency.
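A minimal sketch of this pitfall (emp and its columns are illustrative; assume empid is indexed and name is not):

```sql
-- Retrieval through the indexed column: only the matching
-- row(s) are locked
SELECT * FROM emp WHERE empid = 5 FOR UPDATE;

-- Retrieval through an unindexed column: InnoDB cannot lock
-- by index entry, so the whole table is effectively locked
SELECT * FROM emp WHERE name = 'alice' FOR UPDATE;

-- Adding an index restores row-level locking for that predicate
CREATE INDEX idx_emp_name ON emp (name);
```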

    

 

Gap locks (Next-Key locks)

    When we retrieve data with a range condition rather than an equality condition and request shared or exclusive locks, InnoDB locks the index entries of the existing records that meet the condition; for key values that fall within the range but do not exist, called "gaps" (GAP), InnoDB locks the gap as well. This locking mechanism is what is known as a gap lock (Next-Key lock).

    For example, suppose the emp table contains only 101 records, with empid values 1, 2, ..., 100, 101. The following SQL:

SELECT * FROM emp WHERE empid > 100 FOR UPDATE;

    is a range retrieval. InnoDB locks not only the matching record with empid 101, but also the "gap" for empid values greater than 101 (records that do not exist).

    InnoDB uses gap locks for two reasons. One is to prevent phantom reads and thereby satisfy the requirements of the relevant isolation levels: in the example above, without the gap lock, another transaction could insert records with empid greater than 100, and if this transaction then re-executed the statement, a phantom read would occur. The other is to meet the needs of recovery and replication; InnoDB's use of gap locks varies with the isolation level accordingly.

    Clearly, when range conditions are used to retrieve and lock records, this locking mechanism blocks concurrent inserts of any key value within the range, which can cause severe lock waits. Therefore, in real development, and especially in applications with many concurrent inserts, optimize the business logic to access and update data with equality conditions wherever possible, avoiding range conditions.
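A sketch of the blocking effect, using the emp example above (the session markers are illustrative):

```sql
-- Session 1: range retrieval with an exclusive lock;
-- locks empid = 101 and the gap above it
BEGIN;
SELECT * FROM emp WHERE empid > 100 FOR UPDATE;

-- Session 2: this insert falls inside the locked gap,
-- so it blocks until session 1 commits or rolls back
INSERT INTO emp (empid) VALUES (102);

-- Session 1:
COMMIT;   -- session 2's insert now proceeds
```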

When to use table locks

    For InnoDB tables, row-level locks should be used in the vast majority of cases, since transactions and row locks are usually the very reasons we chose InnoDB in the first place. But for a few special transactions, table-level locks can also be considered.

  • First case: the transaction needs to update most or all of the data in a fairly large table. With the default row locks, the transaction not only executes slowly but may also cause long lock waits and lock conflicts for other transactions; here a table lock can be considered to speed the transaction up.
  • Second case: the transaction spans multiple tables and is complex enough that it is likely to cause deadlocks and roll back many transactions. Here, too, locking all the tables the transaction involves up front can avoid deadlocks and reduce the cost the database incurs from transaction rollbacks.

    Of course, such transactions should not be too numerous in an application; otherwise, consider using MyISAM tables instead.

    Under InnoDB, note the following two points about table locks.

    (1) Although LOCK TABLES can place table-level locks on InnoDB tables, the table locks are managed not by the InnoDB storage engine layer but by the MySQL Server layer above it. Only when autocommit=0 and innodb_table_locks=1 (the default) can the InnoDB layer know about table locks added by MySQL, and can MySQL Server sense the row locks added by InnoDB; only then can InnoDB automatically detect deadlocks involving table-level locks. Otherwise, InnoDB cannot automatically detect and handle such deadlocks.

    (2) When using LOCK TABLES on InnoDB tables, remember to set AUTOCOMMIT to 0, or MySQL will not lock the tables; and do not release the table locks with UNLOCK TABLES before the transaction ends, because UNLOCK TABLES implicitly commits the transaction. COMMIT or ROLLBACK does not release table-level locks acquired with LOCK TABLES; UNLOCK TABLES must be used. The correct pattern is shown below.

    For example, to write to table t1 and read from table t2, you can do the following:

SET AUTOCOMMIT=0;

LOCK TABLES t1 WRITE, t2 READ, ...;

[do something with tables t1 and t2 here];

COMMIT;

UNLOCK TABLES;

 

About deadlocks

    MyISAM table locks are deadlock free, because MyISAM always acquires all the locks it needs at once: either all are granted, or it waits. Deadlock therefore cannot occur. In InnoDB, however, apart from transactions consisting of a single SQL statement, locks are acquired gradually, which means deadlocks are possible.

    When a deadlock occurs, InnoDB can generally detect it automatically, making one transaction release its locks and roll back so that the other can acquire the locks and complete. But when external locks or table locks are involved, InnoDB cannot always detect the deadlock automatically; this must be handled by setting the lock-wait timeout parameter innodb_lock_wait_timeout. Note that this parameter is not only for resolving deadlocks: under high concurrency, if many transactions hang because they cannot immediately obtain the locks they need, they can consume large amounts of resources and cause severe performance problems, even dragging the database down. Setting an appropriate lock-wait timeout threshold avoids this.

    Generally speaking, deadlocks are an application design problem; most can be avoided by adjusting business flows, database object design, transaction sizes, and the SQL statements that access the database. The following examples introduce several common methods for avoiding deadlocks.

    (1) If different programs in an application access multiple tables concurrently, agree to access the tables in the same order wherever possible; this greatly reduces the chance of deadlocks. If two sessions access two tables in different orders, the chance of deadlock is very high; if they access them in the same order, deadlock can be avoided.

    (2) When a program processes data in batches, sorting the data beforehand so that every thread processes the records in a fixed order also greatly reduces the possibility of deadlock.

    (3) In a transaction, when updating a record, request a lock of sufficient strength directly, i.e., an exclusive lock, rather than requesting a shared lock first and upgrading to an exclusive lock at update time; by then other transactions may hold the same shared lock, causing lock waits or even deadlock.

    (4) Under the REPEATABLE-READ isolation level, if two threads simultaneously take exclusive locks on records matching the same condition with SELECT ... FOR UPDATE, and no matching record exists, both threads succeed in locking. Each program then finds the record does not yet exist and tries to insert a new one; if both threads do this, a deadlock occurs. In this situation, changing the isolation level to READ COMMITTED avoids the problem.

    (5) Under the READ COMMITTED isolation level, if both threads first execute SELECT ... FOR UPDATE to check whether a matching record exists and insert one if not, then only one thread can insert successfully while the other waits on a lock. When the first thread commits, the second thread fails with a duplicate-key error; but although it failed, it still obtains an exclusive lock! If a third thread then requests an exclusive lock, a deadlock occurs. In this situation, either perform the insert directly and catch the duplicate-key exception, or always execute ROLLBACK to release the acquired exclusive lock when a duplicate-key error occurs.
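A sketch of one MySQL-specific way to sidestep the check-then-insert race in case (5) entirely (counters is an illustrative table whose name column is the primary key):

```sql
-- Instead of SELECT ... FOR UPDATE followed by a conditional
-- INSERT, insert directly and let MySQL resolve the conflict
INSERT INTO counters (name, value)
VALUES ('page_hits', 1)
ON DUPLICATE KEY UPDATE value = value + 1;
```

Alternatively, keep the plain INSERT and catch the duplicate-key error in the application, rolling back to release the exclusive lock, as the text recommends.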

 

    Although the design and optimization measures above can greatly reduce deadlocks, deadlocks are hard to eliminate completely. Therefore, always catching and handling deadlock exceptions in program design is a good programming habit.

    If a deadlock occurs, the SHOW INNODB STATUS command (SHOW ENGINE INNODB STATUS in newer versions) can be used to determine the cause of the last deadlock and devise improvements.

 

 

--------------------------------------------------------------------------------

 

Summary

    For MyISAM table locks, the main points are:

    (1) Shared read locks (S) are compatible with each other, but shared read locks (S) and exclusive write locks (X), and exclusive write locks (X) with each other, are mutually exclusive; that is, reads and writes are serial.

    (2) Under certain conditions, MyISAM allows queries and inserts to execute concurrently, which can be used to solve lock contention between queries and inserts on the same table in an application.

    (3) MyISAM's default lock scheduling is write-first, which does not suit every application. Users can tune read/write lock contention by setting the LOW_PRIORITY_UPDATES parameter or by specifying the LOW_PRIORITY option on INSERT, UPDATE, and DELETE statements.

    (4) Because table locks have large granularity and reads and writes are serial, MyISAM tables can suffer severe lock waits when there are many update operations; consider switching to InnoDB tables to reduce lock conflicts.

 

    For InnoDB tables, the main points are:

    (1) InnoDB row locks are implemented on top of indexes; if data is not accessed through an index, InnoDB uses table locks.

    (2) InnoDB has a gap lock mechanism, used for the reasons discussed above.

    (3) InnoDB's locking mechanism and consistent-read strategy differ across isolation levels.

    (4) MySQL's recovery and replication also significantly influence InnoDB's locking mechanism and consistent-read strategy.

    (5) Lock conflicts and even deadlocks are hard to avoid entirely.

    Once InnoDB's locking characteristics are understood, users can reduce lock conflicts and deadlocks through design and SQL tuning, including:

  • Use the lowest isolation level that suffices.
  • Design indexes carefully and access data through indexes wherever possible, so that locking is more precise and the chance of lock conflicts is reduced.
  • Choose a reasonable transaction size; small transactions are less prone to lock conflicts.
  • When explicitly locking a record set, request a lock of sufficient strength in one step. For example, to modify data, request the exclusive lock directly rather than requesting a shared lock first and upgrading at modification time, which easily causes deadlocks.
  • When different programs access a group of tables, agree to access the tables in the same order wherever possible and, within a table, to access rows in a fixed order. This greatly reduces the chance of deadlocks.
  • Access data with equality conditions wherever possible, to avoid the effect of gap locks on concurrent inserts.
  • Do not request lock levels beyond what is actually needed; unless necessary, do not lock explicitly during queries.
  • For certain specific transactions, table locks can be used to speed up processing or reduce the possibility of deadlocks.
