MySQL Technical Insider Reading Notes (5): The MySQL Lock Mechanism



I. The MySQL lock mechanism

When developing multi-user, database-driven applications, one of the biggest difficulties is that, on the one hand, we want to maximize concurrent access to the database, while on the other hand we must ensure that every user can read and modify data in a consistent manner. This is why the locking mechanism exists, and it is also a key feature that distinguishes a database system from a file system.

1. What is a lock

Locking is a key feature that distinguishes the database system from the file system. The lock mechanism is used to manage concurrent access to shared resources.

The InnoDB storage engine locks table data at the row level, and it also uses locks in several other places inside the database to allow concurrent access to different resources. For example, operations on the LRU list in the buffer pool (deleting, adding, and moving elements) must involve a lock to guarantee consistency. The database system uses locks to support concurrent access to shared resources and to provide data integrity and consistency.

For the MyISAM engine, locking is a table-lock design. Reads pose no problem under concurrency, but performance during concurrent inserts is worse. If inserts go to the tail of the data file, the MyISAM storage engine can still support a degree of concurrent write operations.

The InnoDB storage engine's lock implementation provides consistent non-locking reads and row-level lock support. Its row-level locks carry no extra overhead, and can achieve concurrency and consistency at the same time.

2. Lock and latch

In a database, both lock and latch can be called "locks", but the two have completely different meanings. This chapter focuses mainly on lock.

A latch is generally called a lightweight lock, because the time it must be held is very short; if it were held for long, application performance would suffer badly. In the InnoDB storage engine, latches are divided into mutex (mutual exclusion) and rwlock (read-write lock). Their purpose is to ensure the correctness of concurrent threads operating on critical resources, and there is usually no deadlock detection mechanism for them.

The object of a lock is a transaction, and it is used to lock objects in the database, such as tables, pages, and rows. Lock objects are generally released only at transaction commit or rollback (the release time may differ across transaction isolation levels). In addition, lock, as in most databases, comes with a deadlock detection mechanism.


3. Locks in the InnoDB storage engine

3.1 Types of locks

The InnoDB storage engine implements the following two standard row-level locks:

  • Shared lock (S Lock), allows transactions to read a row of data.
  • Exclusive lock (X Lock), allows transactions to delete or update a row of data.


SELECT ... LOCK IN SHARE MODE;  -- shared lock (S Lock)
SELECT ... FOR UPDATE;          -- exclusive lock (X Lock)

If a transaction T1 has already obtained the shared lock on row r, then another transaction T2 can immediately obtain the shared lock on row r, because reading does not change the data of row r. This situation is called lock compatibility. But if another transaction T3 wants to obtain an exclusive lock on row r, it must wait for transactions T1 and T2 to release their shared locks on row r. This situation is called lock incompatibility.


X locks are not compatible with any other lock, while S locks are compatible only with S locks. Note that both S and X locks are row locks, and compatibility here refers to the compatibility of locks on the same record (row).

The InnoDB storage engine supports multiple-granularity locking, which allows row-level locks and table-level locks to coexist. To support locking operations at different granularities, InnoDB supports an additional locking method called the intention lock. Intention locks divide the locked object into multiple levels; an intention lock signals that the transaction wants to lock at a finer granularity further down.


If the locked objects are viewed as a tree, then locking the lowest-level object, that is, the finest-grained object, requires first locking the coarser-grained objects above it.

If you need to place an X lock on record r on a page, you first need to place IX locks on database A, the table, and the page respectively, and finally place the X lock on record r. If any of these steps causes a wait, the operation must wait for the coarser-grained lock to complete.

The InnoDB storage engine's intention-lock design is relatively simple: its intention locks are table-level locks. Their main purpose is to reveal the type of lock that will be requested next within a transaction. It supports two intention locks:

  • Intention shared lock (IS): the transaction intends to place shared locks on certain rows; a transaction must obtain the IS lock on a table before placing an S lock on any row of that table.
  • Intention exclusive lock (IX): similarly, the transaction intends to place exclusive locks on certain rows; a transaction must obtain the IX lock on a table before placing an X lock on any row of that table.

The intention lock is automatically added by InnoDB and does not require user intervention.
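The compatibility rules among S, X, IS, and IX locks described above can be summarized in a small table. Below is a minimal sketch in Python (illustrative only, not InnoDB's actual implementation; the function name is invented):

```python
# Table-level lock compatibility as described above:
# X is incompatible with everything; S is compatible with S and IS;
# intention locks (IS, IX) are compatible with each other.
COMPATIBLE = {
    ("IS", "IS"): True,  ("IS", "IX"): True,  ("IS", "S"): True,  ("IS", "X"): False,
    ("IX", "IS"): True,  ("IX", "IX"): True,  ("IX", "S"): False, ("IX", "X"): False,
    ("S",  "IS"): True,  ("S",  "IX"): False, ("S",  "S"): True,  ("S",  "X"): False,
    ("X",  "IS"): False, ("X",  "IX"): False, ("X",  "S"): False, ("X",  "X"): False,
}

def can_grant(requested, held):
    """A requested lock is granted only if it is compatible with every held lock."""
    return all(COMPATIBLE[(requested, h)] for h in held)
```

For example, `can_grant("IX", ["IS"])` is True (two transactions may intend to lock different rows of the same table), while `can_grant("S", ["IX"])` is False: a table-level S lock must wait for the transaction that intends to modify rows.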

For INSERT, UPDATE, and DELETE, InnoDB automatically places exclusive (X) locks on the data involved; for an ordinary SELECT statement, InnoDB places no lock at all, and a transaction can explicitly add shared or exclusive locks through the SQL statements shown above.

Since the InnoDB storage engine supports row-level locks, intent locks do not actually block any requests other than full table scans.


Starting from InnoDB1.0, tables INNODB_TRX, INNODB_LOCKS, INNODB_LOCK_WAITS have been added under the INFORMATION_SCHEMA schema. Through these three tables, users can more easily monitor current transactions and analyze possible lock issues.

mysql>SELECT * FROM information_schema.INNODB_TRX\G;
*************************** 1. row ***************************
trx_id:7311F4
trx_state:LOCK WAIT
trx_started:2010-01-04 10:49:33
trx_wait_started:2010-01-04 10:49:33
trx_query:select * from parent lock in share mode
*************************** 2. row ***************************
trx_id:730FEE
trx_state:RUNNING
trx_started:2010-01-04 10:18:37
...
2 rows in set(0.00 sec)

Through the column trx_state, it can be observed that the transaction with trx_id 730FEE is currently running, while the transaction with trx_id 7311F4 is in the LOCK WAIT state, and the SQL statement it is running is select * from parent lock in share mode. This table only shows the currently running InnoDB transactions and cannot directly determine the lock situation. To view the locks, you also need to access the table INNODB_LOCKS:

mysql>SELECT * FROM information_schema.INNODB_LOCKS\G;
2 rows in set(0.00 sec)

The transaction with trx_id 730FEE holds an X row lock on table parent, while the transaction with trx_id 7311F4 is requesting an S row lock on the same table. lock_data is 1 in both cases: they are applying for the same resource, hence the wait.

After checking each table's lock status through the table INNODB_LOCKS, the user can judge the waits that result. When transactions are few, users can make this judgment intuitively by hand. But when transaction volume is very large and locks and waits occur frequently, it is not so easy. The table INNODB_LOCK_WAITS, however, directly reflects the waits between current transactions. It consists of 4 fields:

mysql>SELECT * FROM information_schema.INNODB_LOCK_WAITS\G;
1 row in set(0.00 sec)

If necessary, users can join against the tables INNODB_TRX, INNODB_LOCKS, and INNODB_LOCK_WAITS to obtain more detailed information:

mysql>SELECT
    r.trx_id waiting_trx_id,
    r.trx_mysql_thread_id waiting_thread,
    r.trx_query waiting_query,
    b.trx_id blocking_trx_id,
    b.trx_mysql_thread_id blocking_thread,
    b.trx_query blocking_query
    FROM information_schema.innodb_lock_waits w
    INNER JOIN information_schema.innodb_trx b
    ON b.trx_id=w.blocking_trx_id
    INNER JOIN information_schema.innodb_trx r
    ON r.trx_id=w.requesting_trx_id\G;
1 row in set(0.00 sec)

Through the above SQL statement, users can clearly and intuitively see which transaction is blocking another transaction.

3.2 Consistent non-locking read (important)

Consistent non-locking read (consistent nonlocking read) means that the InnoDB storage engine uses multi-versioning (MVCC) to read the data of rows in the database as of the current point in time. If the row being read is undergoing a DELETE or UPDATE operation, the read does not wait for the row lock to be released; instead, the InnoDB storage engine reads a snapshot of the row.


It is called a non-locking read because there is no need to wait for the X lock on the accessed row to be released. The snapshot data is the data of a previous version of the row, implemented through the undo segment. Undo records are needed anyway to roll back data within a transaction, so the snapshot data itself carries no additional overhead. Moreover, reading snapshot data requires no locking, because no transaction ever needs to modify historical data.

It can be seen that the non-locking read mechanism greatly improves the concurrency of the database. Under the default settings of the InnoDB storage engine, this is the default read mode: reads neither occupy nor wait for locks on the table.

However, the way reads work differs across transaction isolation levels, and not every isolation level uses non-locking consistent reads. Moreover, even among isolation levels that do use them, the definition of the snapshot data differs. (This is precisely how the different isolation levels are implemented.)

Snapshot data is in fact the historical versions preceding the current row data, and each row record may have multiple versions, i.e. more than one snapshot. This technique is generally called row multi-versioning, and the concurrency control that results from it is called Multi-Version Concurrency Control (MVCC).

Under the transaction isolation levels READ COMMITTED and REPEATABLE READ (the InnoDB storage engine's default), InnoDB uses non-locking consistent reads, but the definition of the snapshot data differs. Under READ COMMITTED, the non-locking consistent read always reads the latest committed snapshot of the locked row. Under REPEATABLE READ, it always reads the row version as of the start of the transaction.
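This difference can be sketched as a toy version-chain model (a simplification of InnoDB's undo-based MVCC; the `Version` structure and `read` function are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Version:
    value: object
    commit_ts: int  # timestamp at which this version was committed

def read(versions, isolation, txn_start_ts, now_ts):
    """Return the visible value of one row under RC or RR.
    versions: oldest-to-newest committed version chain of the row."""
    # READ COMMITTED sees the newest version committed as of the read;
    # REPEATABLE READ sees the newest version committed before the txn began.
    cutoff = now_ts if isolation == "READ COMMITTED" else txn_start_ts
    for v in reversed(versions):
        if v.commit_ts <= cutoff:
            return v.value
    return None

# Row created as id=1 at ts=5, then updated to id=3 and committed at ts=20.
chain = [Version("id=1", 5), Version("id=3", 20)]
# A transaction that started at ts=10 reads the row again at ts=30:
rc = read(chain, "READ COMMITTED", txn_start_ts=10, now_ts=30)   # sees "id=3"
rr = read(chain, "REPEATABLE READ", txn_start_ts=10, now_ts=30)  # sees "id=1"
```

The RC reader picks up the later committed version; the RR reader keeps seeing the version that existed when its transaction began, which is exactly the behavior of the parent-table example below.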

For example:

#Session A
mysql>BEGIN;
Query OK,0 rows affected(0.00 sec)
mysql>SELECT * FROM parent WHERE id=1;
1 row in set(0.00 sec)

In session A, a transaction is opened by explicitly executing the command BEGIN, and the row with id=1 in table parent is read (without a shared lock), but the transaction is not yet ended. Meanwhile, the user opens another session B, which simulates concurrency, and performs the following operations in session B:

#Session B
mysql>BEGIN;
Query OK,0 rows affected(0.00 sec)
mysql>UPDATE parent SET id=3 WHERE id=1;
Query OK,1 row affected(0.00 sec)
Rows matched:1 Changed:1 Warnings:0

In session B, the record with id=1 in table parent is changed to id=3, but this transaction is also not committed, so the row with id=1 now effectively holds an X lock. If the record with id=1 is read again in session A at this point, then according to the behavior of the InnoDB storage engine, a non-locking consistent read is used under both the READ COMMITTED and REPEATABLE READ isolation levels.

Return to session A and, in the still-uncommitted transaction, execute the SQL statement SELECT * FROM parent WHERE id=1 again. At this point, regardless of whether the isolation level is READ COMMITTED or REPEATABLE READ, the displayed data is:

mysql>SELECT * FROM parent WHERE id=1;
1 row in set(0.00 sec)

Since the row with id=1 has been modified only once, there is only one snapshot version of the record. Next, commit the transaction in session B:

#Session B
mysql>COMMIT;
Query OK,0 rows affected(0.01 sec)

After session B commits its transaction, running SELECT * FROM parent WHERE id=1 in session A yields different results under the READ COMMITTED and REPEATABLE READ isolation levels. Under READ COMMITTED, the read always returns the latest version of the row; if the row is locked, it reads the latest committed snapshot of the row (a fresh snapshot). In this example, because session B has already committed, the following result is obtained under READ COMMITTED:

mysql>SELECT @@tx_isolation\G;
*************************** 1. row ***************************
@@tx_isolation: READ-COMMITTED
1 row in set(0.00 sec)
mysql>SELECT * FROM parent WHERE id=1;
Empty set(0.00 sec)

Under the REPEATABLE READ isolation level, by contrast, what is read is always the row data as of the start of the transaction. Therefore, the result is:

mysql>SELECT @@tx_isolation\G;
*************************** 1. row ***************************
@@tx_isolation: REPEATABLE-READ
1 row in set(0.00 sec)
mysql>SELECT * FROM parent WHERE id=1;
1 row in set(0.00 sec)

It is important to note that, from the standpoint of database theory, the READ COMMITTED isolation level violates the I in ACID, namely isolation.


3.3 Consistency lock read

Under the default configuration, i.e. the REPEATABLE READ isolation level, the InnoDB storage engine's SELECT uses consistent non-locking reads. In some cases, however, users need to explicitly lock read operations to guarantee the consistency of data logic, which requires the database to support locking statements even for read-only SELECT operations. The InnoDB storage engine supports two consistent locking reads for SELECT statements:


  • SELECT ... FOR UPDATE places an X lock on the rows read; no other transaction can place any lock on them.
  • SELECT ... LOCK IN SHARE MODE places an S lock on the rows read; other transactions may still add S locks, but any request for an X lock will block.

For consistent non-locking reads, a row can still be read even if SELECT ... FOR UPDATE has been executed against it.

In addition, SELECT ... FOR UPDATE and SELECT ... LOCK IN SHARE MODE must be used inside a transaction; the locks are released when the transaction commits. Therefore, when using these two locking SELECT statements, be sure to begin with BEGIN, START TRANSACTION, or SET AUTOCOMMIT=0.

3.4 Auto-increment and locks

Auto-increment is a very common attribute in databases, and it is also the primary-key method preferred by many DBAs and developers. In the in-memory structures of the InnoDB storage engine, every table containing an auto-increment value has an auto-increment counter. When inserting into such a table, the counter is initialized by executing the following statement to obtain the counter value:

SELECT MAX(auto_inc_col) FROM t FOR UPDATE;
The insert operation assigns the auto-increment column the value of this counter plus 1. This implementation is called AUTO-INC Locking. It is in fact a special table-lock mechanism: to improve insert performance, the lock is released not after the transaction completes, but immediately after the SQL statement performing the auto-increment insert finishes.
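The statement-scoped nature of AUTO-INC locking, held only for the duration of the inserting statement rather than the whole transaction, can be illustrated with a toy counter (the class and names are invented for illustration; this is not InnoDB code):

```python
import threading

class AutoIncCounter:
    """Toy model of AUTO-INC locking: the mutex protects only the
    allocation of the next value and is released as soon as the
    'statement' finishes -- not held until transaction commit."""
    def __init__(self, start=0):
        self._value = start
        self._lock = threading.Lock()  # stands in for the AUTO-INC lock

    def next_id(self):
        with self._lock:           # lock held only for this statement
            self._value += 1
            return self._value     # lock released here, before any commit

counter = AutoIncCounter()
ids = [counter.next_id() for _ in range(3)]  # [1, 2, 3]
```

Because the lock is released per statement, other transactions' inserts into the same table do not have to wait for a long-running transaction to commit, at the cost of possible gaps in the sequence if a transaction later rolls back.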

3.5 Foreign keys and locks

Foreign keys are mainly used to check referential integrity constraints. In the InnoDB storage engine, if a foreign key column has no explicitly created index, InnoDB automatically adds one to it, because this avoids table locks.

For an insert or update of a foreign key value, the corresponding record in the parent table must first be queried, i.e. a SELECT on the parent table. But this SELECT does not use a consistent non-locking read, because that could lead to data inconsistency; instead, SELECT ... LOCK IN SHARE MODE is used, actively placing an S lock on the parent-table row. If an X lock is already held on that row, the operation on the child table will be blocked.


Neither session's transaction has executed COMMIT or ROLLBACK, and session B's operation blocks. This is because session A has already placed an X lock on the parent-table row with id=3, while session B's insert into the child table needs an S lock on that same row, so the INSERT is blocked. Now imagine that a consistent non-locking read were used when accessing the parent table: session B would see that the parent table still contains a record with id=3, and the insert would proceed. But if session A then committed its transaction, the parent table would no longer contain a record with id=3, and the data in the parent and child tables would be inconsistent.

4. Locking algorithm

4.1 Three Algorithms of Row Lock

The InnoDB storage engine has three row lock algorithms, which are:

  • Record Lock: a lock on a single row record (solves non-repeatable read)
  • Gap Lock: a lock on a range, not including the record itself (solves phantom read)
  • Next-Key Lock: Gap Lock + Record Lock, locking a range and the record itself

Record Lock always locks index records. If the table was created without any index, the InnoDB storage engine uses the implicit primary key to lock.

Next-Key Lock is a locking algorithm that combines Gap Lock and Record Lock; InnoDB uses it for row queries.

For example, if an index holds the four values 10, 11, 13, and 20, the intervals that Next-Key Locking may lock are:

(-∞, 10], (10, 11], (11, 13], (13, 20], (20, +∞)
If transaction T1 has locked the following range through next-key locking:

(10, 11], (11, 13]

When a new record 12 is inserted, the locked range becomes:

(10, 11], (11, 12], (12, 13]
However, when the queried index has a unique attribute, the InnoDB storage engine optimizes the Next-Key Lock by downgrading it to a Record Lock: only the index record itself is locked, not a range.


In this example, session A executes:

SELECT * FROM t WHERE a=5 FOR UPDATE;

Table t holds the three values 1, 2, and 5, and session A places an X lock on a=5. Because a is the primary key and unique, only the single value 5 is locked rather than the range (2, 5], so inserting the value 4 in session B is not blocked and returns immediately. That is, the Next-Key Lock is downgraded to a Record Lock, improving application concurrency.

Next-Key Lock is downgraded to Record Lock only when the queried column is a unique index. If it is a secondary index, the situation is completely different.


Column b of table z is a secondary index; suppose (as in the book's example) the table holds the rows (a, b) = (1, 1), (3, 1), (5, 3), (7, 6), (10, 8). The following SQL statement is executed in session A:

SELECT * FROM z WHERE b=3 FOR UPDATE;
Obviously, the SQL statement queries through secondary index column b, so it locks using the traditional Next-Key Locking technique, and because two indexes are involved, each needs to be locked separately. On the clustered index, only a Record Lock is added on the record with a=5 (the secondary index entry points to the primary key). On the secondary index, a Next-Key Lock is added, locking the range (1, 3].

In particular, note that the InnoDB storage engine also places a gap lock on the next key value in the secondary index, i.e. the range (3, 6) is locked as well. Therefore, running any of the following SQL statements in a new session B will block:

SELECT * FROM z WHERE a=5 LOCK IN SHARE MODE;
INSERT INTO z SELECT 4,2;
INSERT INTO z SELECT 6,5;
  • The first SQL statement cannot proceed because the statement executed in session A has placed an X lock on the clustered-index record with a=5.
  • The second SQL statement's primary key 4 poses no problem, but its secondary-index value 2 falls in the locked range (1, 3], so it also blocks.
  • The third SQL statement's primary key 6 is not locked, but its secondary-index value 5 falls in the other locked range (3, 6), so it must also wait.

The following SQL statement will not be blocked and can be executed immediately:


As can be seen from the example, the role of Gap Lock is to prevent multiple transactions from inserting records into the same range, which would lead to the Phantom Problem (phantom read). In the example above, session A has locked the record with b=3; if the range (1, 3] were not protected by a Gap Lock, another transaction could insert a record with b=3 in the index, and session A executing the same query again would return different records, producing the Phantom Problem.
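The blocking behavior in this example can be sketched with a toy model (the interval representation and function name are invented for illustration; real InnoDB locks live inside index structures):

```python
def blocked_by_next_key(insert_value, locked_ranges):
    """Decide whether inserting a secondary-index value would block.
    locked_ranges: list of (low, high, closed_high) tuples, e.g.
    (1, 3, True) for the next-key range (1,3] and (3, 6, False)
    for the pure gap (3,6)."""
    for low, high, closed_high in locked_ranges:
        if low < insert_value < high:
            return True                      # falls inside a locked gap
        if closed_high and insert_value == high:
            return True                      # hits the locked record itself
    return False

# Locks taken by SELECT * FROM z WHERE b=3 FOR UPDATE on secondary index b:
locks = [(1, 3, True), (3, 6, False)]
assert blocked_by_next_key(2, locks)         # b=2 falls in (1,3]
assert blocked_by_next_key(5, locks)         # b=5 falls in (3,6)
assert not blocked_by_next_key(8, locks)     # b=8 is outside every locked range
```

This mirrors the three statements above: inserting b=2 or b=5 waits, while values beyond the gap following the next key (such as b=8) proceed immediately.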

Finally, a reminder: the downgrade of Next-Key Lock to Record Lock for unique key values happens only when the query covers all columns of the unique index. If the unique index consists of multiple columns and the query looks up only some of them, the query is in fact a range query rather than a point query, and the InnoDB storage engine still uses Next-Key Locking.

4.2 Solve Phantom Problem

Under the default transaction isolation level, that is, REPEATABLE READ, the InnoDB storage engine uses the Next-Key Locking mechanism to avoid the Phantom Problem.

The Phantom Problem means that within the same transaction, executing the same SQL statement twice in a row may yield different results: the second execution may return rows that did not exist before.

For example, suppose table t consists of the three values 1, 2, and 5, and transaction T1 executes the following SQL statement:

SELECT * FROM t WHERE a>2 FOR UPDATE;
Note that transaction T1 has not yet committed; the statement above returns the single value 5. If, at the same time, another transaction T2 inserts the value 4 and the database permits the operation, then T1 re-executing the same SQL statement would return 4 and 5. This differs from the first result and violates transaction isolation: the current transaction can see the results of other transactions.


The InnoDB storage engine uses the Next-Key Locking algorithm to avoid the Phantom Problem. For the SQL statement SELECT * FROM t WHERE a > 2 FOR UPDATE above, it is not the single value 5 that is locked; rather, an X lock is placed on the range (2, +∞). Therefore, any insert into this range is disallowed, avoiding the Phantom Problem.

The default transaction isolation level of the InnoDB storage engine is REPEATABLE READ. Under this isolation level, it uses Next-Key Locking to lock.

If a user queries a value through the index and places an S lock on that row, then even if the queried value does not exist, what is locked is still a range. Therefore, if no rows are returned, any newly inserted matching value is guaranteed to be unique.

5. The lock problem

The isolation requirement of transactions can be achieved through the locking mechanism, so that transactions can work concurrently. Locking improves concurrency, but it brings potential problems. Fortunately, because of the requirements of transaction isolation, locks cause only three kinds of problems; if these three situations are prevented, there will be no concurrency anomalies.

5.1 Dirty Read

Dirty pages are pages that have been modified in the buffer pool but not yet flushed to disk; that is, the page data in the database instance's memory and on disk are inconsistent. Of course, before a dirty page is flushed to disk, its log has already been written to the redo log file.

So-called dirty data refers to row records in the buffer pool that a transaction has modified but not yet committed; dirty data is uncommitted data. If dirty data is read, i.e. one transaction can read the uncommitted data of another transaction, this obviously violates database isolation.


Dirty reads rarely occur in production environments. As the example above shows, a dirty read requires the transaction isolation level to be READ UNCOMMITTED, and most databases today are set to at least READ COMMITTED. The default transaction isolation level of the InnoDB storage engine is REPEATABLE READ.
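The mechanics of a dirty read can be sketched with a toy row model (the `Row` class is invented for illustration; it is a drastic simplification of the buffer pool):

```python
class Row:
    """Toy model of one row: a committed value plus at most one
    uncommitted modification sitting in the buffer pool."""
    def __init__(self, value):
        self.committed = value
        self.uncommitted = None  # set while another txn has modified but not committed

    def read(self, isolation):
        if isolation == "READ UNCOMMITTED" and self.uncommitted is not None:
            return self.uncommitted   # dirty read: sees the other txn's change
        return self.committed         # READ COMMITTED sees only committed data

row = Row(1)
row.uncommitted = 2                   # another transaction updated but did not commit
dirty = row.read("READ UNCOMMITTED")  # 2: dirty read
clean = row.read("READ COMMITTED")    # 1: isolation preserved
```

If the other transaction rolls back, the READ UNCOMMITTED reader has acted on a value (2) that never existed, which is exactly why at least READ COMMITTED is the usual minimum.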

5.2 Non-repeatable read (and phantom read)

Non-repeatable read (modifications or deletes): transaction A queries under some condition; meanwhile transaction B deletes a row matching that condition, so transaction A reads one row fewer on its next query.

Phantom read (inserts): transaction A reads data under some condition; meanwhile transaction B inserts new rows matching the same condition. When transaction A reads again under the original condition, it finds transaction B's newly inserted rows.

Non-repeatable read refers to reading the same data set multiple times within one transaction. Before that transaction ends, another transaction accesses the same data set and performs some DML operations. As a result, between the first transaction's two reads, the second transaction's modifications may cause the data read each time to differ. This situation is called a non-repeatable read (it arises from consistent non-locking reads).

The difference between non-repeatable read and dirty read is: dirty read reads uncommitted data, while non-repeatable read reads data that has already been committed, but it violates the requirements of database transaction consistency.


Generally speaking, the problem of non-repeatable reads is acceptable, because what is read is data that has already been committed, which in itself does not cause big problems. Therefore, many database vendors (such as Oracle and Microsoft SQL Server) set their default transaction isolation level to READ COMMITTED, which allows non-repeatable reads.

In the InnoDB storage engine, the problem of non-repeatable reads is avoided by using the Next-Key Lock algorithm . Under the Next-Key Lock algorithm, for index scanning, not only the scanned index is locked, but also the range (gap) covered by these indexes. Therefore, insertions within this range are not allowed. This avoids the problem of non-repeatable read caused by another transaction inserting data in this range.

5.3 Lost update

Lost update is another problem related to locking. Simply put, one transaction's update operation is overwritten by another transaction's update operation, causing data inconsistency, for example:

  • 1) Transaction T1 queries a row of data, puts it into local memory, and displays it to a terminal user User1.
  • 2) Transaction T2 also queries the row of data and displays the obtained data to the terminal user User2.
  • 3) User1 modifies this row of records, updates the database and submits it.
  • 4) User2 modifies this row of records, updates the database and submits it.

Imagine a lost update at a bank. A user's account holds 10,000 yuan, and he uses two online-banking clients to make transfers at the same time. The first transfers 9,000 yuan but, because of network delay, has to wait. Meanwhile the user transfers 1 yuan from the other client. If both operations eventually "succeed", the account balance shows 9,999 yuan: the first 9,000-yuan debit was overwritten, yet the receiving account still gets the 9,000 yuan, so money appears out of nowhere and the books no longer balance.

To avoid lost updates, the transactions' operations in such cases must be serialized rather than run in parallel: place an exclusive X lock on the record at the moment the user reads it, so that the second transaction's read-modify-write must wait for the first to finish.
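The four steps above can be replayed in code. The following is a toy sequential simulation (functions and numbers invented for illustration; the serialized version models the effect of SELECT ... FOR UPDATE, not actual MySQL behavior):

```python
def lost_update_demo():
    balance = 10000
    # Steps 1-2: both users read the same starting balance.
    seen_by_user1 = balance
    seen_by_user2 = balance
    # Steps 3-4: each writes back a value computed from its stale copy.
    balance = seen_by_user1 - 9000   # user1 transfers 9000
    balance = seen_by_user2 - 1      # user2's write overwrites user1's update!
    return balance                   # 9999: the 9000-yuan debit is lost

def serialized_demo():
    balance = 10000
    # With SELECT ... FOR UPDATE, each read-modify-write runs alone:
    balance = balance - 9000         # user1's whole operation completes first
    balance = balance - 1            # user2 then reads the *updated* balance
    return balance                   # 999: both debits are applied

assert lost_update_demo() == 9999
assert serialized_demo() == 999
```

The bug is not in either write taken alone but in the interleaving: the second writer's input is stale. Serializing the read and write under one exclusive lock removes the stale read.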


6. Blocking

Because of the compatibility relationships between different locks, at some moments a lock request in one transaction must wait for a lock in another transaction to release the resource it occupies; this is blocking. Blocking is not a bad thing: it is what ensures that transactions can run concurrently and correctly.

In the InnoDB storage engine, the parameter innodb_lock_wait_timeout controls the waiting time (default 50 seconds), and innodb_rollback_on_timeout sets whether to roll back the transaction in progress when the wait times out (default OFF, meaning no rollback). The parameter innodb_lock_wait_timeout is dynamic and can be adjusted while the MySQL database is running:

mysql>SET @@innodb_lock_wait_timeout=60;
Query OK,0 rows affected(0.00 sec)

Keep in mind that by default the InnoDB storage engine does not roll back the transaction on a lock-wait timeout error. In fact, InnoDB does not roll back on errors in most cases.

7. Deadlock

7.1 The concept of deadlock

Deadlock is a phenomenon in which two or more transactions, while executing, wait on each other because they are contending for lock resources. Without outside intervention, the transactions cannot proceed. The simplest way to avoid deadlock is not to wait at all: turn any wait into a rollback and restart the transaction. This does indeed avoid deadlocks, but in a production environment it may reduce concurrency severely, to the point where no transaction makes progress. The problems this brings are far worse than deadlock itself, because they are hard to detect and waste resources.

The simplest practical solution is a timeout: when two transactions wait on each other and one wait exceeds a set threshold, one of the transactions rolls back, and the other waiting transaction can continue. In the InnoDB storage engine, the parameter innodb_lock_wait_timeout sets this timeout period.

Although the timeout mechanism is simple, it handles deadlock only by rolling back the transaction that times out; in other words, it selects the rollback victim in FIFO order. But if the timed-out transaction carries a lot of weight, for example a transaction that updated many rows and occupies a large amount of undo log, the FIFO approach is unsuitable, because rolling back that transaction may take far longer than rolling back the other one.

Therefore, in addition to the timeout mechanism, databases today generally use a wait-for graph for deadlock detection. Compared with the timeout solution, this is a more proactive approach, and it is the one the InnoDB storage engine adopts.
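Wait-for graph detection reduces to finding a cycle in a directed graph whose edges mean "transaction X waits for a lock held by transaction Y". A minimal depth-first-search sketch (data structures invented for illustration; InnoDB's internal representation differs):

```python
def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph.
    wait_for: dict mapping each transaction to the list of transactions
    whose locks it is waiting on."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on current path / done
    color = {t: WHITE for t in wait_for}

    def visit(t):
        color[t] = GRAY
        for u in wait_for.get(t, ()):
            if color.get(u, WHITE) == GRAY:   # back edge: a cycle, i.e. deadlock
                return True
            if color.get(u, WHITE) == WHITE and u in wait_for and visit(u):
                return True
        color[t] = BLACK
        return False

    return any(color[t] == WHITE and visit(t) for t in wait_for)

# T1 waits on T2 and T2 waits on T1: the classic deadlock.
assert has_deadlock({"T1": ["T2"], "T2": ["T1"]})
# T1 waits on T2, T2 waits on nothing: plain blocking, no deadlock.
assert not has_deadlock({"T1": ["T2"], "T2": []})
```

Once a cycle is found, the detector picks a victim in the cycle to roll back, which is why a deadlocked statement fails immediately instead of waiting out innodb_lock_wait_timeout.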

7.2 Examples of deadlocks

If the program is serial, then deadlock is unlikely to occur. Deadlock only exists in the case of concurrency, and the database itself is a concurrently running program, so deadlock may occur.


The transaction in session B throws error 1213, meaning it encountered a deadlock. The cause is that sessions A and B are each waiting for resources held by the other. Most deadlocks can be detected by the InnoDB storage engine itself and require no human intervention.

The InnoDB storage engine does not roll back most errors, with deadlock being the exception: once a deadlock is detected, InnoDB immediately rolls back one of the transactions. This is worth noting: if error 1213 is caught in the application, the transaction has already been rolled back and no manual rollback is needed.