Global locks and table locks in MySQL

Various locks in MySQL

According to the scope of locking, locks in MySQL can be roughly divided into three categories: global locks, table-level locks, and row locks. This section focuses on global locks and table locks. Row locks are implemented at the storage-engine level (in InnoDB) and are special enough that we will cover them in a separate chapter.

Global lock

As the name implies, a global lock locks the entire database instance. MySQL provides a command to add a global read lock: Flush tables with read lock (FTWRL). Use this command when you need to put the entire instance into a read-only state. Afterwards, the following statements from other threads will be blocked: data update statements (INSERT, DELETE, UPDATE), data definition statements (CREATE TABLE, ALTER TABLE, and so on), and commits of update transactions.
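A minimal sketch of taking and releasing the global read lock, run against a MySQL server (the table name u_account is taken from the example later in this article):

```sql
-- Session 1: put the whole instance into read-only mode
FLUSH TABLES WITH READ LOCK;

-- Session 2: reads still work...
SELECT * FROM u_account LIMIT 1;
-- ...but any write blocks until the lock is released
UPDATE u_account SET balance = balance - 200 WHERE user_id = 1;  -- blocks

-- Session 1: release the lock (also released automatically
-- if this client disconnects)
UNLOCK TABLES;
```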

Use scenarios for global locks

The typical use scenario for a global lock is making a logical backup of the entire database, that is, selecting every table in the database and saving the results as text.

FTWRL ensures that no other thread can update the database while you back up the whole instance. Note that the entire instance is read-only for the whole duration of the backup.

But making the entire instance read-only sounds very dangerous:

  • If you back up on the primary, no updates can be performed during the backup, so the business essentially has to stop;
  • If you back up on a replica, the replica cannot apply the binlog synchronized from the primary during the backup, which causes replication lag. (The primary keeps serving business traffic while the replica is stalled, so the replica falls further and further behind.)

The need for global locks

It seems that adding a global lock is a bad idea. But think about it: why does the backup need a lock at all? Let's look at what goes wrong without one, that is, when the database keeps serving business traffic during the backup.

Now start a logical backup. Suppose that during the backup a user purchases a course: the business logic deducts the price from the user's balance, then adds the course to the user's purchased-course list.

What if the time sequence is: back up the account balance table (u_account) first, then the user makes the purchase, then back up the user course table (u_course)? You can look at this picture:


As you can see, the two writes on the primary are supposed to form one atomic operation, but this atomicity is destroyed because the two tables are backed up at different times. When u_account was backed up, user A still had the money and had not bought the course. By the time the backup reached u_course, the primary had already recorded the purchase and deducted the money, so the course is present in the backup of u_course. But the backup of u_account had already been taken with the 200 yuan still in the balance. The backed-up result is therefore inconsistent: the user didn't spend a dime, yet got a course for free.

Of course, if the order is reversed, the user may instead become the victim: money deducted, but no course recorded.

Why not use MVCC for backup

If you understand MVCC under the repeatable read isolation level, you must be wondering: why not just open a transaction and generate a read view at the start of the backup? Wouldn't that guarantee data consistency?

Consistent reads are good, but the premise is that the engine supports this isolation level. For an engine like MyISAM that does not support transactions, if updates happen during the backup, the backup always reads the latest data, which breaks the consistency of the backup. In that case we have to use the FTWRL command.

Therefore, the single-transaction method only works for databases in which every table uses a transactional engine (for example, tables implemented with InnoDB). If some tables use an engine that does not support transactions, the backup can only be done with FTWRL. This is one of the reasons DBAs often require business developers to use InnoDB instead of MyISAM.
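For example, the official mysqldump tool exposes both strategies; the host, user, and database names below are placeholders:

```sh
# Consistent backup in a single transaction (all tables must be InnoDB):
mysqldump --single-transaction -h host -u user -p mydb > backup.sql

# If some tables use a non-transactional engine such as MyISAM,
# fall back to taking a global read lock instead:
mysqldump --lock-all-tables -h host -u user -p mydb > backup.sql
```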

Business updates are not only DML (inserting, deleting, and updating data) but also DDL (adding columns, changing the table structure, and so on). Either way, once a database is globally locked, any attempt to add a column to any table in it will be blocked.

Table lock

There are two types of table-level locks in MySQL: ordinary table locks and metadata locks (MDL).

Ordinary table lock

The syntax is lock tables ... read/write. As with FTWRL, the lock can be released actively with unlock tables, or automatically when the client disconnects. Note that lock tables not only restricts what other threads can read and write, but also restricts what the current thread itself may subsequently access.

For example, if thread A executes lock tables t1 read, t2 write;, then statements from other threads that write t1, or that read or write t2, will be blocked. At the same time, until thread A executes unlock tables, it may only read t1 and read or write t2: even writing t1 is not allowed, and naturally it cannot access any other table.
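A sketch of these restrictions from thread A's point of view (table names t1, t2, t3 are placeholders):

```sql
-- Thread A
LOCK TABLES t1 READ, t2 WRITE;

SELECT * FROM t1;          -- OK: this thread holds a read lock on t1
SELECT * FROM t2;          -- OK
UPDATE t2 SET c = 1;       -- OK: this thread holds a write lock on t2
UPDATE t1 SET c = 1;       -- ERROR: t1 was locked with READ only
SELECT * FROM t3;          -- ERROR: t3 was not locked by this thread

UNLOCK TABLES;             -- release both locks
```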

In the absence of finer-grained locks, table locks were the most common way to handle concurrency. For an engine like InnoDB that supports row locks, the lock tables command is generally not used to control concurrency: the impact of locking an entire table is simply too large.

Presumably, in MySQL's earliest days, its developers found it hard to imagine that a single table could ever hold tens of millions of rows; and a table lock is effectively a row lock on every row of the table at once. The InnoDB developers considered this lock granularity far too coarse, so they implemented row locks to reduce the granularity and improve concurrency.

MDL (metadata lock)

Another type of table-level lock is the MDL (metadata lock). MDL does not need to be used explicitly; it is added automatically when a table is accessed. Its role is to guarantee the correctness of reads and writes. Imagine a query traversing the rows of a table while another thread changes the table structure mid-execution and drops a column: the query's results would no longer match the table structure, which clearly must not happen.

Therefore, MDL was introduced in MySQL 5.5. When you insert, delete, update, or select from a table, an MDL read lock is taken; when you change the structure of the table, an MDL write lock is taken.

How should we understand MDL read locks and write locks?

They work like the read-write locks used in concurrent programming: reads share with each other, while reads and writes are mutually exclusive. In other words, ordinary DML statements do not affect each other; a table-structure change is mutually exclusive both with other structure changes and with any DML on the table.

Note: this lock is added automatically by default; you cannot opt out of it.

The fair lock mechanism of MDL

The MDL mechanism made me wonder: wherever there is a read-write lock, there is the risk of starvation. Starvation here means: if a table already holds an MDL read lock, and new transactions keep arriving to take more read locks, won't my MDL write lock wait forever? I would have to wait for every DML statement on the table to finish before I could acquire the MDL write lock and change the table structure.

In fact, this does not happen, because MDL locks in MySQL are granted fairly: lock requests are served in the order in which they arrive. Consider the following example:


As shown in the figure, when transaction A executes its query, it takes an MDL read lock on table t. Transaction B is also ordinary DML, so it is not blocked. But transaction C needs the MDL write lock, so it can only block and wait.

Under my original (unfair) assumption, a newly arrived transaction D would immediately acquire an MDL read lock and run. In reality, transaction D is also blocked, because D's read request is queued behind C's write request. This demonstrates the fairness of MDL locks in MySQL: lock requests are served in the order the transactions arrive, so the starvation problem we worried about does not occur.
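The timeline in the figure can be reproduced as a four-session sketch (table t and its columns are placeholders):

```sql
-- Session A (first): DML takes an MDL read lock on t,
-- held until the transaction commits
BEGIN;
SELECT * FROM t LIMIT 1;

-- Session B: another MDL read lock; read locks share, so it runs
SELECT * FROM t LIMIT 1;

-- Session C: DDL needs the MDL write lock, so it queues behind A and B
ALTER TABLE t ADD COLUMN new_col INT;  -- blocks

-- Session D: a plain SELECT, yet it also blocks, because requests are
-- served in arrival order and C's write request is queued ahead of it
SELECT * FROM t LIMIT 1;               -- blocks until the ALTER finishes
```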

Why adding a column to a small table can hang the entire database

Remember that adding a column to a table, modifying a column, or adding an index requires scanning the whole table's data. When operating on large tables, you will naturally be especially careful to avoid affecting online services. But in fact, even on a small table, a careless operation can cause trouble. Let's continue with the example in the figure above, assuming table t is a small table.

Suppose transaction A or transaction B is a long transaction; then transaction C has to wait for the lock. The most serious consequence is that all DML on the table arriving after transaction C is blocked as well!

If queries on this table are frequent and the client has a retry mechanism, that is, it opens a new session and retries after a timeout, then the threads of this database will soon be exhausted.

You should understand by now that inside a transaction, an MDL lock is acquired when the statement starts executing, but it is not released when the statement ends; it is only released when the whole transaction commits.

How to safely add a column to a small table?

First of all, deal with long transactions: as long as a transaction is not committed, it keeps holding its MDL lock. You can find the currently running transactions in the innodb_trx table of MySQL's information_schema database. If a long transaction happens to be running on the table you want to alter, consider postponing the DDL, or killing the long transaction first.
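A sketch of finding long-running transactions before attempting the DDL (the 60-second threshold is an arbitrary example):

```sql
-- List transactions that have been running for more than 60 seconds
SELECT trx_id, trx_mysql_thread_id, trx_started,
       TIMESTAMPDIFF(SECOND, trx_started, NOW()) AS duration_sec
FROM   information_schema.innodb_trx
WHERE  TIMESTAMPDIFF(SECOND, trx_started, NOW()) > 60;

-- If necessary, kill the offending session by its thread id:
-- KILL <trx_mysql_thread_id>;
```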

But consider this scenario: the table you want to alter is a hot table. Its data volume is small, but requests against it are frequent, and you have to add a column. What should you do?

Killing transactions may not help here, because new requests arrive immediately. The ideal mechanism is to set a waiting time on the alter table statement: if the MDL write lock can be acquired within that time, great; if not, give up without blocking the business statements behind it, and have the developer retry the command later.

MariaDB has merged this feature from AliSQL, so both of these open source branches currently support the DDL NOWAIT/WAIT n syntax:

ALTER TABLE tbl_name NOWAIT add column ...
ALTER TABLE tbl_name WAIT N add column ...

Then just keep retrying until the MDL write lock is finally acquired!
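On stock MySQL, which lacks the WAIT/NOWAIT syntax, one can approximate this behavior with the session variable lock_wait_timeout, which bounds how long a statement waits for a metadata lock (the 5-second value is an arbitrary example):

```sql
-- Give up on the MDL write lock after 5 seconds instead of
-- blocking indefinitely (the default is very long)
SET SESSION lock_wait_timeout = 5;

-- If the lock cannot be obtained in time, this fails with a
-- lock-wait-timeout error rather than queueing behind long
-- transactions and blocking all later DML; retry it afterwards
ALTER TABLE t ADD COLUMN new_col INT;
```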