MySQL In-Depth Study (5): Unique Index or Ordinary Index, How to Choose?

The opening question: in different business scenarios, should you choose an ordinary index or a unique index?

Suppose you are maintaining a citizen system. Everyone has a unique ID number, and the business code already guarantees that duplicate ID numbers will never be written. If the system needs to look up a name by ID number, it executes a SQL statement like this:
select name from CUser where id_card = 'xxxxxxxyyyyyyzzzzz';
  • Because the ID number field is relatively large, I do not recommend using it as the primary key. That leaves two choices: create a unique index on the id_card field, or create an ordinary index.

Introducing the topic: from a performance perspective, should you choose a unique index or an ordinary index? What is the basis for the choice?

  • Next, let's analyze how these two kinds of index affect the performance of query statements and update statements:

(1) Query process

Figure 1: B+ tree search process for k=5
Take the query statement select id from T where k=5. The search starts from the root of the B+ tree and descends level by level to a leaf node, which is the data page in the lower right corner of the figure. Within that data page, the record is then located by binary search.
  • For an ordinary index, after finding the first record that satisfies the condition, (5,500), it must continue to the next record, until it encounters the first record that does not satisfy k=5.
  • For a unique index, because the index guarantees uniqueness, the search stops as soon as the first matching record is found.
The difference:
An equality query on an ordinary index keeps traversing until it reaches the first non-matching value, while an equality query on a unique index stops at the first hit (the performance gap is minimal).
The InnoDB engine reads and writes data in units of data pages. In other words, when a record needs to be read, the record is not fetched from disk by itself; instead, the whole page containing it is read into memory. In InnoDB, each data page is 16KB by default. Since the next record is almost always on the same already-cached page, the extra scan of the ordinary index costs very little.
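The two lookup behaviours can be sketched in a few lines. This is a toy model under stated assumptions, not InnoDB source: `lookup` is a hypothetical helper, and a "page" is simply a sorted list of keys.

```python
# Toy sketch of an equality lookup within one sorted data page,
# contrasting an ordinary index with a unique index.
import bisect

def lookup(keys, target, unique):
    """Return (matching positions, number of records examined)."""
    i = bisect.bisect_left(keys, target)   # binary search inside the page
    matches, examined = [], 0
    while i < len(keys) and keys[i] == target:
        matches.append(i)
        examined += 1
        if unique:                         # uniqueness => stop at first hit
            break
        i += 1                             # ordinary index: keep scanning
    if not unique:
        examined += 1                      # also examines the first non-match
    return matches, examined

page = [1, 3, 5, 5, 5, 7, 9]               # an ordinary index can hold duplicates
print(lookup(page, 5, unique=False))       # examines 4 records: three 5s plus the 7
print(lookup([1, 3, 5, 7, 9], 5, unique=True))  # examines exactly 1 record
```

Both variants pay the same binary-search cost to find the first hit; the ordinary index merely inspects a few extra in-memory records, which is why the gap is negligible.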

(2) Update process

1) Introduction to change buffer:
  • When a data page needs to be updated: if the page is in memory, it is updated directly; if it is not, InnoDB caches the update operation in the change buffer without compromising data consistency, so the page does not have to be read from disk first. When a later query needs to access the page, the page is read into memory and the change-buffer operations recorded for that page are applied to it. This guarantees the correctness of the data logic.
  • Obviously, if an update operation can first be recorded in the change buffer, disk reads are reduced and the statement executes significantly faster. Moreover, since data read into memory occupies the buffer pool, this approach also avoids occupying memory and improves memory utilization: the change buffer itself only records the "update operation" and is much smaller than a 16KB data page.
  • Reading data from disk into memory involves random IO and is one of the most expensive operations in a database. Because the change buffer reduces random disk reads, the improvement in update performance is obvious. Note two restrictions, however: a unique index must read the target page into memory anyway to check for conflicts, so its updates cannot use the change buffer; and even for ordinary indexes, the change buffer only helps writes whose target page has not yet been loaded into memory.
2) Usage scenarios of change buffer:
  • Through the analysis above, we know the change buffer accelerates the update process, and that it applies only to ordinary indexes, not to unique indexes.
So, does an ordinary index speed things up in every scenario?
  • For write-heavy, read-light businesses, the probability that a page is accessed immediately after being written is small, and the change buffer works best. Typical examples of this business model are billing and logging systems.
  • If a business's update pattern is "query immediately after writing", then even though the update is first recorded in the change buffer, the data page will be accessed soon afterwards, immediately triggering the merge process. The number of random IO accesses is not reduced; instead, the cost of maintaining the change buffer is added. For this business model, the change buffer is counterproductive.
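The contrast between the two workloads can be made concrete with a toy model (everything here, including `run` and its bookkeeping, is a hypothetical sketch for illustration, not InnoDB internals): it counts random disk reads with and without a change buffer.

```python
# Toy model: count random disk reads under two workloads,
# with and without a change buffer.

def run(ops, use_change_buffer=True):
    memory, buffered = set(), []
    disk_reads = 0

    def load(page):
        nonlocal disk_reads
        if page not in memory:
            disk_reads += 1                # one random read from disk
            memory.add(page)
            # merge: apply and discard buffered changes for this page
            buffered[:] = [(p, c) for p, c in buffered if p != page]

    for op, page in ops:
        if op == "write":
            if not use_change_buffer or page in memory:
                load(page)                 # read page (if needed), update in place
            else:
                buffered.append((page, "insert"))  # just record the change
        else:                              # read: the page must be loaded
            load(page)
    return disk_reads

# write-heavy log workload: pages are written but rarely read back
log_ops = [("write", p) for p in range(10)]
# write-then-immediately-read workload: every write is followed by a read
hot_ops = [op for p in range(10) for op in (("write", p), ("read", p))]
```

On the log workload the change buffer eliminates all ten random reads (0 vs 10); on the read-after-write workload it saves nothing (10 reads either way) and adds the cost of maintaining the buffer, matching the "side effect" described above.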

(3) Insertion operation

  • Suppose a new record (4,400) is inserted into the table. What is InnoDB's processing flow?
1) The first case: the target page of the record to be updated is in memory. InnoDB's processing flow is:
  • For the unique index , find the position between 3 and 5, judge that there is no conflict , insert this value, and the statement execution ends;
  • For ordinary indexes , find the position between 3 and 5, insert this value, and the statement execution ends.
2) The second case: the target page of the record to be updated is not in memory. InnoDB's processing flow is:
  • For the unique index , the data page needs to be read into the memory, it is judged that there is no conflict, the value is inserted, and the statement execution ends;
  • For ordinary indexes , the update is recorded in the change buffer, and the statement execution ends.
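The two cases above can be sketched as one function (an illustrative assumption, not InnoDB's real code: pages are plain lists, and `key // 100` is a made-up page mapping).

```python
# Minimal sketch of the insert paths for a unique vs an ordinary index.

def insert(key, *, unique, memory_pages, disk_pages, change_buffer, io_log):
    page_id = key // 100                     # hypothetical key -> page mapping
    if page_id in memory_pages:              # case 1: target page is cached
        page = memory_pages[page_id]
    elif unique:                             # case 2, unique index: must read
        io_log.append("random read")         # the page to check for conflicts
        page = memory_pages[page_id] = disk_pages[page_id]
    else:                                    # case 2, ordinary index: only
        change_buffer.append((page_id, key)) # record the intended change
        return "buffered"
    if unique and key in page:               # uniqueness check in memory
        raise ValueError("duplicate key")
    page.append(key)
    return "applied"
```

The asymmetry is visible in the code: only the unique-index path on an uncached page is forced to pay for a random disk read before the statement can finish.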

(4) Choice of index

  • In fact, there is no difference in query capability between these two types of index; the main consideration is the impact on update performance. Therefore, prefer the ordinary index where possible.
  • However, if every update is immediately followed by a query of the same record, you should disable the change buffer (in MySQL, by setting innodb_change_buffering to none). In all other cases, the change buffer improves update performance.

(5) The difference between change buffer and redo log

1) What is WAL technology:
The full name of WAL is Write-Ahead Logging, and its key idea is: write the log first, then write to disk. Like a tavern keeper, you first jot the debt on the chalkboard and copy it into the ledger later when business is slow.
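The chalkboard analogy can be rendered as a toy write-ahead log (purely illustrative; `ToyWAL`, `write`, and `checkpoint` are made-up names under stated assumptions, not MySQL APIs): changes are appended sequentially to a log first and applied to the data pages in a batch later.

```python
# Toy WAL: sequential log append now, random page updates later.

class ToyWAL:
    def __init__(self):
        self.log = []          # append-only, sequential (the "chalkboard")
        self.pages = {}        # random-access data pages (the "ledger")

    def write(self, page_id, value):
        self.log.append((page_id, value))   # fast sequential append;
        # the actual page update is deferred

    def checkpoint(self):
        for page_id, value in self.log:     # applied in a batch when idle
            self.pages.setdefault(page_id, []).append(value)
        self.log.clear()

wal = ToyWAL()
wal.write(1, "a")              # returns immediately; pages untouched
wal.write(2, "b")
wal.checkpoint()               # pages now reflect both writes
```

The point of the design is that the expensive random page writes are batched and deferred, while the cheap sequential log append is what sits on the commit path.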
2) Analyze change buffer and redo log for the same process
  • Suppose we now execute the following insert statement on table t:
insert into t(id,k) values(id1,k1),(id2,k2);
  • Here we assume the current state of the k index tree is such that, after the insert positions are found, the data page containing k1 is in memory (the InnoDB buffer pool) while the data page containing k2 is not. Figure 2 shows the update flow with the change buffer.
Figure 2 Update process with change buffer

Four parts are involved here: memory (the InnoDB buffer pool), the redo log (ib_logfileX), the data tablespace (t.ibd), and the system tablespace (ibdata1).
Data tablespace: one data file per table; the corresponding disk file is "table_name.ibd";
System tablespace: stores system information such as the data dictionary; the corresponding disk file is ibdata1.
Analysis process (according to the numerical order in the figure):
  1. Page 1 is in memory, so it is updated directly in memory;
  2. Page 2 is not in memory, so the message "I want to insert a row into Page 2" is recorded in the change buffer (which lives in memory);
  3. Both of the above actions are recorded in the redo log (3 and 4 in the figure).
  • After these steps, the transaction is complete. The cost of executing this update statement is therefore very low: two writes to memory, plus one write to disk (the two redo-log entries are written together in a single write), and that disk write is sequential.
What happens with a subsequent read request?

For example, we are now going to execute select * from t where k in (k1, k2)

If the read occurs shortly after the update and the data is still in memory, the two read operations have nothing to do with the system tablespace (ibdata1) or the redo log (ib_logfileX), as the figure shows:

Figure 3 Reading process with change buffer

Analysis process (according to the numerical order in the figure):
  1. When reading Page 1, the result is returned directly from memory. No data is read from disk, and there is no need to apply the redo log first. Although the old data is still on disk, the result returned from memory is correct, which is exactly what this figure illustrates.
  2. When reading Page 2, the page must be read from disk into memory, after which the operations recorded for it in the change buffer are applied to produce the correct version, which is returned. As you can see, the data page is read into memory only at the moment Page 2 actually needs to be read.
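The whole walkthrough of Figures 2 and 3 can be replayed as a toy script (an illustrative assumption, not InnoDB code; the page names and structures are made up): insert k1 into a cached page and k2 into an uncached one, then read both back.

```python
# Toy replay of the update (Figure 2) and read (Figure 3) flows.

memory = {"Page1": [100]}          # Page 1 is in the buffer pool
disk = {"Page2": [200]}            # Page 2 exists only on disk
change_buffer, redo_log = [], []
disk_reads = 0

# --- update: insert (id1,k1),(id2,k2) ---
memory["Page1"].append("k1")                          # 1. update memory directly
change_buffer.append(("Page2", "k2"))                 # 2. record intent in change buffer
redo_log += [("Page1", "k1"), ("cb", "Page2", "k2")]  # 3-4. sequential log writes

# --- read: select * from t where k in (k1, k2) ---
def read(page):
    global disk_reads
    if page not in memory:
        disk_reads += 1                               # one random disk read
        memory[page] = disk[page]
        for p, v in [c for c in change_buffer if c[0] == page]:
            memory[page].append(v)                    # merge buffered changes
        change_buffer[:] = [c for c in change_buffer if c[0] != page]
    return memory[page]

assert read("Page1") == [100, "k1"]   # served from memory, no disk access
assert read("Page2") == [200, "k2"]   # one disk read, then merge
```

The script ends with exactly one random disk read and an empty change buffer, mirroring the figures: the update itself never touched the data tablespace, and the merge happened only when Page 2 was finally read.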
3) Difference
  • The redo log mainly saves the IO cost of random disk writes (by converting them into sequential writes), while the change buffer mainly saves the IO cost of random disk reads.
  • The redo log is a journal that guarantees transaction durability; the change buffer is an optimization of the update process for secondary index pages in the B+ tree.

(6) Thinking questions

  • 1) To actually flush the data to disk, is it done by the change buffer or by the redo log?
  • Neither, strictly speaking: in normal operation, dirty pages are flushed from the buffer pool (memory) directly to disk; the redo log is only replayed during crash recovery.
  • 2) What exactly is a buffer pool?
  • As the name implies, hard disks are orders of magnitude slower than memory for reads and writes. If every read or write had to load the corresponding data page from disk, database efficiency could never improve. To resolve this dilemma, almost all databases ship with a buffer pool as standard equipment: a large block of memory that the engine maintains with hit-rate and eviction algorithms. The change buffer goes a step further: an update can return its result as soon as the in-memory change is made, and records satisfying the consistency constraints (explicitly or implicitly defined) are also kept temporarily in the buffer pool, which greatly reduces the probability of disk IO operations.
  • 3) Figure 2 shows that the change buffer is initially written only in memory. If the machine loses power and restarts at that moment, will the change buffer be lost? Losing it would not be a trivial matter: reading the data from disk would then skip the merge process, which is equivalent to losing data. Can this happen?
  • It will not corrupt the database: only the operations of transactions that had not yet committed are lost, and the operations of committed transactions are not lost.
  1. At that moment the change buffer may contain two kinds of operations: those whose redo log has not yet been written, and those whose redo log has.
  2. For the former, the transaction has not been committed, so losing them has no effect.
  3. For the latter, they can be restored from the redo log after restart.
  • In addition, the change buffer itself is persisted into the system tablespace to ensure consistency, and every modification to the change buffer is also recorded in the redo log, so the change buffer can be rebuilt from the redo log during recovery. If the change buffer were not persisted at all, then after a power failure its in-memory contents could not be recovered, and the index would become inconsistent with the data in the corresponding table columns.