MySQL Technical Insider Reading Notes (4): MySQL Indexes and Algorithms



1. Index

Indexing is an important aspect of application design and development. Too many indexes can hurt the application's performance; too few will hurt query performance. Finding the right balance between the two is critical to application performance.

1. InnoDB storage engine index overview

The InnoDB storage engine supports the following common indexes:

  • B+ tree index
  • Full-text index
  • Hash index

The hash index supported by the InnoDB storage engine is adaptive: the InnoDB storage engine automatically builds hash indexes for a table according to how the table is used. It is not possible to manually control whether a hash index is generated for a table.

The B+ tree index is the index in the traditional sense: it is the most commonly used and most effective index structure for lookups in current relational database systems. A B+ tree index is constructed similarly to a binary search tree, and data can be found quickly by key value.

A B+ tree index does not locate a specific row for a given key value; it can only find the page on which the row resides. The database then reads that page into memory, searches within the page in memory, and finally obtains the data being looked for.

2. Data structure and algorithm

2.1 Binary search method

Half-interval search is also called binary search. It is used to find a record in an ordered array of records. The basic idea is that the records are arranged in order (increasing or decreasing), and the search repeatedly compares against the midpoint of the ordered sequence: if the value being sought is less than the midpoint element, the search continues in the left half; otherwise it continues in the right half. Each comparison halves the search interval.

Suppose we have the 10 numbers 5, 10, 19, 21, 31, 37, 42, 48, 50, 52 and want to find the record 48 among them:
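The procedure can be sketched in Python; the `steps` counter is an illustrative addition to show how few comparisons are needed:

```python
def binary_search(arr, target):
    """Binary search over a sorted list; returns (index, comparisons) or (-1, comparisons)."""
    lo, hi = 0, len(arr) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2          # compare against the midpoint
        if arr[mid] == target:
            return mid, steps
        elif arr[mid] < target:
            lo = mid + 1              # continue in the right half
        else:
            hi = mid - 1              # continue in the left half
    return -1, steps

data = [5, 10, 19, 21, 31, 37, 42, 48, 50, 52]
print(binary_search(data, 48))        # (7, 2): found at index 7 after 2 comparisons
```

With this midpoint convention, 48 is found after only two comparisons, versus eight with a sequential scan.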


2.2 Binary search tree and balanced binary tree

The B+ tree evolved from the binary search tree, then the balanced binary tree, and then the B-tree.

Binary search tree:


In a binary search tree, the key values in the left subtree are always less than the key value of the root, and the key values in the right subtree are always greater than the key value of the root. Therefore, the key values can be output in sorted order through an in-order traversal; for the tree above, the in-order traversal yields: 2, 3, 5, 6, 7, 8.

A binary search tree can be constructed arbitrarily; the same six numbers 2, 3, 5, 6, 7, and 8 can also be arranged as follows:


The average number of lookups is (1+2+3+4+5+5)/6 ≈ 3.3, similar to a sequential scan. Obviously, the query efficiency of this binary search tree is low. Therefore, to construct a binary search tree with the best possible performance, the tree needs to be balanced, which leads to a new definition: the balanced binary tree, or AVL tree.

The definition of a balanced binary tree is as follows: first, it conforms to the definition of a binary search tree; second, the heights of the two subtrees of any node differ by at most 1.
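As an illustrative sketch (not an authoritative implementation), the balance condition can be checked with a naive height function; the tuple trees below encode the six keys from the examples above:

```python
def height(node):
    """Height of a tree encoded as (key, left, right) tuples; None is the empty tree."""
    if node is None:
        return 0
    return 1 + max(height(node[1]), height(node[2]))

def is_balanced(node):
    """True if, at every node, the two subtree heights differ by at most 1."""
    if node is None:
        return True
    return (abs(height(node[1]) - height(node[2])) <= 1
            and is_balanced(node[1]) and is_balanced(node[2]))

# A balanced arrangement of the keys 2, 3, 5, 6, 7, 8 from the text
balanced = (6, (3, (2, None, None), (5, None, None)),
               (7, None, (8, None, None)))
# A degenerate, linked-list-like arrangement of the same keys
skewed = (2, None, (3, None, (5, None, (6, None, (7, None, (8, None, None))))))

print(is_balanced(balanced), is_balanced(skewed))  # True False
```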

The lookup performance of a balanced binary tree is high, though not the highest; it is close to optimal. The best performance would require building an optimal binary tree, but building and maintaining one requires a large number of operations, so in practice a balanced binary tree is generally sufficient.

Queries on a balanced binary tree are indeed very fast, but the cost of maintaining one is high: generally, one or more left or right rotations are needed to restore the tree's balance after an insertion or update.


3. B+ tree

A B+ tree is a balanced search tree designed for disks and other direct-access secondary storage devices. In a B+ tree, all records are stored in leaf nodes on the same level, in key order, and the leaf nodes are linked together by pointers.


All records are on the leaf nodes and are stored in order. Traversing sequentially from the leftmost leaf node yields all key values in order: 5, 10, 15, 20, 25, 30, 50, 55, 60, 65, 75, 80, 85, 90.

3.1 Insert operation of B+ tree

The insertion of a key into a B+ tree must keep the records in the leaf nodes sorted. Three situations need to be considered, each of which may lead to a different insertion algorithm:

  • Leaf Page not full: insert the record into the leaf page directly.
  • Leaf Page full, Index Page not full: split the leaf page, putting the middle key into the index page; records smaller than the middle key stay in the left page, records greater than or equal to it go into the right page.
  • Leaf Page full, Index Page full: split the leaf page as above, and also split the index page around its middle key.


1. Take the B+ tree above as an example. If the user inserts the key value 28 and both the Leaf Page and the Index Page are not full, the key is simply inserted directly.


2. Next insert the key value 70. This time the target Leaf Page is full but the Index Page is not, which matches the second situation in the table. After insertion the leaf page would hold 50, 55, 60, 65, 70, so the leaf node is split around the middle value 60.


3. Finally, insert the key value 95. This matches the third situation in the table: both the Leaf Page and the Index Page are full, so two splits are required.

For clarity, the figures omit the doubly linked list pointers between the leaf nodes, but the pointers still exist.

It can be seen that however the tree changes, the B+ tree always remains balanced. However, maintaining that balance may require a large number of page split operations for newly inserted key values. Because the B+ tree structure is mainly used on disk, a page split means disk operations, so page splits should be minimized where possible. For this reason the B+ tree also provides a rotation operation, similar to that of a balanced binary tree.

Rotation happens when a Leaf Page is full but one of its left or right sibling nodes is not. In that case the B+ tree does not rush to split the page; instead it moves a record to the sibling page. Usually the left sibling is checked first for the rotation operation.


Take the above figure as an example, if the key value 70 is inserted, the B+ tree will not rush to split the leaf nodes, but will do the rotation operation.


The rotation operation reduces the page splitting operation of the B+ tree, and the height of the B+ tree is still 2.
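A toy model of this split-versus-rotate decision follows. The leaf capacity, list-of-lists page layout, and function name are all simplifying assumptions for illustration; real InnoDB pages work at the byte level and also maintain the index pages above:

```python
LEAF_CAPACITY = 4  # assumed tiny capacity; real pages hold far more records

def insert_into_leaf(leaves, idx, key):
    """Insert key into leaves[idx] (a list of sorted key lists).
    Prefer rotating a record to a non-full left sibling over splitting."""
    leaf = leaves[idx]
    if len(leaf) < LEAF_CAPACITY:              # room available: insert directly
        leaf.append(key); leaf.sort()
        return "direct"
    if idx > 0 and len(leaves[idx - 1]) < LEAF_CAPACITY:
        leaf.append(key); leaf.sort()          # rotate to the left sibling
        leaves[idx - 1].append(leaf.pop(0))    # move the smallest key left
        return "rotate"
    leaf.append(key); leaf.sort()              # no sibling space: split the page
    mid = len(leaf) // 2
    leaves[idx:idx + 1] = [leaf[:mid], leaf[mid:]]
    return "split"

leaves = [[50, 55], [60, 65, 70, 75]]
print(insert_into_leaf(leaves, 1, 80), leaves)
```

Inserting 80 into the full second leaf rotates 60 into the half-empty left sibling instead of splitting, which is the behavior described above.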

3.2 Delete operation of B+ tree

The B+ tree uses a fill factor to control deletions, and 50% is the minimum value the fill factor can take. A deletion must also keep the records in the leaf nodes sorted. As with insertion, deletion needs to consider three situations; the difference is that deletion is measured by changes in the fill factor.


4. B+ tree index

The essence of a B+ tree index is the implementation of a B+ tree in the database. The B+ tree index in a database has one key characteristic: high fan-out. Hence the height of the B+ tree is generally only 2 to 4 levels, which means finding the row for a given key value requires at most 2 to 4 IOs. That is quite good: a typical mechanical disk can do at least about 100 IOs per second, so 2 to 4 IOs mean the query takes only about 0.02 to 0.04 seconds.
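A back-of-the-envelope calculation shows why so few levels suffice. The fan-out and rows-per-leaf figures below are illustrative assumptions (rough ballparks for a 16KB page), not numbers from the text:

```python
# Hypothetical figures: a 16KB internal page holding ~1200 child pointers,
# and a leaf page holding ~100 full rows. Both are illustrative only.
FANOUT = 1200
ROWS_PER_LEAF = 100

def max_rows(height):
    """Rows reachable in a B+ tree of the given height (root is level 1)."""
    return FANOUT ** (height - 1) * ROWS_PER_LEAF

for h in (2, 3, 4):
    print(h, max_rows(h))
```

Under these assumptions a 3-level tree already covers over a hundred million rows, so one lookup costs at most one IO per level.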

The B+ tree indexes in a database can be divided into clustered indexes and secondary indexes. Whether clustered or secondary, the index is internally a B+ tree: height-balanced, with data stored in the leaf nodes. The difference between a clustered index and a secondary (non-clustered) index is whether the leaf nodes store the whole row.

4.1 Clustered Index

An InnoDB storage engine table is an index-organized table: the data in the table is stored in primary-key order. The clustered index builds a B+ tree from each table's primary key, and its leaf nodes store the row data of the entire table; for this reason the leaf nodes of the clustered index are also called data pages. This property of the clustered index means that the data of an index-organized table is itself part of the index. As in the B+ tree data structure, the data pages are linked through a doubly linked list.

Since the actual data pages can only be sorted according to one B+ tree, each table can have only one clustered index. In most cases the query optimizer prefers the clustered index, because it can find the data directly in the leaf nodes of the B+ tree. In addition, because the logical order of the data is defined, the clustered index makes range queries particularly fast: the optimizer can quickly determine that a certain range of data pages needs to be scanned.

Data pages store the complete record of each row, while the index pages (non-data pages) store only key values and pointers (offsets) to pages, not complete rows.


The storage of a clustered index is logically continuous rather than physically continuous. This has two aspects:

  • First, the pages mentioned above are linked through a doubly linked list, and the pages are ordered by primary key;
  • Second, the records within each page are also maintained through a doubly linked list, so the physical storage likewise need not follow primary-key order.

Another advantage of the clustered index is that sorting and range lookups on the primary key are very fast: the leaf-node data is exactly what the user wants to query. Suppose, for example, a table of registered users, and we want the last 10 users who registered: because the B+ tree leaf level is a doubly linked list, the last data page can be found quickly and the 10 records retrieved. Analyzing the query with EXPLAIN gives:

...
1 row in set (0.00 sec)

It can be seen that although the records are sorted with ORDER BY, no filesort operation occurs during actual execution, thanks to the characteristics of the clustered index.

The other case is the range query: to find data within a range of primary key values, the page range can be obtained from the intermediate nodes above the leaves, and the data pages can then be read directly. Another example:

mysql> EXPLAIN SELECT * FROM Profile
    -> WHERE id > 10 AND id < 10000\G
...
        Extra: Using where
1 row in set (0.01 sec)

Executing EXPLAIN produces the MySQL execution plan, and the rows column gives an estimate of the number of rows the query will return. Note that rows is an estimate, not an exact value. Actually executing the query shows there are only 9946 matching rows:

mysql> SELECT COUNT(*) FROM Profile
    -> WHERE id > 10 AND id < 10000;
+----------+
| COUNT(*) |
+----------+
|     9946 |
+----------+
1 row in set (0.00 sec)

4.2 Auxiliary index (non-clustered index)

For a secondary index (Secondary Index, also called a non-clustered index), the leaf nodes do not contain all of the row's data. Besides the key values, each index entry in a leaf node contains a bookmark that tells the InnoDB storage engine where to find the row data corresponding to the index entry. Since InnoDB tables are index-organized, the bookmark in an InnoDB secondary index is the clustered index key (primary key) of the corresponding row.


Secondary indexes do not affect the organization of data in the clustered index, so each table can have multiple secondary indexes. When looking up data through a secondary index, the InnoDB storage engine traverses the secondary index to obtain the primary key from the leaf-level bookmark, and then finds the complete row record through the clustered (primary key) index.
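A hypothetical in-memory model of this two-step lookup, where plain dictionaries stand in for the secondary index B+ tree and the clustered index (all names and rows below are made up for illustration):

```python
# Clustered index: primary key -> full row
clustered = {
    1: {"id": 1, "name": "Alice", "city": "Shanghai"},
    2: {"id": 2, "name": "Bob",   "city": "Beijing"},
    3: {"id": 3, "name": "Carol", "city": "Beijing"},
}
# Secondary index on `city`: indexed value -> list of primary keys (bookmarks)
secondary_city = {"Shanghai": [1], "Beijing": [2, 3]}

def lookup_by_city(city):
    """Walk the secondary index for the bookmark (primary key),
    then fetch the complete row from the clustered index."""
    return [clustered[pk] for pk in secondary_city.get(city, [])]

print([r["name"] for r in lookup_by_city("Beijing")])
```

Each element of the result required two lookups: one in the secondary structure and one bookmark lookup in the clustered structure, mirroring the two tree traversals described above.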

For example, if the height of a secondary index tree is 3, three traversals of that tree are needed to find the primary key. If the clustered index tree also has height 3, another three lookups are needed to reach the page containing the complete row. In total, therefore, six logical IO accesses are needed to reach the final data page.

Query OK, 4 rows affected (0.24 sec)
Records: 4  Duplicates: 0  Warnings: 0
mysql> UPDATE t SET c = 0 - a;
Query OK, 4 rows affected (0.04 sec)
Rows matched: 4  Changed: 4  Warnings: 0
mysql> ALTER TABLE t ADD KEY idx_c(c);
Query OK, 4 rows affected (0.28 sec)
Records: 4  Duplicates: 0  Warnings: 0
2 rows in set (0.00 sec)

The above shows the relationship between the secondary index idx_c and the clustered index in table t. You can see that the leaf nodes of the secondary index contain the value of column c together with the value of the primary key. Internally, -1 is stored as 7f ff ff ff: the most significant bit of 0x7f (0111 1111) is 0, which in this encoding indicates a negative value, and the actual value is obtained by inverting the bits and adding 1, giving -1.
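The encoding described above can be sketched in Python. This is a simplified model of the described behavior, not InnoDB's actual code: flipping the sign bit of the two's-complement representation means raw byte comparison preserves numeric order, which is why -1 appears as 7f ff ff ff:

```python
def int_key_bytes(value, width=4):
    """Encode a signed integer as described in the text: two's complement
    with the sign bit flipped, so plain byte comparison sorts correctly."""
    mask = (1 << (8 * width)) - 1       # e.g. 0xFFFFFFFF for width 4
    sign = 1 << (8 * width - 1)         # e.g. 0x80000000 for width 4
    return ((value & mask) ^ sign).to_bytes(width, "big")

print(int_key_bytes(-1).hex())  # 7fffffff
```

Note that under this encoding the byte strings for -2, -1, 0, 1 compare in the correct numeric order, which an unmodified two's-complement byte dump would not.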

4.3 Management of B+ Tree Index

There are two ways to create and delete indexes: ALTER TABLE, and CREATE/DROP INDEX. The syntax for creating an index through ALTER TABLE is:

ALTER TABLE tbl_name
ADD {INDEX|KEY} [index_name] (index_col_name, ...)

ALTER TABLE tbl_name
DROP {INDEX|KEY} index_name

The syntax of CREATE/DROP INDEX is also very simple:

CREATE [UNIQUE] INDEX index_name
ON tbl_name (index_col_name, ...)

DROP INDEX index_name ON tbl_name

The user can index an entire column's data, or only a prefix of a column. For the table t created above, column b is VARCHAR(8000), but the user can index just the first 100 characters, for example:

mysql> ALTER TABLE t
    -> ADD KEY idx_b(b(100));
Query OK, 4 rows affected (0.32 sec)
Records: 4  Duplicates: 0  Warnings: 0

To view the index information of a table, use the command SHOW INDEX. Using the previous table t with an added joint index idx_a_c on columns (a, c), we get:

...
5 rows in set (0.00 sec)

Through the command SHOW INDEX FROM, you can observe that there are 4 indexes on table t: the primary key index, the secondary index on column c, the secondary index on the first 100 characters of column b, and the joint secondary index on (a, c). The meaning of each column in the output is as follows:

  • Table: The name of the table the index belongs to.
  • Non_unique: Whether the index may contain duplicates. The primary key shows 0, because it must be unique.
  • Key_name: The name of the index; this is the name to use in DROP INDEX.
  • Seq_in_index: The position of the column within the index; this is clearest for the joint index idx_a_c.
  • Column_name: The name of the indexed column.
  • Collation: How the column is stored in the index. The value is A or NULL. A B+ tree index is always A, i.e. sorted. With the Heap storage engine and a hash index, NULL is shown, because a hash index places data into hash buckets rather than sorting it.
  • Cardinality: A very important value: an estimate of the number of unique values in the index. Cardinality divided by the number of rows in the table should be as close to 1 as possible; if it is very small, consider whether the index is worth keeping.
  • Sub_part: Whether only a prefix of the column is indexed. For idx_b this shows 100, meaning only the first 100 characters of column b are indexed; if the whole column is indexed, this field is NULL.
  • Packed: How the key is packed (compressed); NULL if not packed.
  • Null: Whether the indexed column may contain NULL values. For idx_b this is YES, because column b is defined to allow NULLs.
  • Index_type: The type of index. The InnoDB storage engine supports only B+ tree indexes, so this always shows BTREE.
  • Comment: Comments.

The Cardinality value is critical: the optimizer decides whether to use an index based on it. But it is not updated in real time, that is, not on every index change, because that would be too costly. Therefore the value is only a rough estimate, not an exact figure.

If you need to update the index Cardinality information, you can use the ANALYZE TABLE command, such as:

mysql> ANALYZE TABLE t\G
...
1 row in set (0.01 sec)

Cardinality may show NULL, and in some cases you may find that an index exists but is never used, or that EXPLAIN on two essentially identical statements gives different results: one uses an index and the other does a full table scan. The best remedy in such cases is ANALYZE TABLE. For this reason, I suggest running ANALYZE TABLE on the application's core tables during off-peak hours, which helps the optimizer and the indexes work better for you.

5. Cardinality value

5.1 What is Cardinality

Not every column that appears in query conditions needs an index. The general rule of thumb for when to add a B+ tree index is: it makes sense when queries access only a small portion of the table. For fields such as gender, region, or type, the range of possible values is very small; this is called low selectivity. For example:

SELECT * FROM student WHERE sex='M'

When querying by gender, the possible values are generally only 'M' and 'F', so the above SQL statement may return 50% of the data in the table (assuming a 1:1 male-to-female ratio); a B+ tree index is entirely unnecessary here. Conversely, if a field's values span a very wide range with almost no duplicates, that is, the field has high selectivity, then a B+ tree index is most suitable. For example, for a name field, an application mostly does not expect duplicate names.

How do we check whether an index is highly selective? Through the Cardinality column in the SHOW INDEX output. The Cardinality value is critical: it estimates the number of unique records in the index. Note that Cardinality is an estimate, not an exact value; an exact figure is basically unobtainable. In practice, Cardinality/n_rows_in_table should be as close to 1 as possible; if it is very small, consider whether the index is necessary at all.
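A rough illustration of low versus high selectivity, approximating Cardinality/n_rows_in_table on made-up sample data (real Cardinality is sampled, not computed exactly like this):

```python
def selectivity(values):
    """Approximate Cardinality / number-of-rows for a column's values."""
    return len(set(values)) / len(values)

sex = ["M", "F"] * 5000        # low selectivity: only 2 distinct values
user_id = list(range(10000))   # high selectivity: every value distinct

print(selectivity(sex), selectivity(user_id))
```

A gender-like column scores near 0 and is a poor index candidate; a unique identifier scores 1.0 and is an ideal one.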

Therefore, it is very necessary to add a B+ tree index to this field when accessing a field with a highly selective attribute and taking out a small part of the data from the table. Such as:

SELECT * FROM member WHERE usernick='David';

The table member has about 5 million rows of data. There is a unique index on the usernick field. At this time, if you find a user whose user name is David, you will get the following execution plan:

mysql> EXPLAIN SELECT * FROM member
    -> WHERE usernick='David'\G
...
1 row in set (0.00 sec)

It can be seen that the usernick index is used, which is consistent with the high-selectivity principle described earlier: the SQL statement selects only a few rows from the table.

5.2 Cardinality statistics of InnoDB storage engine

The premise for creating an index is that the column's data is highly selective, which is what makes the index practically useful to the database. But how does the database gather Cardinality statistics? Because MySQL supports various storage engines, each of which implements B+ tree indexes differently, Cardinality statistics are collected at the storage engine layer.

In a production environment, index update operations can be very frequent. If Cardinality statistics were recalculated on every change, the database would carry a heavy burden. Another consideration is that for a very large table, say one with 50GB of data, computing Cardinality could take a very long time, which is equally unacceptable in production. Therefore, databases compute Cardinality statistics by sampling (Sample).

6. The use of B+ tree index (emphasis)

6.1 The use of B+ tree index in different applications

As we know, databases serve two types of applications: OLTP and OLAP. In OLTP applications, a query fetches only a small part of the data, generally fewer than 10 records and in many cases just one: fetching user information by primary key, or order details by order number, are typical OLTP queries. In such cases a B+ tree index is meaningful only if queries through it retrieve a small portion of the table; otherwise, even if the index exists, the optimizer may choose not to use it.

In OLAP applications, a large amount of data in the table needs to be accessed, and query results are generated based on these data. Most of these queries are analysis-oriented queries for the purpose of providing support for decision makers.

6.2 Joint Index

A joint index is an index on multiple columns of a table; the previous discussion covered only single-column indexes. Creating a joint index is the same as creating a single-column index, the only difference being that there are multiple indexed columns. For example, the following creates a table t whose index idx_a_b is a joint index over the columns (a, b):

CREATE TABLE t (
    a INT,
    b INT,
    KEY idx_a_b(a, b)
) ENGINE=InnoDB;

So when is a joint index needed? Before discussing that, let's look at the internal structure of a joint index. In essence a joint index is still a B+ tree; the difference is that the number of key columns is not 1 but 2 or more. Consider a joint index over two integer columns, whose key columns are named a and b.


The data is stored in (a, b) order, and all of it can be read out logically through the leaf nodes; for this example: (1, 1), (1, 2), (2, 1), (2, 4), (3, 1), (3, 2).

Therefore, for the query SELECT * FROM TABLE WHERE a=xxx and b=xxx, it is obvious that the (a, b) joint index can be used. For a single column a query SELECT * FROM TABLE WHERE a=xxx, you can also use this (a, b) index.

But a query on column b alone, SELECT * FROM TABLE WHERE b=xxx, cannot use this B+ tree index: the b values at the leaf level are 1, 2, 1, 4, 1, 2, clearly not sorted, so the (a, b) index is of no use for queries on column b alone.
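This can be checked directly with the leaf-level key pairs from the example:

```python
# Leaf-level key order of the (a, b) joint index from the text
keys = [(1, 1), (1, 2), (2, 1), (2, 4), (3, 1), (3, 2)]

# The pairs are stored sorted by (a, b), so a lookup on `a` scans a range...
assert keys == sorted(keys)

# ...but projecting out `b` alone gives 1, 2, 1, 4, 1, 2: not sorted,
# which is why a query on `b` only cannot use this index.
b_values = [b for _, b in keys]
print(b_values, b_values == sorted(b_values))
```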

The second advantage of a joint index is that the second key column is already sorted within each value of the first. For example, an application often needs to query a user's purchases sorted by time, say the last three purchase records. A joint index then avoids an extra sorting operation, because the index's leaf nodes are already in the required order.

CREATE TABLE buy_log (
    userid INT UNSIGNED NOT NULL,
    buy_date DATE
) ENGINE=InnoDB;

INSERT INTO buy_log VALUES (1, '2009-01-01');
INSERT INTO buy_log VALUES (2, '2009-01-01');
INSERT INTO buy_log VALUES (3, '2009-01-01');
INSERT INTO buy_log VALUES (1, '2009-02-01');
INSERT INTO buy_log VALUES (3, '2009-02-01');
INSERT INTO buy_log VALUES (1, '2009-03-01');
INSERT INTO buy_log VALUES (1, '2009-04-01');

ALTER TABLE buy_log ADD KEY (userid);
ALTER TABLE buy_log ADD KEY (userid, buy_date);

1. Both indexes contain the userid column. If the query filters only on userid, for example:

SELECT * FROM buy_log WHERE userid=2;

possible_keys shows two candidate indexes here: the single-column userid index and the (userid, buy_date) joint index. But the optimizer ultimately chooses the single userid index, because its leaf nodes contain only one key value per entry, so a 16KB page can theoretically hold more records.

2. Next, suppose we want to retrieve the last 3 purchase records of userid 1:

SELECT * FROM buy_log
WHERE userid=1 ORDER BY buy_date DESC LIMIT 3;

For this statement, either the userid index or the (userid, buy_date) index could be used. But this time the optimizer chooses the (userid, buy_date) joint index userid_2, because buy_date is already sorted within that index for a given userid: retrieving the data through the joint index requires no additional sort on buy_date.

3. If the single-column userid index is forced instead:


You can see Using filesort in the Extra column, meaning an additional sorting operation is needed to complete the query: the buy_date column must be sorted, because within the userid index the buy_date values are unsorted.

6.3 Covering Index

The InnoDB storage engine supports covering indexes (covering index, or index covering): a query can be answered entirely from a secondary index, without reading the rows from the clustered index. One benefit of a covering index is that a secondary index does not hold full rows, so it is much smaller than the clustered index and can greatly reduce IO operations.

For InnoDB secondary indexes, since they contain the primary key, the data stored at the leaf level is (primary key1, primary key2, ..., key1, key2, ...). For example, all of the following statements can be completed with a single secondary joint index:

SELECT key2 FROM table WHERE key1=xxx;
SELECT primary key2,key2 FROM table WHERE key1=xxx;
SELECT primary key1,key2 FROM table WHERE key1=xxx;
SELECT primary key1,primary key2,key2 FROM table WHERE key1=xxx;

Another benefit of covering indexes is for certain statistical problems.


For such statistics, the InnoDB storage engine does not have to scan the clustered index. Since table buy_log has secondary indexes, which are much smaller than the clustered index, choosing a secondary index reduces IO operations, so the optimizer's choice is:


The possible_keys column is NULL, but during actual execution the optimizer chose the userid index, and Using index in the Extra column indicates a covering index operation (the secondary index is smaller, one page holds more entries, and the statistic does not need the full rows).

Normally, for a joint index (a, b), a query whose condition is on column b alone cannot use the index. But for a statistical operation that the index covers, the optimizer will still choose it, as in the following statement:

SELECT COUNT(*) FROM buy_log
WHERE buy_date >= '2011-01-01' AND buy_date < '2011-02-01';

The table buy_log has a joint index on (userid, buy_date). Here the condition is only on the second column, buy_date, so normally the joint index could not be used. However, this SQL query is a statistical operation and can use the covering-index information, so the optimizer chooses the joint index:


possible_keys is still NULL, but the key column shows userid_2, the (userid, buy_date) joint index. The Using index hint again appears in the Extra column, indicating a covering index.

6.4 The case where the optimizer chooses not to use an index

In some cases, running EXPLAIN on a SQL statement reveals that the optimizer does not use an index to find the data, but instead scans the clustered index, that is, the whole table. This typically happens with range queries and JOIN operations. For example:

SELECT * FROM orderdetails
WHERE orderid > 10000 AND orderid < 102000;

This SQL statement retrieves the order details with order numbers between 10000 and 102000. The table's indexes can be inspected with SHOW INDEX FROM orderdetails:


You can see that table orderdetails has a composite primary key on (OrderID, ProductID), plus a single-column index on OrderID. In principle, the above SQL could be answered by scanning the index on OrderID. However, EXPLAIN shows that the optimizer does not use that index:


The possible_keys column shows three candidate indexes: PRIMARY, OrderID, and OrdersOrder_Details. But in the end the optimizer chose the PRIMARY clustered index, that is, a table scan, rather than an OrderID secondary index scan.

Why is this? Because the user is selecting entire rows, and the OrderID index cannot cover the columns being queried: after locating entries in the OrderID index, a bookmark lookup is still needed to fetch the whole row. Although the OrderID index entries are stored sequentially, the bookmark lookups hit the clustered index out of order, turning into discrete disk reads. If the amount of data accessed is small, the optimizer still picks the secondary index; but when the accessed data makes up a large fraction of the table (typically around 20%), the optimizer switches to the clustered index. As noted before, sequential reads are much faster than discrete reads.

So when the index cannot cover the query, the optimizer chooses the secondary index only if it selects a small amount of data. This behavior is driven by the characteristics of traditional mechanical hard disks, where sequential reads replace random reads. If the database runs on a solid state drive, where random reads are very fast, and you are confident enough that the secondary index will perform better, you can force its use with the keyword FORCE INDEX, for example:

SELECT * FROM orderdetails FORCE INDEX(OrderID)
WHERE orderid > 10000 AND orderid < 102000;

6.5 Index hint

The MySQL database supports index hints (INDEX HINT), which explicitly tell the optimizer which index to use. Personally, I would summarize two situations where an index hint may be needed:

The MySQL optimizer picks the wrong index, causing the SQL statement to run very slowly. This is very rare in recent MySQL versions: the optimizer works efficiently and correctly in the vast majority of cases.

A SQL statement has many candidate indexes, so the time the optimizer spends choosing an execution plan may exceed the cost of the SQL statement itself. For example, the optimizer's analysis of a range query is itself a fairly time-consuming operation. In this case, the DBA or developer, having already determined the optimal index, can use an index hint to make the optimizer skip the cost analysis of each execution path and go straight to the specified index.

6.6 Multi-Range Read optimization

MySQL 5.6 began to support Multi-Range Read (MRR) optimization. Its purpose is to reduce random disk access by converting random access into more sequential access, which can greatly improve the performance of IO-bound SQL queries. MRR optimization can be applied to range, ref, and eq_ref type queries.

MRR optimization has the following advantages:

  • MRR makes data access more sequential. When querying a secondary index, the matching results are first sorted by primary key, and bookmark lookups are then performed in primary-key order.
  • It reduces the number of times pages in the buffer pool are replaced.
  • It batches query operations on key values.

For range queries and JOIN operations on the InnoDB and MyISAM storage engines, MRR works as follows:

  1. The secondary index key values found by the query are stored in a cache; the data in the cache is ordered by the secondary index key.
  2. The key values in the cache are sorted by RowID.
  3. The actual data files are accessed in RowID order.
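The three steps above can be sketched as a toy model (the index entries, RowIDs, and row contents below are invented for illustration; this is not InnoDB code):

```python
# Simulated MRR: secondary-index lookups return (key, rowid) pairs;
# rows are then fetched from the "data file" in rowid order, not key order.
secondary_index = [  # (key, rowid), ordered by index key
    (10, 905), (11, 103), (12, 512), (13, 7),
]
data_file = {7: 'row7', 103: 'row103', 512: 'row512', 905: 'row905'}

def mrr_fetch(key_lo, key_hi):
    # Step 1: buffer the matching index entries (already ordered by key)
    hits = [(k, rid) for k, rid in secondary_index if key_lo <= k <= key_hi]
    # Step 2: sort the buffered entries by RowID
    hits.sort(key=lambda kr: kr[1])
    # Step 3: access the data file in RowID (i.e. sequential) order
    return [data_file[rid] for _, rid in hits]

print(mrr_fetch(10, 13))  # rows come back in RowID order: row7 first
```

Without step 2, the four lookups would jump around the data file in key order (905, 103, 512, 7); after sorting, the file is read front to back.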

In addition, if the buffer pool of the InnoDB or MyISAM storage engine is not large enough to hold all the data of the table being scanned, frequent discrete reads will cause pages to be evicted from the buffer pool and then read back in again and again. If access is performed in primary-key order, this churn can be minimized. Consider the following query:

SELECT * FROM salaries WHERE salary > 10000 AND salary < 40000;

There is a secondary index idx_s on salary, so in addition to locating key values through the secondary index, a bookmark lookup is needed to fetch the whole row. With the Multi-Range Read feature disabled, the execution plan looks like this:


If the Multi-Range Read feature is enabled, then in addition to Using index condition, you will also see the Using MRR option in the Extra column:


In actual execution, you will see a huge difference in the execution times of the two:


In addition, Multi-Range Read can also split certain range queries into key-value pairs for batch data lookups. The advantage is that rows that do not meet the query conditions can be filtered out directly during the split, for example:

SELECT * FROM t
WHERE key_part1 >= 1000 AND key_part1 < 2000
AND key_part2 = 10000;

Table t has a composite index on (key_part1, key_part2), so the index entries are ordered by key_part1 first, then key_part2. Without Multi-Range Read, the query type here is Range, and the SQL optimizer first fetches all rows whose key_part1 is between 1000 and 2000, even those whose key_part2 is not equal to 10000, and only then filters the fetched rows by the key_part2 condition. This causes useless rows to be fetched. If there are many rows whose key_part2 is not equal to 10000, enabling Multi-Range Read optimization will greatly improve performance.

With Multi-Range Read optimization enabled, the optimizer first splits the query conditions and then performs the lookup. For the query above, the optimizer splits the conditions into (1000, 10000), (1001, 10000), (1002, 10000), ..., (1999, 10000), and finally queries the data according to these split conditions.
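The splitting can be illustrated with a short sketch (a simplified model of the optimizer's behavior, not actual MySQL code):

```python
def mrr_split(key_part1_range, key_part2_value):
    """Split a range condition on key_part1 plus an equality on key_part2
    into exact (key_part1, key_part2) pairs, as described above."""
    lo, hi = key_part1_range  # half-open range [lo, hi)
    return [(k1, key_part2_value) for k1 in range(lo, hi)]

conditions = mrr_split((1000, 2000), 10000)
print(conditions[0], conditions[-1])  # (1000, 10000) (1999, 10000)
```

Each split pair is an exact key-value probe into the composite index, so rows whose key_part2 is not 10000 are never fetched at all.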

SELECT * FROM salaries
WHERE (from_date BETWEEN '1986-01-01' AND '1995-01-01')
AND (salary BETWEEN 38000 AND 40000);

The table salaries has an index idx_s on salary. When the SQL statement above is executed with Multi-Range Read optimization enabled, the query conditions are split, so the Using MRR option can be seen in the Extra column.

6.7 Index Condition Pushdown (ICP) optimization

Like Multi-Range Read, Index Condition Pushdown is an index-based query optimization first supported in MySQL 5.6. In earlier versions, when performing an index lookup, the server first fetched records through the index and then filtered them against the WHERE conditions. With Index Condition Pushdown, MySQL evaluates applicable WHERE conditions while scanning the index; that is, part of the WHERE filtering is pushed down to the storage engine layer. For certain queries, this greatly reduces the number of records the upper SQL layer must fetch, improving overall database performance.

Index Condition Pushdown optimization supports range, ref, eq_ref, and ref_or_null type queries, and currently works with the MyISAM and InnoDB storage engines. When the optimizer chooses Index Condition Pushdown, the Using index condition hint appears in the Extra column of the execution plan.

Suppose a table has a composite index (zipcode, lastname, firstname), and the query is as follows:

SELECT * FROM people
WHERE zipcode = '95054'
AND lastname LIKE '%etrunia%'
AND address LIKE '%Main Street%';

For this statement, MySQL can locate the records whose zipcode equals '95054' through the index, but the index does not help with the lastname LIKE '%etrunia%' AND address LIKE '%Main Street%' conditions. Without Index Condition Pushdown, the database must fetch every record whose zipcode equals '95054' through the index, and then filter on the remaining two WHERE conditions.

With Index Condition Pushdown, the WHERE conditions are filtered during the index scan, and only then are the matching records fetched, which greatly improves query efficiency. Of course, only conditions on columns covered by the index can be filtered this way.
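The effect can be illustrated with a toy simulation of the zipcode example (the table data and function names are invented; this models the idea, not MySQL's implementation). It counts how many bookmark lookups the server performs with and without pushdown:

```python
# Hypothetical composite index entries: (zipcode, lastname, rowid),
# plus a row store mapping rowid -> the full row (including address,
# which is NOT in the index).
index = [
    ('95054', 'Petrunia', 1),
    ('95054', 'Smith', 2),
    ('95054', 'Petrunia', 3),
    ('99999', 'Petrunia', 4),
]
rows = {
    1: {'address': '12 Main Street'},
    2: {'address': '5 Oak Avenue'},
    3: {'address': '9 Elm Road'},
    4: {'address': '1 Main Street'},
}

def query(icp_enabled):
    fetched = 0      # bookmark lookups performed for the server layer
    results = []
    for zipcode, lastname, rowid in index:
        if zipcode != '95054':
            continue
        # With ICP, the engine checks the indexed lastname column
        # before doing the bookmark lookup.
        if icp_enabled and 'etrunia' not in lastname:
            continue
        row = rows[rowid]
        fetched += 1
        if 'etrunia' in lastname and 'Main Street' in row['address']:
            results.append(rowid)
    return results, fetched

print(query(False))  # ([1], 3): 3 rows fetched without pushdown
print(query(True))   # ([1], 2): same result, only 2 rows fetched
```

The address condition still has to be checked after the row is fetched, since address is not in the index; only the indexed lastname condition can be pushed down.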

SELECT * FROM salaries
WHERE (from_date BETWEEN '1986-01-01' AND '1995-01-01')
AND (salary BETWEEN 38000 AND 40000);

If Multi-Range Read optimization is not enabled, its execution plan is:


You can see that the Extra column shows the Using index condition hint. But why can the idx_s index use Index Condition Pushdown here? Because the primary key of this table is a composite key (emp_no, from_date), the idx_s index also contains the from_date values, so this optimization can be applied.


2. Hash algorithm

The hash algorithm is a common algorithm with O(1) time complexity, and it is found not only in indexes but throughout the database's architecture. Consider a problem: when a server has 128GB of memory, how does the database locate a particular cached page in memory? Although in-memory lookups are very fast, it is not feasible to scan all of memory on every access. Here the O(1) dictionary operations of a hash table are exactly what is needed.

1. Hash table

With hashing, an element with key k is stored in slot h(k); that is, a hash function h is used to compute the slot from the key k. The function h maps the universe of keys U to the slots of the hash table T[0..m-1].


Hash table technology solves the problems encountered with direct addressing, but introduces a small problem of its own: two keys may map to the same slot. This situation is generally called a collision. Databases generally use the simplest collision resolution technique, called chaining.

With chaining, all elements that hash to the same slot are placed in a linked list; slot j holds a pointer to the head of the list of all elements that hash to j. If there are no such elements, slot j is NULL.


The hash function h must distribute keys well. Ideally collisions are avoided entirely; where they cannot be, they should be minimized. Generally, keys are converted into natural numbers and then hashed using the division method, the multiplication method, or universal hashing. Databases generally use the division method.

With the division method, a key k is mapped to one of m slots by taking the remainder of k divided by m; that is, the hash function is:

h(k) = k mod m
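Combining the division method with the chaining scheme described above, a minimal hash table sketch might look like this (illustrative only):

```python
class ChainedHashTable:
    """Minimal sketch: division hashing (h(k) = k mod m) with chaining."""

    def __init__(self, m):
        self.m = m
        self.slots = [[] for _ in range(m)]  # each slot is a chain

    def insert(self, key, value):
        self.slots[key % self.m].append((key, value))

    def find(self, key):
        for k, v in self.slots[key % self.m]:  # walk the chain in slot h(key)
            if k == key:
                return v
        return None  # key not present

t = ChainedHashTable(7)
t.insert(10, 'a')
t.insert(17, 'b')  # 17 mod 7 == 10 mod 7 == 3: collision, same chain
print(t.find(17))  # prints 'b'
```

Choosing m as a prime helps spread keys more evenly across the slots, which is why databases pick a prime table size.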

2. The hash algorithm in the InnoDB storage engine

The InnoDB storage engine uses a hash algorithm for dictionary lookups; collisions are resolved by chaining, and the hash function uses the division method. For the hash table over buffer pool pages, each Page in the buffer pool has a chain pointer that links it to pages with the same hash value. For the division method, m is taken as a prime slightly larger than twice the number of buffer pool pages.

For example, if the parameter innodb_buffer_pool_size is 10M, there are 640 pages of 16KB each. The hash table over buffer pool pages then needs 640 × 2 = 1280 slots; since 1280 is not prime, a prime slightly larger than 1280 is taken, namely 1399, so a hash table with 1399 slots is allocated at startup to hash the buffer pool pages being looked up.

Each InnoDB tablespace has a space_id, and what the user wants to look up is a particular contiguous 16KB page of a tablespace, identified by its offset. InnoDB shifts space_id left by 20 bits, then adds space_id and offset, giving the key K = space_id << 20 + space_id + offset, and then hashes K into a slot using the division method.
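As a sketch, the key computation and slot mapping described above look like this (using the 1399-slot example; `buf_page_hash_key` is a made-up name, not InnoDB's actual function):

```python
def buf_page_hash_key(space_id, offset):
    # Fold (space_id, offset) into one key, as described above:
    # K = space_id << 20 + space_id + offset
    return (space_id << 20) + space_id + offset

M = 1399  # prime slightly larger than 2 x 640 pages (10MB pool example)

def buf_page_hash_slot(space_id, offset):
    # Division hashing: h(K) = K mod m
    return buf_page_hash_key(space_id, offset) % M

print(buf_page_hash_slot(1, 50))  # slot for page 50 of tablespace 1
```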

3. Adaptive hash index

The adaptive hash index is created and used by the database itself; the DBA cannot intervene. It maps values into a hash table via the hash function, so dictionary-style lookups such as SELECT * FROM TABLE WHERE index_col='xxx' are very fast, but it is of no help for range searches.

Through the command SHOW ENGINE INNODB STATUS, you can see the current usage of the adaptive hash index, such as:

Ibuf: size 2249, free list len 3346, seg size 5596,
374650 inserts, 51897 merged recs, 14300 merges
Hash table size 4980499, node heap has 1246 buffer(s)
1640.60 hash searches/s, 3709.46 non-hash searches/s

This output shows the adaptive hash index's size and usage, including the number of hash index searches per second. Note that the hash index can only be used for equality lookups, such as:

SELECT * FROM table WHERE index_col='xxx'

4. Hash index

A hash index is built on a hash table, so only queries that match all index columns exactly can use it. For each row of data, the storage engine computes a hash code over all the index columns; the hash code is a small value, and rows with different key values generally yield different hash codes. The hash index stores these hash codes, together with a pointer to each data row.


Only the MEMORY storage engine explicitly supports hash indexes.

Hash index limitations

  1. Hash index entries are not stored in index-value order, so a hash index cannot be used for sorting.
  2. Hash indexes do not support lookups on a prefix of the index columns, because the hash code is always computed from the entire content of the indexed columns.
  3. Hash indexes only support equality comparisons, including =, IN(), and <=>; they do not support range queries such as WHERE price > 100.