High Performance MySQL (Third Edition) Notes (Part 2)

Query performance optimization

Check whether the application is retrieving more data than it actually needs.

This usually means too many rows, or too many columns, are being fetched.
  1. Use LIMIT to cap the number of rows returned
  2. In join queries, select only the columns you need instead of SELECT *

Check whether the MySQL server layer is examining far more rows than the query requires.

  1. Use a covering index: put the columns the query needs into the index
  2. Change the schema (database/table structure)
  3. Rewrite the complex query

Break complex, large queries into multiple small queries

I used to think that network round trips and query parsing/optimization were expensive, but that assumption does not hold for MySQL. MySQL is designed so that connecting and disconnecting are very lightweight, and returning small result sets is efficient. Sometimes it is worth decomposing one large query into multiple small queries:
  1. For example, break a large DELETE statement into batches
  2. Decompose join queries into separate single-table queries
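As a sketch of the first point, a large DELETE can be run in bounded batches; the table and column names below (`messages`, `created_at`) are hypothetical:

```sql
-- Delete three months of history 10,000 rows at a time, instead of in
-- one huge statement that locks many rows and bloats the undo log.
DELETE FROM messages
WHERE created_at < DATE_SUB(NOW(), INTERVAL 3 MONTH)
LIMIT 10000;
```

The application repeats this statement until the affected-row count drops to zero, optionally pausing between batches to limit replication lag.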

Basic principles of query execution


The MySQL client and server communicate over a half-duplex protocol: at any moment, either the server is sending data to the client or the client is sending data to the server, never both at once.
Join queries

MySQL treats every query as a "join", not only queries that need to match rows between two tables; every query and every fragment (including subqueries, and even single-table queries) can be a join.
The essence of a join is a nested loop: a for loop nested inside another for loop.
In a join, you generally want an index on the join column of the second (inner) table.
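A minimal sketch of that rule, using hypothetical `customers`/`orders` tables: index the join column of the inner table, then check the plan with EXPLAIN.

```sql
-- Without this index, the nested loop would full-scan `orders`
-- once per row of `customers`.
ALTER TABLE orders ADD INDEX idx_customer_id (customer_id);

EXPLAIN
SELECT c.name, o.total
FROM customers AS c
INNER JOIN orders AS o ON o.customer_id = c.id;
-- The row for `o` should now show an index lookup (type=ref,
-- key=idx_customer_id) instead of a full scan (type=ALL).
```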

For subqueries: if the inner table is large, EXISTS is usually more appropriate, because EXISTS traverses only the outer table and, for each row, merely checks whether the correlated subquery returns true. IN, by contrast, nested-loops through the outer table and the inner table to find matches, so IN is the better choice when the inner table is small.
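The two forms look like this; the `customers`/`orders` tables are hypothetical:

```sql
-- Prefer EXISTS when the inner table (`orders`) is large:
SELECT c.name FROM customers AS c
WHERE EXISTS (SELECT 1 FROM orders AS o WHERE o.customer_id = c.id);

-- Prefer IN when the inner table is small:
SELECT c.name FROM customers AS c
WHERE c.id IN (SELECT o.customer_id FROM orders AS o);
```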

When using UNION, consider adding a LIMIT inside each clause to reduce the amount of data the server has to pull into the temporary table.
Use UNION ALL whenever possible: UNION places the results of each clause into a temporary table and then applies DISTINCT to filter out duplicates, whereas UNION ALL skips that deduplication step.
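Combining both tips on the sakila.film table (its `rating` column is part of the standard sample schema):

```sql
-- Each clause is limited before the union, so the temporary table holds
-- at most 40 rows; UNION ALL avoids the implicit DISTINCT pass.
(SELECT film_id, title FROM sakila.film WHERE rating = 'G'  ORDER BY title LIMIT 20)
UNION ALL
(SELECT film_id, title FROM sakila.film WHERE rating = 'PG' ORDER BY title LIMIT 20)
ORDER BY title LIMIT 20;
```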
When you need to count how many rows fall into each category of a column, you can use COUNT(expr OR NULL) or SUM(IF(expr, 1, 0)):
	COUNT(delete_status = 1 OR NULL) AS del,
	COUNT(delete_status = 0 OR NULL) AS normal
 --- or
	SUM(IF(delete_status = 1, 1, 0)) AS del,
	SUM(IF(delete_status = 0, 1, 0)) AS normal
 --- or, since boolean expressions evaluate to 1 or 0 in MySQL
	SUM(delete_status = 1) AS del,
	SUM(delete_status = 0) AS normal
In MySQL versions before 5.6, join queries generally perform better than subqueries, because the temporary table MySQL materializes for a subquery has no indexes.
GROUP BY optimization
If you do not explicitly specify sort columns with an ORDER BY clause, a query that uses GROUP BY automatically sorts the result set by the grouping columns. If you do not care about the order of the result set, this implicit sort can needlessly trigger a filesort; you can suppress it with ORDER BY NULL.
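Applied to the sakila.film table used below:

```sql
-- Counts films per rating without the implicit sort on `rating`,
-- avoiding a possible filesort when the order does not matter.
SELECT rating, COUNT(*) AS films
FROM sakila.film
GROUP BY rating
ORDER BY NULL;
```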
Optimize limit
Common limit queries are:
SELECT film_id, description FROM sakila.film ORDER BY title LIMIT 50, 5;

If the table is very large, then this query is best rewritten as follows:

SELECT film.film_id, film.description
FROM sakila.film
    INNER JOIN (
        SELECT film_id FROM sakila.film
        ORDER BY title LIMIT 50, 5
    ) AS lim USING(film_id);

The rewritten query lets the subquery use a covering index (title plus film_id) to find the ids, so MySQL scans as few pages as possible; the full rows are then fetched only for the five matching ids.

Optimization Strategy

If you can determine the position of the LIMIT window in advance, convert it into a range query on an indexed column. Alternatively, keep a "bookmark" recording the position of the last row fetched, so the next query can start scanning directly from the bookmarked position.
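A sketch of the bookmark technique on sakila.film, assuming pages are ordered by the primary key `film_id`:

```sql
-- First page:
SELECT film_id, description FROM sakila.film
ORDER BY film_id LIMIT 5;

-- Suppose the last film_id returned was 55 (the bookmark).
-- The next page starts scanning directly from that position:
SELECT film_id, description FROM sakila.film
WHERE film_id > 55
ORDER BY film_id LIMIT 5;
```

Unlike `LIMIT 55, 5`, the second query never generates and discards the first 55 rows.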

You can also use a cache: pre-fetch and store, say, one thousand rows to satisfy most queries, and load the next thousand rows into the cache only when a query needs more than that.