Table structure design
1. Primary key
In real projects, the primary key id is usually either a database auto-increment ID (type bigint) or an ID generated by the snowflake algorithm.
For small business volumes an auto-increment ID is fine; for large volumes the snowflake algorithm is recommended.
Disadvantages of auto-increment ids:
1. If auto-increment ids are exposed, business information is easy to infer (for example, order volume can be estimated from the gap between two ids)
2. Under high concurrency, InnoDB inserts by primary key cause noticeable lock contention: every insert appends at the upper bound of the index, so the rightmost page becomes a hot spot
3. Once a single table grows large enough to require splitting into multiple databases and tables, auto-increment ids collide across shards, which is troublesome to resolve
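The layout the snowflake algorithm uses can be sketched in a few lines. This is a minimal single-process illustration, not a production generator: the 41-bit timestamp / 10-bit machine id / 12-bit sequence split is the classic layout, but the custom epoch and class name are my own choices, and clock-rollback handling is omitted.

```python
import time
import threading

class Snowflake:
    """Minimal snowflake-style id sketch: 41-bit millisecond timestamp,
    10-bit machine id, 12-bit per-millisecond sequence."""
    EPOCH = 1577836800000  # 2020-01-01 UTC; an arbitrary custom epoch

    def __init__(self, machine_id: int):
        assert 0 <= machine_id < 1024  # must fit in 10 bits
        self.machine_id = machine_id
        self.sequence = 0
        self.last_ms = -1
        self.lock = threading.Lock()

    def next_id(self) -> int:
        with self.lock:
            now = int(time.time() * 1000)
            if now == self.last_ms:
                self.sequence = (self.sequence + 1) & 0xFFF  # 12-bit wrap
                if self.sequence == 0:   # 4096 ids used this ms: wait for the next ms
                    while now <= self.last_ms:
                        now = int(time.time() * 1000)
            else:
                self.sequence = 0
            self.last_ms = now
            return ((now - self.EPOCH) << 22) | (self.machine_id << 12) | self.sequence

gen = Snowflake(machine_id=1)
ids = [gen.next_id() for _ in range(1000)]
print(len(set(ids)) == 1000 and ids == sorted(ids))  # True: unique and increasing
```

Because the high bits are the timestamp, ids from one generator are roughly time-ordered, which keeps InnoDB inserts mostly append-only while avoiding a single shared counter.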
2. Foreign keys
Do not use foreign keys to link tables; they cause performance problems in high-concurrency scenarios:
1. Foreign keys hurt concurrency because MySQL checks the constraint immediately: every insert and update must look up the referenced table to verify the record exists
2. They couple tables tightly, which makes later splitting into multiple databases and tables difficult
3. Appropriate field type and length
Database resources are precious. Appropriate field types and lengths not only save table space and index space but, more importantly, speed up retrieval:
1. Use fixed-length types such as char/tinyint for fixed-length data
2. Use varchar for variable-length data whose maximum length is known
3. Don't use varchar/char to store long strings; use text, and move the long string into a separate table so the main table stays as thin as possible
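The "keep the main table thin" idea can be sketched with a pair of tables. The schema below is illustrative (table and column names are invented); sqlite3 is used only to make the example runnable, while the type comments follow MySQL conventions. Note the side table has no foreign key constraint, consistent with the advice above: the link is maintained at the application layer.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE article (
    id        INTEGER PRIMARY KEY,        -- bigint auto-increment in MySQL
    title     VARCHAR(100) NOT NULL,      -- bounded variable length -> varchar
    status    TINYINT NOT NULL DEFAULT 0, -- fixed length -> tinyint
    author_no CHAR(8) NOT NULL            -- fixed length -> char
);
CREATE TABLE article_content (
    article_id INTEGER PRIMARY KEY,       -- no FK constraint; linked in code
    body       TEXT NOT NULL              -- long string -> text, in a side table
);
""")
conn.execute("INSERT INTO article VALUES (1, 'hello', 0, 'A0000001')")
conn.execute("INSERT INTO article_content VALUES (1, 'a very long body ...')")
row = conn.execute(
    "SELECT a.title, c.body FROM article a "
    "JOIN article_content c ON c.article_id = a.id"
).fetchone()
print(row)  # ('hello', 'a very long body ...')
```

List pages scan only the thin `article` table; the heavy `body` is read just on the detail page.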
4. Field redundancy
Moderate redundancy of fields from other tables is allowed to improve query performance, but you must keep the copies consistent, and never duplicate long fields.
5. Field default value
Avoid NULL as a field default.
For MySQL, NULL makes index comparisons, index statistics, and value comparisons more complicated.
Rows with NULL are never matched by comparisons such as != and <>, which both hurts efficiency and easily causes bugs.
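The != pitfall is easy to demonstrate. sqlite3 stands in for MySQL here (the three-valued NULL semantics are the same); the table and data are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO student(name) VALUES (?)",
                 [("alice",), ("bob",), (None,)])
# Intuition says "2 rows are not alice", but NULL != 'alice' is unknown,
# so the NULL row silently drops out of the result.
count = conn.execute(
    "SELECT COUNT(*) FROM student WHERE name != 'alice'"
).fetchone()[0]
print(count)  # 1
```

With a NOT NULL column and a sensible default (e.g. empty string), the comparison behaves as readers expect.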
Index design
1. Covering index
For count and group by scenarios, use a covering index to improve query performance.
An index is like a book's table of contents: if the query touches only columns that are in the index, MySQL can produce the result by scanning the index structure alone, without reading table rows. For example, add an index to the student table:
ALTER TABLE `student` ADD INDEX index_name (name,gender);
SQL statements covered by this index:
select name,gender from student;
select name,gender from student where name='不高兴就喝水' and gender=1;
select name,gender from student group by name,gender;
select name,gender,count(1) from student group by name,gender;
select name from student group by name;
select name,count(1) from student group by name;
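You can verify coverage from the query plan. sqlite3 is used here purely as a runnable stand-in: its plan says "USING COVERING INDEX", whereas MySQL's EXPLAIN shows "Using index" in the Extra column. The schema mirrors the student table above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student "
             "(id INTEGER PRIMARY KEY, name TEXT, gender INTEGER, age INTEGER)")
conn.execute("CREATE INDEX index_name ON student(name, gender)")
# Query only touches (name, gender): answerable from the index alone.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT name, gender FROM student WHERE name = 'x'"
).fetchall()
print(any("COVERING INDEX" in row[3] for row in plan))  # True
```

Add a column the index doesn't hold (such as age) to the select list and the "covering" note disappears: the engine must go back to the table row for it.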
2. Composite index
When designing an index, prefer a composite index and put the most selective fields first.
So what counts as a highly selective field?
Run the statement below; suppose it returns 0.9, 0.1, 1000. The name column has the highest selectivity, so it goes first, i.e. create a composite index on (name, gender):
select count(distinct name) / count(*), count(distinct gender) / count(*), count(*) from student
According to the leftmost-prefix matching rule, SQL statements that can use this composite index include:
select name,gender from student where name='不高兴就喝水' and gender=1;
select name,gender from student where gender=1 and name='不高兴就喝水';
select name,gender from student where name='不高兴就喝水';
select name,gender from student where age=18 and name='不高兴就喝水';
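The selectivity calculation itself is easy to reproduce. The data below is synthetic (1000 students, unique names, two genders) and sqlite3 stands in for MySQL; the SQL is the same shape as the statement above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student "
             "(id INTEGER PRIMARY KEY, name TEXT, gender INTEGER)")
conn.executemany("INSERT INTO student(name, gender) VALUES (?, ?)",
                 [(f"stu{i}", i % 2) for i in range(1000)])
# distinct / total for each candidate column
name_sel, gender_sel, total = conn.execute(
    "SELECT COUNT(DISTINCT name) * 1.0 / COUNT(*),"
    "       COUNT(DISTINCT gender) * 1.0 / COUNT(*),"
    "       COUNT(*) FROM student"
).fetchone()
print(name_sel, gender_sel, total)  # 1.0 0.002 1000
```

name is far more selective than gender here, so the composite index is (name, gender), not (gender, name).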
3. Index failure
The following operations will make an index ineffective:
1. Applying calculations, functions, or type conversions to the indexed column
2. Violating the leftmost-prefix matching rule
3. LIKE patterns that start with a wildcard (for example: like '%water')
4. Implicit type conversion, e.g. the indexed field is a string but the query omits the quotes (name is varchar, but the query uses where name = 1)
5. OR conditions that involve non-indexed columns, etc.
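Two of these failure modes show up directly in query plans. sqlite3 is the runnable stand-in again (its plan prints SEARCH when an index seek is possible and SCAN for a full scan; MySQL's EXPLAIN would show type=ALL for the scans). Schema and data are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE INDEX idx_name ON student(name)")

def detail(sql: str) -> str:
    """Return the first line of the query plan."""
    return conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()[0][3]

print(detail("SELECT id FROM student WHERE name = 'x'"))
# SEARCH ...  -> index seek is used
print(detail("SELECT id FROM student WHERE lower(name) = 'x'"))
# SCAN ...    -> function on the column defeats the index
print(detail("SELECT id FROM student WHERE name LIKE '%water'"))
# SCAN ...    -> leading wildcard defeats the index
```

The fixes are the usual ones: compare against the bare column, and anchor LIKE patterns at the start ('water%') when possible.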
4. Unique index
For fields that must be unique, create a unique index even if uniqueness is already validated at the application layer.
Note on performance: a unique index queries slightly faster than an ordinary index, but inserts and updates are slightly slower because of the uniqueness check.
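Why the application-layer check alone is not enough: two concurrent requests can both pass the check before either inserts. The unique index is the last line of defense, as this sketch shows (sqlite3 standing in for MySQL; table and column names invented).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, phone TEXT)")
conn.execute("CREATE UNIQUE INDEX uk_phone ON account(phone)")

conn.execute("INSERT INTO account(phone) VALUES ('13800000000')")
try:
    # A second request with the same phone, having "passed" its own check:
    conn.execute("INSERT INTO account(phone) VALUES ('13800000000')")
    outcome = "duplicate slipped through"
except sqlite3.IntegrityError:
    outcome = "rejected by unique index"
print(outcome)  # rejected by unique index
```

In MySQL the equivalent failure raises error 1062 (duplicate entry), which the application can catch and translate into a friendly message.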
5. Prefix index on long strings
When indexing a long string field, specify a prefix length; indexing the full value is unnecessary.
Index length and selectivity pull against each other. For string fields, a prefix length of around 20 often already gives selectivity above 90%. You can measure selectivity with:
select count(distinct left(col_name, prefix_length)) / count(*) from table_name;
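Picking the prefix length is just a matter of raising n until selectivity stops improving. The data below is synthetic email-like strings; sqlite3's substr plays the role of MySQL's LEFT().

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (s TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [(f"user-{i:06d}@example.com",) for i in range(1000)])

for n in (4, 8, 12):
    sel = conn.execute(
        "SELECT COUNT(DISTINCT substr(s, 1, ?)) * 1.0 / COUNT(*) FROM t", (n,)
    ).fetchone()[0]
    print(n, sel)
# Short prefixes all share 'user-...' and are useless (selectivity ~0.001);
# once the prefix reaches the distinguishing digits, selectivity hits 1.0.
```

Here a prefix of 12 is as selective as the full column, so indexing more characters would only waste space.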
SQL statements
1. Logical deletion
Most of the time, a delete should be a logical delete, not a physical one.
Data is invaluable; in many cases its value is far higher than the labor cost of keeping it.
Production database accounts usually have no delete permission at all, precisely to prevent mistakes such as "deleting the database and running away".
Besides, a logical delete is an update, which performs better than a delete.
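The standard pattern is an is_deleted flag plus an UPDATE; every read query then filters on the flag. A minimal sketch (sqlite3 stands in for MySQL; the orders table is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount INTEGER, "
             "is_deleted TINYINT NOT NULL DEFAULT 0)")
conn.executemany("INSERT INTO orders(amount) VALUES (?)", [(100,), (200,)])

# "Delete" order 1 logically: an UPDATE, not a DELETE.
conn.execute("UPDATE orders SET is_deleted = 1 WHERE id = 1")

live = conn.execute("SELECT COUNT(*) FROM orders WHERE is_deleted = 0").fetchone()[0]
total = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(live, total)  # 1 2 -- hidden from reads, but still recoverable
```

The row is invisible to normal queries yet fully recoverable by flipping the flag back.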
2. Batch large IN queries
MySQL has no specific limit on the number of values in an IN list, but it does limit the total length of the SQL statement. Avoid IN over very large sets; if unavoidable, split the values into batches and query a bounded number of them at a time.
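Batching an id list is straightforward. A sketch (sqlite3 standing in for MySQL; the batch size of 500 is an arbitrary illustrative choice):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO student VALUES (?, ?)",
                 [(i, f"s{i}") for i in range(2000)])

def fetch_by_ids(ids, batch=500):
    """Run several bounded IN queries instead of one giant one."""
    rows = []
    for i in range(0, len(ids), batch):
        chunk = ids[i:i + batch]
        placeholders = ",".join("?" * len(chunk))
        rows += conn.execute(
            f"SELECT id, name FROM student WHERE id IN ({placeholders})", chunk
        ).fetchall()
    return rows

result = fetch_by_ids(list(range(1200)))
print(len(result))  # 1200
```

Each statement stays small and well within any length limit, and the results are simply concatenated.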
3. Use of inner join
In join queries, driving the large table with the small one is much more efficient, and inner join lets the optimizer automatically pick the small table as the driving table.
4. Triggers and stored procedures
Avoid triggers and stored procedures: they are hard to debug and extend, and they are not portable. These fringe features are best left unused.
5. Use of count
MySQL 5.7 optimizes count(*), so count(*) and count(1) now execute with the same efficiency.
count(field), however, is slower because each value must be read and checked, and rows where the field is NULL are not counted.
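The NULL-skipping behaviour of count(field) is easy to see (sqlite3 as the stand-in; semantics match MySQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO student(name) VALUES (?)",
                 [("a",), (None,), ("c",)])
# COUNT(*) and COUNT(1) count rows; COUNT(name) skips the NULL row.
star, one, field = conn.execute(
    "SELECT COUNT(*), COUNT(1), COUNT(name) FROM student"
).fetchone()
print(star, one, field)  # 3 3 2
```

If the intent is "number of rows", use count(*); reserve count(field) for when you really mean "number of non-NULL values".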
6. Avoid large transactions
Large transactions are transactions that run for a long time or touch a lot of data, and they hurt database performance. Split them into several small transactions wherever possible. Likewise, avoid writing overly complex SQL statements: besides tending to produce large transactions, they are hard for others to understand and maintain.
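One common way to split a large write transaction is to process rows in fixed-size chunks, committing after each chunk so locks are held only briefly. A sketch (sqlite3 standing in; the chunk size of 100 and the log table are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (id INTEGER PRIMARY KEY, done TINYINT DEFAULT 0)")
conn.executemany("INSERT INTO log(id, done) VALUES (?, 0)",
                 [(i,) for i in range(1000)])

# Instead of one UPDATE over all 1000 rows in a single long transaction,
# update 100 rows at a time and commit each batch.
while True:
    cur = conn.execute(
        "UPDATE log SET done = 1 "
        "WHERE id IN (SELECT id FROM log WHERE done = 0 LIMIT 100)"
    )
    conn.commit()  # short transaction per batch
    if cur.rowcount == 0:
        break

print(conn.execute("SELECT COUNT(*) FROM log WHERE done = 1").fetchone()[0])  # 1000
```

Each batch is idempotent over the remaining rows, so the loop can safely resume after an interruption.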
That's all I can think of for now; additions are welcome~