1. Cache processing flow
2. Cache penetration
Cache penetration occurs when requests target data that exists in neither the cache nor the database, yet clients keep issuing them, for example requests for id = -1 or for an extremely large id that does not exist. Such requests may come from an attacker: because every one of them misses the cache and falls through to the database, the attack puts heavy pressure on the database, with a large number of requests landing directly on it.
1. Add validation at the interface layer, e.g. basic checks on the id: intercept the request immediately if id <= 0.
2. When a key is found in neither the cache nor the database, write a key-null entry into the cache anyway, with a short expiration time such as 30 s (setting it too long would keep the real value from being served once it exists). This prevents an attacker from repeatedly brute-forcing the same id against the database.
3. Use a Bloom filter to quickly reject keys that cannot possibly exist before they reach the cache.
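Mitigations 1 and 2 above can be sketched together. This is a minimal illustration using an in-memory dict as a stand-in for the cache and another for the database (in production these would be Redis and a real data store); all names here are hypothetical.

```python
import time

CACHE = {}                  # key -> (value, expires_at); stand-in for Redis
DB = {1: "alice", 2: "bob"} # stand-in for the real database

NULL_TTL = 30               # short TTL for "exists nowhere" markers, per the text
NORMAL_TTL = 300            # TTL for real values

def cache_get(key):
    """Return (value, hit); expired entries count as misses."""
    entry = CACHE.get(key)
    if entry is None:
        return None, False
    value, expires_at = entry
    if time.time() > expires_at:
        del CACHE[key]
        return None, False
    return value, True

def cache_set(key, value, ttl):
    CACHE[key] = (value, time.time() + ttl)

def get_user(user_id):
    # 1. Interface-layer validation: reject obviously invalid ids outright.
    if user_id <= 0:
        return None
    # 2. Try the cache; a cached None means "known to be absent".
    value, hit = cache_get(user_id)
    if hit:
        return value
    # 3. Fall through to the database.
    value = DB.get(user_id)
    if value is None:
        # Cache the miss with a short TTL so repeated requests for the
        # same non-existent id stop reaching the database.
        cache_set(user_id, None, NULL_TTL)
    else:
        cache_set(user_id, value, NORMAL_TTL)
    return value
```

After one miss for a non-existent id, subsequent requests for that id are answered from the null marker in the cache until the short TTL elapses.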
3. Cache breakdown
Cache breakdown occurs when a piece of data is missing from the cache but present in the database (typically right after its cache entry expires). If many concurrent requests for that data arrive at that moment, they all miss the cache and query the database at the same time, so database load spikes instantly and the database comes under excessive pressure.
1. Set hot data to never expire
2. Add a mutex lock so that only one request rebuilds the cache entry while the others wait
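The mutex approach can be sketched as follows: on a cache miss, only one caller acquires the lock and queries the database; the rest block, then re-check the cache instead of stampeding the database. The in-memory cache and the simulated database here are hypothetical stand-ins for Redis and a real store.

```python
import threading
import time

CACHE = {}                       # stand-in for the shared cache
REBUILD_LOCK = threading.Lock()  # mutex guarding the rebuild
DB_CALLS = 0                     # counts how often the "database" is hit

def load_from_db(key):
    """Simulated slow database query."""
    global DB_CALLS
    DB_CALLS += 1
    time.sleep(0.05)
    return f"value-for-{key}"

def get_with_mutex(key):
    value = CACHE.get(key)
    if value is not None:
        return value
    # Cache miss: only one caller rebuilds the entry.
    with REBUILD_LOCK:
        # Double-check after acquiring the lock: another thread may
        # already have repopulated the cache while we waited.
        value = CACHE.get(key)
        if value is None:
            value = load_from_db(key)
            CACHE[key] = value
    return value

# Ten concurrent requests for the same hot key that just expired.
threads = [threading.Thread(target=get_with_mutex, args=("hot",))
           for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Despite ten concurrent misses, the database is queried only once; with a distributed cache such as Redis, the same pattern is usually built on a distributed lock (e.g. SET NX with an expiration) rather than a process-local mutex.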
4. Cache avalanche
Cache avalanche refers to a large portion of the cache expiring at the same time, so subsequent requests fall directly on the database, which must absorb a massive number of requests in a short period, like an avalanche.
The difference: cache breakdown concerns a single piece of hot data, while cache avalanche means a large number of keys expire at the same time.
1. Randomize the expiration time of cached data to prevent a large number of keys from expiring simultaneously.
2. If the cache is deployed as a distributed cluster, distribute the hot data evenly across the different nodes.
3. Set hotspot data to never expire.
4. Use a Redis cluster so that the failure of a single machine does not take the entire cache service down.
5. Apply rate limiting to avoid handling a large number of requests at the same time.
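Mitigation 1 above, randomized expiration, is simple to implement: add jitter to a base TTL when writing each key. The base and jitter values below are illustrative assumptions, not recommendations from the text.

```python
import random

BASE_TTL = 600   # base expiration in seconds (hypothetical value)
JITTER = 300     # up to 5 extra minutes of random jitter

def expire_time(base=BASE_TTL, jitter=JITTER):
    # Spread expirations over [base, base + jitter] so keys written
    # in the same batch do not all expire in the same instant.
    return base + random.randint(0, jitter)

# A batch of 1000 keys written together now expires over a 5-minute window
# instead of all at once.
ttls = [expire_time() for _ in range(1000)]
```

With redis-py, for example, the jittered value would simply be passed as the `ex` argument when setting the key.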