[Redis] Redis high concurrency processing strategy

In many real business scenarios, Redis is used as a cache: it offers high performance, a rich set of data structures, and a number of other advantageous features.

In daily business, the usual request-handling flow is: a request comes in, the system checks the cache first, falls through to the DB if the data is not found, writes the result back to the cache on a DB hit, and then returns the data. This pattern is called "lazy loading". Such a caching strategy generally causes no problems, but as the platform grows and the number of users keeps increasing, the team has to ask whether the system can hold up under high concurrency, and in particular how to design for a huge burst of requests arriving in an instant.


Under heavy read and write pressure, the database can be sharded (split into multiple databases and tables), but for cost reasons database nodes cannot be added indefinitely. Moreover, even then, database reads and writes are still not fast enough in high-concurrency scenarios to deliver the higher throughput and faster response times we are after. Given Redis's high performance, the question becomes what kind of cache-handling strategy to adopt.

1. Data warm-up (avoid lazy loading)

Suppose a scenario: just after midnight on Double Eleven, a flood of order requests arrives. If we are using the traditional "lazy loading" cache pattern, this burst of requests lands directly on the DB layer in an instant and the service goes down immediately; the cache never has a chance to take effect. Once the service hangs, it may drag down other related services and trigger a service avalanche.

One solution to the above scenario is data preheating (warm-up). Before a foreseeable surge of requests arrives, we write the related data into the Redis cache in batches, by hand or via a scheduled task, so that the bulk of the basic data requests never hit the database directly. This reduces the database's read/write pressure and indirectly protects it.
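As a minimal sketch of such a warm-up job (using the redis-py client; the key names, the one-hour TTL, and the `load_hot_products_from_db` loader are illustrative assumptions, not taken from the original):

```python
import json

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def load_hot_products_from_db():
    """Placeholder for the real DB query that returns the foreseeable hot rows."""
    return [{"id": 1, "name": "item-1"}, {"id": 2, "name": "item-2"}]

def warm_up_cache():
    # Write all hot rows in one pipeline to cut network round trips.
    pipe = r.pipeline()
    for row in load_hot_products_from_db():
        pipe.set(f"product:{row['id']}", json.dumps(row), ex=3600)
    pipe.execute()

if __name__ == "__main__":
    warm_up_cache()  # run by hand or from a scheduled job before the peak
```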

2. Stagger cache expiration times (avoid cache avalanche)

A cache avalanche occurs when the expiration times of cached data in Redis are too concentrated: at a certain moment, a large number of cache entries expire all at once. When mass cache invalidation coincides with high concurrency, a flood of requests breaks through the cache and hits the database directly.

One solution is to stagger the expiration times: set a base expiration time, then add a random number as an offset. This prevents expiration times from clustering, avoids mass cache invalidation at a single moment, and thereby avoids the cache avalanche problem.
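A minimal sketch of the idea, again assuming redis-py; the one-hour base TTL and the ten-minute jitter window are arbitrary illustrative values:

```python
import random

import redis

r = redis.Redis(decode_responses=True)

BASE_TTL = 3600  # base expiration: one hour
JITTER = 600     # random offset: up to ten extra minutes

def set_with_jitter(key, value):
    # Keys written together no longer expire together, so no single
    # moment sees a mass invalidation.
    r.set(key, value, ex=BASE_TTL + random.randint(0, JITTER))
```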

If the expiration times cannot be staggered, another solution is to fetch the latest data in batches before the cache expires and reset the TTL via a scheduled task. This guarantees the cached keys always exist, so no cache avalanche can occur.
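A rough sketch of such a scheduled refresh; the interval, TTL, and the `load_latest_hot_data` batch query are illustrative assumptions:

```python
import threading

import redis

r = redis.Redis(decode_responses=True)

REFRESH_INTERVAL = 3000  # seconds; deliberately shorter than the 3600s TTL

def load_latest_hot_data():
    """Placeholder for the batch DB query that returns the latest hot rows."""
    return [("product:1", "item-1"), ("product:2", "item-2")]

def refresh_hot_keys():
    # Reset each hot key's value and TTL before it can expire,
    # then schedule the next run.
    for key, value in load_latest_hot_data():
        r.set(key, value, ex=3600)
    threading.Timer(REFRESH_INTERVAL, refresh_hot_keys).start()
```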

3. Distributed lock

The methods above avoid mass cache invalidation, but each cached key still expires eventually, and the moment a hot key expires, a high-concurrency burst of requests will once again hit the database directly. At that point, a distributed lock can be used to control the cache-rebuild process and protect the database.


The idea behind this scheme: when a piece of hotspot data is found to have expired, a distributed lock is engaged. Of the flood of requests reaching the application, only the one thread that successfully acquires the lock is allowed to query the database and rebuild the cache; every other thread must wait for the rebuild to finish and then fetch the data from the cache. The database is thus protected.
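A sketch of this pattern, assuming a single Redis instance and the redis-py client; the key names, timeouts, retry budget, and the `load_from_db` callback are illustrative:

```python
import time
import uuid

import redis

r = redis.Redis(decode_responses=True)

def get_with_rebuild(key, load_from_db, ttl=3600):
    """Read-through where only one caller at a time may rebuild the cache."""
    for _ in range(200):  # bounded wait instead of spinning forever
        value = r.get(key)
        if value is not None:
            return value
        token = str(uuid.uuid4())
        # SET NX EX doubles as the distributed lock; the 10s expiry keeps
        # a crashed lock holder from blocking everyone indefinitely.
        if r.set(f"lock:{key}", token, nx=True, ex=10):
            try:
                value = load_from_db()  # only the lock holder hits the DB
                r.set(key, value, ex=ttl)
                return value
            finally:
                # Release only our own lock; a Lua script would make this
                # check-and-delete atomic in production.
                if r.get(f"lock:{key}") == token:
                    r.delete(f"lock:{key}")
        time.sleep(0.05)  # everyone else waits for the rebuild
    raise TimeoutError(f"cache rebuild for {key!r} timed out")
```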

4. Hash the data

If one piece of data becomes extremely hot and massive requests keep pouring in, a single Redis node may not be able to sustain such a load. In that case the data itself can be hashed across nodes.

The specific operation: make multiple copies of the hot key and hash them evenly across the nodes of the Redis cluster, so that requests for that data are spread evenly over those nodes, sharing the read pressure and easing the load on the cache.
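A minimal sketch of hot-key replication; the replica count and key-suffix scheme are assumptions, and a real Redis Cluster deployment would use a cluster-aware client such as redis-py's `redis.cluster.RedisCluster`:

```python
import random

import redis

r = redis.Redis(decode_responses=True)

REPLICAS = 8  # illustrative number of copies of the hot key

def write_hot_key(key, value, ttl=3600):
    # Store N copies under suffixed keys; in a Redis Cluster the different
    # suffixes hash to different slots and therefore different nodes.
    for i in range(REPLICAS):
        r.set(f"{key}:{i}", value, ex=ttl)

def read_hot_key(key):
    # Read a random replica so the traffic spreads across the nodes.
    return r.get(f"{key}:{random.randint(0, REPLICAS - 1)}")
```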

5. Defend against cache penetration

Cache penetration is the scenario where the business system receives a large number of malicious requests, or is attacked by hackers, querying data that exists in neither the cache nor the database; the cache then loses its role of protecting the database. To deal with this situation, the usual approaches are caching empty data or using a BloomFilter.

Cache empty data

As the name implies, a key whose database query returns no result is also stored in the cache, with a null placeholder as its value. When a subsequent query for that key arrives, the cache returns the null value directly, and the request never reaches the database.
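A minimal sketch of this read path; the sentinel value, `user:` key scheme, TTLs, and the `load_from_db` callback are illustrative assumptions:

```python
import redis

r = redis.Redis(decode_responses=True)

NULL_PLACEHOLDER = "__NULL__"  # illustrative sentinel for "not in the DB"

def get_user(user_id, load_from_db):
    key = f"user:{user_id}"
    value = r.get(key)
    if value == NULL_PLACEHOLDER:
        return None  # known miss: the DB is never touched
    if value is not None:
        return value
    value = load_from_db(user_id)
    if value is None:
        # Cache the miss with a short TTL so a later DB insert
        # becomes visible quickly.
        r.set(key, NULL_PLACEHOLDER, ex=60)
        return None
    r.set(key, value, ex=3600)
    return value
```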


However, there are some drawbacks to caching empty objects:

  • Caching null values means more keys are stored in the cache, and those keys themselves take up extra memory.
  • For a period of time the cache and the database may hold inconsistent data, which can affect the business. For example, a key originally cached with a null value is later given a real value in the database; if that update is not synchronized to the cache in time, the related business logic is affected.

In view of these problems: if it is certain that a key will never correspond to valid data, a fairly short expiration time can be set for such keys so that they are deleted automatically and their memory is freed. As for data consistency, various strategies can synchronize database updates to the cache in time; the most direct is to update the cache entry for a key every time the corresponding database data is updated.
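Continuing the hypothetical `user:` key scheme from the sketch above, the most direct synchronization is to refresh the cache in the same code path that writes the database:

```python
import redis

r = redis.Redis(decode_responses=True)

def update_user(user_id, new_value, save_to_db):
    save_to_db(user_id, new_value)  # write the database first
    # Then overwrite (or simply delete) the cached entry so readers
    # do not keep seeing a stale null placeholder.
    r.set(f"user:{user_id}", new_value, ex=3600)
```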

BloomFilter

The basic idea of a Bloom filter is that each element is represented by just a few bits in a bit array, which saves a great deal of space; the algorithm itself is not repeated here. Since Redis 4.0 introduced the modules API, a Bloom filter implementation (the RedisBloom module) can be loaded into Redis Server as a plug-in.


When a query request arrives, it first asks the Bloom filter whether the key exists. If the filter says no, the data definitely does not exist in the database either, so there is no need to query the cache at all: an empty result is returned immediately. If the filter says it exists, execution continues along the normal path: query the cache first, then query the database on a cache miss. (A Bloom filter can produce false positives but never false negatives, which is why a "yes" answer still has to go through the normal lookup.)
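A sketch of this flow, assuming the RedisBloom module is loaded and a filter named `all_valid_keys` has already been populated with every legitimate key (for example during warm-up, via `BF.ADD`); the filter name and `load_from_db` callback are illustrative:

```python
import redis

r = redis.Redis(decode_responses=True)

def get_with_bloom(key, load_from_db, ttl=3600):
    # BF.EXISTS returns 0 only for keys that were definitely never added,
    # so such requests are rejected without touching the cache or the DB.
    if not r.execute_command("BF.EXISTS", "all_valid_keys", key):
        return None
    value = r.get(key)  # normal flow: cache first ...
    if value is None:
        value = load_from_db(key)  # ... then the database on a miss
        if value is not None:
            r.set(key, value, ex=ttl)
    return value
```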

Caching empty data vs. BloomFilter

In some malicious attacks the queried keys are all different and arrive in huge volumes. Here the first scheme is at a serious disadvantage: it has to store a key for every piece of empty data, the attack keys are hard to predict, and each key is typically requested only once. Caching these empty-data keys therefore does nothing to protect the database, since they are never hit a second time.

Therefore, for scenarios where the empty-data keys vary widely and the probability of repeated requests for the same key is low, choose the second option (BloomFilter); for scenarios where the set of empty-data keys is limited and repeated requests for the same key are likely, choose the first option (caching empty data).