A brief introduction to Redis

1. A brief introduction to Redis

Redis is a non-relational (NoSQL) database written in C.
Redis stores its data in memory, so reads and writes are very fast.
One of the most common application scenarios for Redis is caching.

2. What is the processing flow of cached data?


1. If the data requested by the user exists in the cache, it is returned directly.
2. If it does not exist in the cache, check whether it exists in the database.
3. If it exists in the database, update the cache and return the data.
4. If it does not exist in the database either, return empty data.
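The four steps above can be sketched as a read-through lookup. This is a minimal illustration, with plain Python dicts standing in for Redis and for the database; the key and value names are hypothetical.

```python
cache = {}                        # stands in for Redis (in-memory cache)
database = {"user:1": "Alice"}    # stands in for the relational database

def get(key):
    # 1. If the requested data is in the cache, return it directly.
    if key in cache:
        return cache[key]
    # 2. Not in the cache: check whether it exists in the database.
    if key in database:
        # 3. It exists in the database: update the cache, then return it.
        cache[key] = database[key]
        return cache[key]
    # 4. Not in the database either: return empty data.
    return None
```

After the first lookup of `"user:1"` fills the cache, subsequent reads never touch the database.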

3. Why use Redis as a cache


High performance:
If the user accesses some data in the database for the first time, the process is relatively slow, since it is read from disk. However, if that data is accessed frequently and rarely changes, we can safely store it in the cache.

That way, the next time the user accesses the data, it can be fetched directly from the cache. Operating on the cache means operating directly on memory, so it is quite fast.

However, we must ensure consistency between the database and the cache: if the corresponding data in the database changes, the corresponding data in the cache should be updated synchronously.
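One common way to keep the two consistent is to write to the database first and then invalidate the cache entry, so the next read refills it with fresh data. A minimal sketch, again with dicts standing in for Redis and the database:

```python
cache = {"user:1": "Alice"}       # stands in for Redis
database = {"user:1": "Alice"}    # stands in for the relational database

def update(key, value):
    # Write the new value to the database first...
    database[key] = value
    # ...then drop the now-stale cache entry; the next read repopulates it.
    cache.pop(key, None)
```

With real Redis this invalidation step would be a `DELETE` of the key; deleting rather than rewriting the cache entry avoids serving a half-updated value if the two writes race.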

High concurrency:
Generally, a MySQL instance handles a QPS of around 10,000 (on a 4-core, 8 GB machine), but with a Redis cache it is easy to reach 100,000+ concurrent queries, and even up to 300,000+ (higher still with a Redis cluster).

QPS (Queries Per Second): the number of queries the server can execute per second.
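QPS can be estimated empirically by running a query in a loop for a fixed interval and counting completions. The sketch below is illustrative only; `measure_qps` and the no-op query are made-up names, not part of any Redis or MySQL API.

```python
import time

def measure_qps(query, duration=1.0):
    # Run `query` repeatedly until `duration` seconds elapse,
    # then report completed queries per second.
    deadline = time.perf_counter() + duration
    count = 0
    while time.perf_counter() < deadline:
        query()
        count += 1
    return count / duration
```

Pointing `query` at a cache lookup versus a disk-backed database query is what produces the roughly 10x gap described above.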

Therefore, the number of requests the cache can serve is far greater than what the database can handle directly, so we can consider moving some hot data from the database into the cache. Part of the users' requests then go straight to the cache without touching the database, which also increases the overall concurrency of the system.