Hello everyone, I've been preparing for an internship recently, so updates have been infrequent, but I've kept improving. Let's keep going.
Today's note: how do you ensure cache consistency?
As soon as you use a cache, you have dual storage and dual writes across the cache and the database, and dual writes inevitably bring data-consistency problems. So how do you solve them?
When only weak consistency is required:
Generally speaking, if you can tolerate the cache being occasionally and briefly inconsistent with the database (that is, your system does not strictly require "cache + database" to stay consistent), then it is best not to use the serialization solution: routing read and write requests through a single in-memory queue so they execute one at a time.
Serialization does guarantee that no inconsistency can occur, but it also drastically reduces the system's throughput; you would need several times more machines than normal to serve the same online traffic.
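The serialization idea above can be sketched in a few lines. This is a toy model, not a production design: `db` and `cache` are plain dicts standing in for a real database and a real cache, and a single worker thread plays the role of the memory queue's consumer.

```python
import queue
import threading

# Hypothetical stores: a "database" and a cache, both plain dicts.
db = {"k": "v1"}
cache = {}

ops = queue.Queue()  # every read and write goes through this one queue

def worker():
    # A single consumer executes operations strictly in arrival order,
    # so a read can never interleave with a write.
    while True:
        op, key, value, reply = ops.get()
        if op == "write":
            db[key] = value
            cache.pop(key, None)       # invalidate after the DB write
        else:                          # "read"
            if key not in cache:
                cache[key] = db[key]   # miss: load from DB, fill cache
            reply.put(cache[key])
        ops.task_done()

threading.Thread(target=worker, daemon=True).start()

def serialized_read(key):
    reply = queue.Queue(maxsize=1)
    ops.put(("read", key, None, reply))
    return reply.get()                 # blocks until the worker answers

def serialized_write(key, value):
    ops.put(("write", key, value, None))

serialized_write("k", "v2")
print(serialized_read("k"))  # → v2; no stale interleaving is possible
```

The cost is visible even in the sketch: every operation waits its turn behind the queue, which is exactly the throughput collapse described above.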
Cache Aside Pattern
Invalidation (miss): the application first looks for the data in the cache; on a miss it reads from the database and, on success, puts the result into the cache.
Hit: the application reads the data from the cache and returns it directly.
Update: the application first writes the data to the database and, on success, invalidates (deletes) the cache entry.
Everyone is familiar with the read path: read the cache first; if the cache misses, read the underlying database (or other storage), return the data, and set the cache.
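The three Cache Aside operations can be sketched as follows. Again `db` and `cache` are stand-in dicts for a real database and a real cache such as Redis:

```python
# Minimal Cache Aside sketch; `db` and `cache` are illustrative stand-ins.
db = {"user:1": "alice"}
cache = {}

def read(key):
    # Hit: return straight from the cache.
    if key in cache:
        return cache[key]
    # Miss: read from the database, then set the cache.
    value = db[key]
    cache[key] = value
    return value

def update(key, value):
    # Write the database first, then invalidate (delete) the cache entry;
    # the next read will re-load the fresh value on its miss.
    db[key] = value
    cache.pop(key, None)

print(read("user:1"))    # miss: loads "alice" from db and caches it
update("user:1", "bob")  # db now holds "bob"; cache entry is deleted
print(read("user:1"))    # miss again: loads and caches "bob"
```

Note that the update deletes the cache rather than rewriting it; the alternatives below show why the other orderings are problematic.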
The write path is where the controversy lies, and several variants circulate online. A brief analysis of each:
1. Update the database first, then update the cache
Suppose requests A and B both perform an update concurrently. The following interleaving can occur:
(1) Thread A updated the database
(2) Thread B updated the database
(3) Thread B updated the cache
(4) Thread A updated the cache
Logically, A's cache update should land before B's, but because of network delays and the like, B updates the cache before A does. The cache is left holding A's stale value while the database holds B's, i.e. dirty data, so this option is ruled out.
2. Delete the cache first, then update the database
The cause of the inconsistency here: suppose request A performs an update while request B concurrently performs a query. The following can happen:
(1) Request A (the write) deletes the cache
(2) Request B (the read) finds the cache empty
(3) Request B queries the database and gets the old value
(4) Request B writes the old value into the cache
(5) Request A writes the new value to the database
This thread-safety problem is usually solved with delayed double deletion or a similar scheme.
The general strategy is:
(1) Delete the cache first
(2) Then write the database (the same two steps as before)
(3) Sleep for x seconds, then delete the cache again
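The three steps above can be sketched like this. It is a toy model: `db` and `cache` are stand-in dicts, the delay value is arbitrary, and a `threading.Timer` plays the role of the delayed second delete so the write request itself does not block:

```python
import threading

# Hypothetical stand-ins for the real database and cache.
db = {"k": "old"}
cache = {"k": "old"}

DELAY = 0.1  # the "x seconds"; in practice, tune it to cover one read round-trip

def delayed_double_delete(key, value):
    cache.pop(key, None)          # (1) delete the cache
    db[key] = value               # (2) write the database
    def second_delete():
        cache.pop(key, None)      # (3) after x seconds, delete again to evict
                                  # any stale value a concurrent read re-filled
    timer = threading.Timer(DELAY, second_delete)
    timer.start()
    return timer

t = delayed_double_delete("k", "new")
cache["k"] = "old"                # simulate a racing read re-filling stale data
t.join()                          # wait for the delayed second delete to fire
print(cache.get("k"))             # → None: the stale entry is gone
```

The second delete is what cleans up the old value that request B wrote into the cache in step (4) of the race above; the next read then misses and loads the new value from the database.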
3. Update the database first, then delete the cache
So does Cache Aside have no concurrency problems at all? It does. For example: a read misses the cache and goes to the database for the data; at that moment a write comes in, updates the database, and then invalidates the cache; finally the earlier read finishes and writes the old data it fetched into the cache, leaving dirty data behind.
This case can occur in theory, but in practice the probability is very low, because it requires a read that misses the cache to overlap exactly with a concurrent write. A database write is much slower than a read and must take a lock, yet here the read would have to enter the database before the write and still update the cache after the write has finished. The odds of all these conditions holding at once are slim.
To sum up:
In distributed systems, you either use protocols such as 2PC or Paxos to guarantee consistency, or you work hard to reduce the probability of dirty data under concurrency.
Cache systems are suited to scenarios that do not require strong consistency, so they fall under AP in the CAP theorem and the BASE theory.
There is no way to make heterogeneous data stores strongly consistent; we can only shrink the inconsistency time window and settle for eventual consistency.
And don't forget to set an expiration time on cache entries; that by itself is a consistency safeguard.
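Why a TTL is itself a fix: it bounds how long any dirty entry can survive, no matter which race produced it. A toy in-memory cache with per-entry expiry (the class name and the chosen TTL values are illustrative, not from the original text):

```python
import time

class TTLCache:
    """Toy cache whose entries expire, forcing reads back to the database."""

    def __init__(self):
        self._data = {}  # key -> (value, expires_at)

    def set(self, key, value, ttl):
        self._data[key] = (value, time.monotonic() + ttl)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._data[key]  # expired: the next read goes to the DB
            return None
        return value

cache = TTLCache()
cache.set("k", "possibly-stale", ttl=0.05)
print(cache.get("k"))  # → possibly-stale (still within the TTL)
time.sleep(0.06)
print(cache.get("k"))  # → None: even a dirty value heals itself after the TTL
```

In a real deployment this corresponds to setting an expiry when writing the cache (for example, Redis's `SET key value EX seconds`), so eventual consistency holds even if a deletion is lost.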