Synchronizing local locks and Redis locks


Local lock

Advantage: slightly faster, since no network round trip is involved.
Disadvantage: in a distributed deployment it can only lock the service on the current node, not the whole cluster.

Distributed lock

Compared with a local lock, it is more heavyweight.
Advantage: it can lock the services of all nodes.
Mode 1: implement the distributed lock with Redis setnx. You must guarantee the atomicity of locking, the atomicity of unlocking, and automatic renewal when the business outlives the lock timeout.
Mode 2: implement the distributed lock with Redisson.

Problems with implementing a distributed lock via Redis setnx

Setting a timeout is not enough on its own: what if an exception occurs between acquiring the lock and setting the timeout? The fix is to make the operation atomic by setting the value and the expiration time in a single command (SET key value NX EX seconds).
But suppose the expiration is set to 10 seconds while the business runs for 30 seconds. What can still go wrong?

  1. Business timeout: by the time the business finishes and deletes the lock, another thread B has already acquired it, so the delete actually removes B's lock.
Fix: set the value to a UUID; when deleting the lock, read the stored value first and delete only if it matches the UUID you set.
  1. Even then, fetching the UUID is a remote call. If it takes long enough, the key may expire and be re-acquired by someone else between the GET and the DEL, and the same problem recurs.
Fix: unlock with a Lua script, so that the check and the delete happen atomically:
String uuid = UUID.randomUUID().toString();
// Lua script: compare the stored value with our uuid and delete only on a match
String script = "if redis.call('get', KEYS[1]) == ARGV[1] then return redis.call('del', KEYS[1]) else return 0 end";
redisTemplate.execute(new DefaultRedisScript<Long>(script, Long.class), Arrays.asList("lock"), uuid);
  1. One problem remains: the business has not finished yet but the lock has already expired, so the lock needs automatic renewal.
  • Simple workaround: set the timeout long enough. Simple, but not the best solution.
  • Better: automatic renewal (this is what Redisson's watchdog does, see below).
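The acquire-and-release logic above can be sketched locally. In this sketch a ConcurrentHashMap stands in for Redis: putIfAbsent plays SET NX, and remove(key, value) plays the Lua compare-and-delete. This only illustrates the logic; against real Redis the compare-and-delete must be a Lua script to stay atomic, and expiration/renewal are not modeled here.

```java
import java.util.concurrent.ConcurrentHashMap;

// Local stand-in for the setnx + uuid lock pattern (NOT a distributed lock).
public class SetnxLockSketch {
  static final ConcurrentHashMap<String, String> store = new ConcurrentHashMap<>();

  // SET lock uuid NX -> true only if nobody holds the lock yet
  static boolean tryAcquire(String key, String uuid) {
    return store.putIfAbsent(key, uuid) == null;
  }

  // Atomic "delete only if the stored value is still ours"
  // (this is the Lua script's job when Redis is the store)
  static boolean release(String key, String uuid) {
    return store.remove(key, uuid);
  }
}
```

Note that release with the wrong uuid is a no-op, which is exactly the guarantee the Lua script provides: a thread whose lock already expired can no longer delete someone else's lock.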


Redis distributed lock with Redisson

Reference: the official Redisson documentation.
Using Redisson:

  1. Introduce the redisson dependency.

  1. Configure the bean:
public class MyRedissonConfiguration {

  /**
   * All operations on Redisson go through this object.
   * @return the shared RedissonClient
   * @throws IOException
   */
  @Bean(destroyMethod = "shutdown")
  public RedissonClient redisson() throws IOException {
    Config config = new Config();
    // e.g. point the client at your Redis instance
    config.useSingleServer().setAddress("redis://127.0.0.1:6379");
    return Redisson.create(config);
  }
}

Redisson automatic renewal

  1. The lock renews itself automatically: if the business runs long, the watchdog keeps extending the lease, so you do not have to worry about the lock expiring mid-business and being deleted. Each renewal extends the lease by the default of 30 seconds.
  2. Once the locked business completes, the lock is no longer renewed. Even if it is never deleted manually, it expires on its own after 30 seconds, so there is no deadlock.

Reentrant lock

public void test() {
    RLock lock = redissonClient.getLock("my-lock");
    lock.lock(); // blocks until the lock is acquired
    try {
      System.out.println("lock acquired, executing business... " + Thread.currentThread().getId());
    } finally {
      lock.unlock();
    }
}

If you use lock.lock(10, TimeUnit.SECONDS) to set a lease time, the lock will NOT be renewed automatically, so if the business runs longer than 10 seconds, problems occur. If you do not specify a time, the lease defaults to 30 seconds and the watchdog renews it automatically.
The tryLock method can additionally specify a maximum wait time for acquiring the lock; if the lock is not obtained within that time, it gives up instead of waiting forever.
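Redisson's RLock implements java.util.concurrent.locks.Lock, so the tryLock-with-wait-time semantics can be demonstrated with the JDK's local ReentrantLock. This is a stand-in only: a ReentrantLock is per-JVM, not distributed.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the tryLock(waitTime, unit) pattern using a local JDK lock.
public class TryLockSketch {
  // Returns true if the business ran, false if the lock was not acquired in time.
  static boolean runIfLockFree(ReentrantLock lock, long waitMillis, Runnable business)
      throws InterruptedException {
    if (!lock.tryLock(waitMillis, TimeUnit.MILLISECONDS)) {
      return false; // gave up instead of blocking indefinitely
    }
    try {
      business.run();
      return true;
    } finally {
      lock.unlock(); // always release in finally
    }
  }
}
```

The same shape works with an RLock obtained from redissonClient.getLock(...): swap the parameter type and the distributed semantics come for free.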

Best practice

It is still recommended to specify the lease time explicitly, e.g. lock.lock(30, TimeUnit.SECONDS), choosing a value that comfortably exceeds the business execution time, rather than relying on automatic renewal.

Cache consistency problem

How do we keep the data in the database consistent with the cache? When the database is modified, the database and the cache diverge. There are two solutions:

  1. Double-write mode: write to the database and also write to the cache.
  2. Failure (invalidation) mode: write to the database and delete the cache; the next read misses, fetches from the database, and repopulates the cache.

Write-mode options:
Read-write lock: suitable for read-heavy, write-light data; otherwise heavy lock contention makes everything slow.
Introduce canal + binlog to detect database updates and propagate them to the cache.
For data that is both read and written frequently, just query the database directly.

Both double-write mode and failure mode are unsafe under distributed multi-threading; they can be protected with a lock.
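A minimal sketch of failure mode, with two in-memory maps standing in for the database and the Redis cache. The names are hypothetical; real code would use a DAO and a RedisTemplate instead of maps, and would still need locking against the race described above.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Failure (invalidation) mode: write DB, delete cache; reads lazily repopulate.
public class FailureModeSketch {
  static final Map<Long, String> db = new ConcurrentHashMap<>();    // stand-in for the database
  static final Map<Long, String> cache = new ConcurrentHashMap<>(); // stand-in for Redis

  // Write path: update the database, then invalidate the cache entry.
  static void update(long id, String value) {
    db.put(id, value);
    cache.remove(id); // delete instead of writing the new value into the cache
  }

  // Read path: on a cache miss, load from the database and repopulate the cache.
  static String read(long id) {
    return cache.computeIfAbsent(id, db::get);
  }
}
```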


Many methods need the distributed cache, and writing all of this by hand each time is cumbersome. Spring Cache can be integrated to simplify the operation.

  1. Introduce dependencies
<!-- spring cache -->
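A typical Maven dependency block for this step might look like the following (a sketch assuming Spring Boot's starters, with versions managed by the Boot parent):

```xml
<!-- spring cache abstraction -->
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-cache</artifactId>
</dependency>
<!-- redis as the cache backend -->
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
```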

What auto-configuration already provides:
CacheAutoConfiguration imports RedisCacheConfiguration, which in turn auto-configures a RedisCacheManager.

What is left for us to do?
Configure the yaml:

spring:
  cache:
    type: redis
    redis:
      # cache TTL in ms; note these entries only take effect if CacheProperties is bound in the configuration class
      time-to-live: 3600000
      # prefix for cache keys
      key-prefix: cache_
      # enable the key prefix
      use-key-prefix: true
      # whether to cache null values; caching nulls helps prevent cache penetration
      cache-null-values: true

Using the cache

@Cacheable: triggers saving the method's result to the cache.

@CacheEvict: triggers deleting data from the cache (failure mode: clear the cache after a modification).

@CachePut: updates the cache without interfering with the method's execution (double-write mode: the returned data is written into the cache, so the method must have a return value).

@Caching: combines multiple of the above caching operations.

@CacheConfig: shares common cache settings at the class level.

  1. Enable caching
@EnableCaching
public class CouplingProductApplication { ... }
  1. Then a single annotation completes the cache operation:
// Cache into "category": if the cache already has the data, it is returned directly
// and the method does not run; on a miss the method runs and its result is stored in the cache.
@Cacheable(value = {"category"}, key = "'getLevel1Categories'")
public List<CategoryEntity> getLevel1Categories() {
  return baseMapper.selectList(new QueryWrapper<CategoryEntity>().eq("cat_level", 1));
}
  1. Configuration: e.g. serialize values as JSON
// binding CacheProperties is required, otherwise the yaml settings above do not take effect
@EnableConfigurationProperties(CacheProperties.class)
// enable caching
@EnableCaching
@Configuration
public class MyCacheConfiguration {

  @Bean
  public RedisCacheConfiguration redisCacheConfiguration(CacheProperties cacheProperties) {
    RedisCacheConfiguration redisCacheConfiguration = RedisCacheConfiguration.defaultCacheConfig()
        // serialize keys as plain strings
        .serializeKeysWith(RedisSerializationContext.SerializationPair.fromSerializer(new StringRedisSerializer()))
        // serialize values as JSON (the default is JDK byte serialization)
        .serializeValuesWith(RedisSerializationContext.SerializationPair.fromSerializer(new GenericFastJsonRedisSerializer()));

    CacheProperties.Redis redis = cacheProperties.getRedis();

    // the following mirrors what RedisCacheConfiguration itself does with the properties
    if (redis.getTimeToLive() != null) {
      redisCacheConfiguration = redisCacheConfiguration.entryTtl(redis.getTimeToLive());
    }
    if (redis.getKeyPrefix() != null) {
      redisCacheConfiguration = redisCacheConfiguration.prefixCacheNameWith(redis.getKeyPrefix());
    }
    if (!redis.isCacheNullValues()) {
      redisCacheConfiguration = redisCacheConfiguration.disableCachingNullValues();
    }
    if (!redis.isUseKeyPrefix()) {
      redisCacheConfiguration = redisCacheConfiguration.disableKeyPrefix();
    }

    return redisCacheConfiguration;
  }
}
  1. Clear the cache when modifying data
// the key is the method name used when caching; since it is a SpEL expression, string keys need single quotes
// delete a single entry:
//@CacheEvict(value = {"category"}, key = "'getLevel1Categories'")
// delete several entries, option 1:
//@Caching(evict = {@CacheEvict(value = {"category"}, key = "'getLevel1Categories'"), @CacheEvict(value = {"category"}, key = "'xxx2'")})
// delete several entries, option 2: evict everything under "category"
@CacheEvict(value = {"category"}, allEntries = true)
public void updateCascade(CategoryEntity category) {
  ...
}

Shortcomings of Spring Cache

Read mode: only when @Cacheable carries sync = true does it add a local synchronized lock around the cache miss.
So in read mode, @Cacheable with sync = true is enough to prevent cache breakdown: the lock is only local, but with e.g. 10 service nodes the database is queried at most 10 times, which is harmless, so no distributed lock is needed.
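For the read path, sync is just one extra attribute on the earlier annotation (a sketch; sync = true serializes concurrent misses per node):

```java
// sync = true makes concurrent misses on this node wait for a single loader thread,
// so a hot key that expires does not hammer the database (cache breakdown).
@Cacheable(value = {"category"}, key = "'getLevel1Categories'", sync = true)
public List<CategoryEntity> getLevel1Categories() {
    return baseMapper.selectList(new QueryWrapper<CategoryEntity>().eq("cat_level", 1));
}
```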

Write mode, however, is not managed by Spring Cache at all; there is no lock.

To summarize: regular data (read-heavy, write-light, with low immediacy and consistency requirements) can use Spring Cache.
Special data needs special handling, e.g. with canal.