Redis non-relational database

Redis

Introduction to Redis

1 Introduction

1. Due to the large number of users, request volume has increased and the pressure on the data layer is too high.
2. Data is not synchronized between multiple servers.
3. Locks on different servers are no longer mutually exclusive.

2.NoSQL

Redis is a NoSQL database.
NoSQL -> non-relational database -> "not only SQL"
  • 1. Key-value: Redis
  • 2. Document: ElasticSearch, Solr, MongoDB
  • 3. Column-oriented: HBase, Cassandra
  • 4. Graph: Neo4j
NoSQL is just an umbrella concept: broadly, any database that is not relational counts as a non-relational (NoSQL) database.

3. Introduction to Redis

An Italian developer, while building the LLOOGG statistics page, found MySQL's performance insufficient and wrote a non-relational database of his own, naming it Redis.
Redis (Remote Dictionary Server) is a remote dictionary service written in C. It is a key-value NoSQL store that keeps data in memory, and it also provides several persistence mechanisms. Its performance can reach roughly 110,000 reads/s and 81,000 writes/s. Redis also provides master-slave, sentinel and cluster deployment modes, which make horizontal and vertical scaling convenient.

Redis installation

1. Install Redis

version: '3.1'
services:
  redis:
    image: daocloud.io/library/redis:5.0.7
    restart: always
    container_name: redis
    environment:
      - TZ=Asia/Shanghai
    ports:
      - 6379:6379

2. Use redis-cli to connect to redis

Enter the redis container:
docker exec -it <container id> bash
Inside the container, connect with redis-cli:
redis-cli

3. Use a graphical interface to connect to redis


Redis common commands

1. Redis storage data structure

5 commonly used data structures:
  • key-string: a key corresponds to one value (the most common, generally used to store a single value)
  • key-hash: a key corresponds to a map (stores an object)
  • key-list: a key corresponds to a list (the list structure can implement stacks and queues)
  • key-set: a key corresponds to a set (supports intersection, difference and union operations)
  • key-zset: a key corresponds to a sorted set (leaderboards)
3 other data structures:
  • HyperLogLog: approximate cardinality counting
  • GEO: geographic coordinates
  • BIT: operates on a string's underlying byte[] as a bitmap
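As a small illustration of the BIT structure, a redis-cli session might look like this (the key name is just an example):

```
setbit online:2024-01-01 5 1   # mark user 5 as online (sets bit 5 of the byte[])
getbit online:2024-01-01 5     # -> 1
bitcount online:2024-01-01     # -> 1, the number of bits set to 1
```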

2.String commonly used commands

# 1. set a value
set key value
# 2. get a value
get key
# 3. batch operations
mset key value [key value ...]
mget key [key ...]
# 4. increment by 1
incr key
# 5. decrement by 1
decr key
# 6. increment/decrement by a given amount
incrby key increment
decrby key increment
# 7. set a value and a time-to-live at the same time (whenever you add data to redis, try to set a TTL)
setex key seconds value
# 8. set a value only if the key does not exist (if the key exists, nothing happens; if not, it behaves like set)
setnx key value
# 9. append content to the value of a key
append key value
# 10. get the length of the string value
strlen key
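A short redis-cli session tying the commands above together (key names and values are examples):

```
set counter 10         # OK
incr counter           # -> 11
incrby counter 5       # -> 16
setex session:1 30 ok  # the value "ok" expires after 30 seconds
append greeting "he"   # -> 2 (append creates the key if missing; returns new length)
append greeting "llo"  # -> 5
strlen greeting        # -> 5
```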

3.hash commonly used commands

# 1. store data
hset key field value
# 2. get data
hget key field
# 3. batch operations
hmset key field value [field value ...]
hmget key field [field ...]
# 4. increment a field by a given amount
hincrby key field increment
# 5. set a field only if it does not exist (if key-field already exists, nothing happens)
hsetnx key field value
# 6. check whether a field exists
hexists key field
# 7. delete one field of a key
hdel key field
# 8. get all fields and values of the hash
hgetall key
# 9. get all fields of the hash
hkeys key
# 10. get all values of the hash
hvals key
# 11. get the number of fields in the hash
hlen key

4.list commonly used commands

# 1. store data (push from the left / push from the right)
lpush key value [value ...]
rpush key value [value ...]
# 2. push only if the key already exists and is a list (otherwise nothing happens)
lpushx key value
rpushx key value
# 3. modify the element at a given index
lset key index value
# 4. pop data (from the left / from the right)
lpop key
rpop key
# 5. get a range of elements (start begins at 0; a stop of -1 means the last element, -2 the second to last)
lrange key start stop
# 6. get the element at a given index
lindex key index
# 7. get the length of the list
llen key
# 8. remove elements (remove count occurrences of value; count > 0 scans left to right, count < 0 right to left, count == 0 removes all)
lrem key count value
# 9. keep only the given range (everything outside it is removed)
ltrim key start stop
# 10. pop the last element of one list and push it onto the head of another
rpoplpush list1 list2

5.set common commands

# 1. store data
sadd key member [member ...]
# 2. get all members
smembers key
# 3. randomly pop members (they are removed as they are returned; count defaults to 1)
spop key [count]
# 4. intersection of several sets
sinter set1 set2 ...
# 5. union of several sets
sunion set1 set2 ...
# 6. difference between sets
sdiff set1 set2 ...
# 7. remove members
srem key member [member ...]
# 8. check whether the set contains a member
sismember key member

6.zset commonly used commands

# 1. add data (score must be numeric; members may not repeat)
zadd key score member [score member ...]
# 2. increment a member's score (if the member exists, its score increases; if not, this behaves like zadd)
zincrby key increment member
# 3. get the score of a member
zscore key member
# 4. get the number of members in the zset
zcard key
# 5. count members within a score range
zcount key min max
# 6. remove members from the zset
zrem key member [member ...]

7.key commonly used commands

# 1. list keys matching a pattern (pattern: *, xxx*, *xxx)
keys pattern
# 2. check whether a key exists (1 - it exists, 0 - it does not)
exists key
# 3. delete keys
del key [key ...]



Common commands of the library

# 1. clear the current database
flushdb
# 2. clear all databases
flushall
# 3. count the keys in the current database
dbsize
# 4. time of the last successful save to disk
lastsave
# 5. monitor the commands received by the Redis server in real time
monitor

Java connect to Redis

Jedis connects to Redis, Lettuce connects to Redis

1.Jedis connect to Redis

1. Create a maven project
2. Import the required dependencies
   <dependencies>
        <dependency>
            <groupId>redis.clients</groupId>
            <artifactId>jedis</artifactId>
            <version>2.9.0</version>
        </dependency>

        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.13</version>
        </dependency>

        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <version>1.18.12</version>
        </dependency>

    </dependencies>
3. Test
public class Demo1 {
    @Test
    public void test(){
        // connect to redis
        Jedis jedis = new Jedis("192.168.100.18", 6379);
        // operate redis
        jedis.set("name","李四");
        String name = jedis.get("name");
        System.out.println(name);
        // release the connection
        jedis.close();
    }
}

2. How Jedis stores objects to redis in the form of byte arrays

1. Prepare an entity class
2. Dependency
<dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context</artifactId>
            <version>5.2.10.RELEASE</version>
        </dependency>

public class Demo2 {
    // store an object in redis as a byte array
    @Test
    public void setByteArray(){
        // 1. connect to redis
        Jedis jedis = new Jedis("192.168.100.18", 6379);
        // 2. prepare the key-value data
        String key="user";
        User user = new User(1, "xiaoming", 21);
        // 3. serialize to byte[]
        byte[] byteUser = SerializationUtils.serialize(user);
        byte[] byteKey=SerializationUtils.serialize(key);
        // 4. store in redis
        jedis.set(byteKey,byteUser);
        // 5. release the connection
        jedis.close();
    }

    // read the object back from redis as a byte array
    @Test
    public void getByteArray(){
        // 1. connect to redis
        Jedis jedis = new Jedis("192.168.100.18", 6379);
        // 2. prepare the key
        String key="user";
        // 3. serialize the key to byte[]
        byte[] byteKey=SerializationUtils.serialize(key);
        // 4. fetch the data from redis
        byte[] value = jedis.get(byteKey);
        // 5. deserialize the data
        User user = (User)SerializationUtils.deserialize(value);
        System.out.println(user);
        // 6. release the connection
        jedis.close();
    }
}



3. How Jedis stores objects to redis in the form of String

<dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
            <version>1.2.47</version>
        </dependency>
public class Demo3 {
    // store an object in redis as a JSON string
    @Test
    public void setString(){
        // 1. connect to redis
        Jedis jedis = new Jedis("192.168.100.18", 6379);
        // 2. prepare the key-value data
        String key="user";
        User user = new User(1, "xiaoming", 21);
        // 3. serialize to a JSON string
        String value = JSON.toJSONString(user);
        // 4. store in redis
        jedis.set(key,value);
        // 5. release the connection
        jedis.close();
    }

    // read the object back from redis as a JSON string
    @Test
    public void getString(){
        // 1. connect to redis
        Jedis jedis = new Jedis("192.168.100.18", 6379);
        // 2. prepare the key
        String key="user";
        // 3. fetch the data from redis
        String stringValue = jedis.get(key);
        // 4. deserialize into an object
        User user = JSON.parseObject(stringValue, User.class);
        System.out.println(user);
        // 5. release the connection
        jedis.close();
    }
}

4. Jedis connection pool operation

 // simple creation
    @Test
    public void pool(){
        // 1. create the connection pool
        JedisPool pool = new JedisPool("192.168.100.18", 6379);
        // 2. get a jedis object from the pool
        Jedis jedis = pool.getResource();
        // 3. operate redis
        jedis.set("name","小明");
        String value = jedis.get("name");
        System.out.println(value);
        // 4. return the connection to the pool
        jedis.close();
    }

    @Test
    public void pool2(){
        // 1. configure the pool
        GenericObjectPoolConfig poolConfig=new GenericObjectPoolConfig();
        poolConfig.setMaxTotal(100);       // maximum number of active connections in the pool
        poolConfig.setMaxIdle(10);         // maximum number of idle connections
        poolConfig.setMinIdle(5);          // minimum number of idle connections
        poolConfig.setMaxWaitMillis(3000); // how long to wait for a jedis object when the pool is empty before timing out
        // 2. create the connection pool
        JedisPool pool = new JedisPool(poolConfig,"192.168.100.18", 6379);
        // 3. get a jedis object from the pool
        Jedis jedis = pool.getResource();
        // 4. operate redis
        jedis.set("name","小明");
        String value = jedis.get("name");
        System.out.println(value);
        // 5. return the connection to the pool
        jedis.close();
    }

5. Redis pipeline operation

When a client operates Redis, every command must travel over the network to the Redis server, and the server must send a response back. Executing many commands one by one is therefore slow. A pipeline puts all the commands together and sends them to the server in a single round trip, which is far more efficient.
  @Test
    public void pipeline(){
        // 1. create the connection pool
        JedisPool jedisPool = new JedisPool("192.168.100.18", 6379);
        // 2. get a jedis object
        Jedis jedis = jedisPool.getResource();

        long l = System.currentTimeMillis();

        // 3. create a pipeline and queue commands into it
        Pipeline pipelined = jedis.pipelined();
        for (int i = 0; i < 100000; i++) {
            pipelined.incr("qq");
        }
        // 4. send and execute the queued commands
        pipelined.syncAndReturnAll();
        // 5. release the connection
        jedis.close();

        System.out.println(System.currentTimeMillis()-l);
    }

Redis other configuration and cluster


1. Redis AUTH

Method 1: modify the Redis configuration file to enable password verification:
#redis.conf
requirepass password
Three ways for clients to authenticate:
  • 1. redis-cli: run auth password before entering other commands
  • 2. Graphical clients: fill in the password in the connection settings
  • 3. Jedis client:
    3.1: jedis.auth(password);
    3.2: pass the password as a JedisPool constructor parameter
Method 2: without modifying the configuration file, connect to Redis and run the command: config set requirepass password.
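A minimal redis-cli session for Method 2 (the password here is an example):

```
config set requirepass mypass   # OK; a password is now required
auth mypass                     # OK; this connection is authenticated
get name                        # commands work again
```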

2.Redis transaction

A Redis transaction first puts all the commands into a queue; once executed, they all run. Commands that succeed take effect, and commands that fail simply fail — there is no rollback. If the transaction is cancelled, every command in the queue is discarded.
1. Open the transaction: multi
2. Enter the commands to execute: they are put into a queue
3. Execute the transaction: exec
4. Cancel the transaction: discard
For Redis transactions to be effective, they should be combined with the watch monitoring mechanism:
  • Before opening the transaction, monitor one or more keys with watch. After the transaction is opened, if another client modifies a monitored key, the transaction is automatically cancelled.
  • When the transaction is executed or cancelled, the watch is cancelled automatically; no extra unwatch is needed.
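The steps above in a redis-cli session (keys and values are examples):

```
watch balance      # optional: optimistic lock on balance
multi              # OK, transaction opened
set k1 v1          # QUEUED
incr counter       # QUEUED
exec               # runs the queue; returns (nil) instead if another
                   # client changed balance while it was watched
```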

Redis.conf in detail

Redis is started with a configuration file.

Units

1. Units in the configuration file are not case sensitive.

Insert picture description here
Includes

2. Works like Java's import or JSP's include: other configuration files can be included.

NETWORK
bind 127.0.0.1      # bind ip
protected-mode yes  # protected mode
port 6379           # port
GENERAL

daemonize yes  # run as a daemon; the default is no — set yes to run in the background

pidfile /var/run/redis.pid # when running in the background, a pid file must be specified

# logging
# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel notice
logfile ""  # log file location and name

databases 16 # 16 databases by default

always-show-logo # whether to always show the logo
SNAPSHOTTING

Persistence: within the specified time, if at least the given number of write operations occurred, data is persisted to .rdb / .aof.
redis is an in-memory database; without persistence, the data is lost when power is cut.

# persist if at least 1 key was modified within 900 seconds
save 900 1
# persist if at least 10 keys were modified within 300 seconds
save 300 10
# persist if at least 10000 keys were modified within 60 seconds
save 60 10000
# you can define your own persistence rules later

# whether redis should keep working if persistence fails
stop-writes-on-bgsave-error yes
# whether to compress the rdb file (costs some cpu)
rdbcompression yes
# verify and repair errors with a checksum when saving the rdb file
rdbchecksum yes
# rdb file name
dbfilename dump.rdb
# directory where rdb files are saved
dir ./
REPLICATION
slaveof <masterip> <masterport>
SECURITY

You can set a password here

LIMITS
 # maximum number of client connections, default 10000
 maxclients 10000

 # maximum memory redis may use
 maxmemory <bytes>

 # eviction policy once the memory limit is reached
 maxmemory-policy noeviction
1. volatile-lru: apply LRU only to keys that have an expire set

2. allkeys-lru: apply LRU to all keys

3. volatile-random: randomly remove keys that have an expire set

4. allkeys-random: randomly remove any key

5. volatile-ttl: remove the keys closest to expiry

6. noeviction: never evict; return an error on writes (the default, as shown above)
APPEND ONLY MODE (aof configuration)
# aof is off by default; rdb persistence is used by default and is enough in most cases
appendonly no
# name of the persistence file
appendfilename "appendonly.aof"

# appendfsync always  # fsync on every write; costs performance
appendfsync everysec  # fsync once per second; up to 1s of data may be lost
# appendfsync no      # never call fsync; the operating system flushes the data itself, which is fastest. Redis never actively calls fsync to sync the AOF log to disk, so everything depends on the OS. Most Linux systems fsync about every 30 seconds, writing the buffered data to disk

# whether to skip fsync during a rewrite
no-appendfsync-on-rewrite no
# rewrite conditions; both must be met
auto-aof-rewrite-percentage 100 # the aof file has grown 100% beyond the last rewritten size
auto-aof-rewrite-min-size 64mb  # the aof file is at least 64mb

Redis persistence

Focus

Redis is an in-memory database. If the in-memory state is not saved to disk, it disappears as soon as the server process exits, so Redis provides persistence.

1. RDB (Redis DataBase)

What is RDB

Within a specified time window, Redis writes a snapshot of the in-memory data set to disk; on recovery, the snapshot file is read straight back into memory.

Redis forks a separate child process to perform the persistence: data is written to a temporary file, and when persistence finishes, the temporary file replaces the previous snapshot file. The main process performs no disk IO, so performance is good. For large-scale data recovery where data integrity is not critical, RDB is more efficient than AOF. RDB's drawback is that data written after the last snapshot may be lost (for example on a crash). RDB is the default, and its configuration rarely needs to be modified.

Sometimes this file is backed up in a production environment

The file saved by rdb is dump.rdb

Trigger mechanism

1. When a save rule is met, an RDB snapshot is triggered automatically and a dump.rdb file is created.
2. Executing flushall triggers the RDB rules and creates a dump.rdb file.
3. Exiting redis triggers the RDB rules and creates a dump.rdb file.
A backup therefore produces a dump.rdb file.

How to recover rdb files

1. Just place the dump.rdb file in the directory where redis is started, redis will check and restore the data to the memory
2. View the location where it needs to be stored



Advantages:
1. Suitable for large-scale data recovery
2. Suitable when data integrity requirements are low

Disadvantages:
1. Snapshots happen at intervals; if redis crashes unexpectedly, modifications since the last snapshot are lost.
2. Forking the child process takes up some extra memory.

2. AOF (Append Only File)

AOF records all our write commands, like a history file; on recovery, every command in the file is executed again.



Every write operation is recorded in log form (read operations are not recorded); to avoid the same log producing different data sets on different systems, only the post-operation result is recorded (via SET). The file is only appended to, never rewritten in place. On startup, Redis reads the file and rebuilds the data — effectively re-executing the recorded commands from front to back, in order, to restore the data.

If the aof file grows too large (over 64mb), a new process is forked to rewrite the file.

The file AOF saves is the appendonly.aof file.

AOF is not enabled by default and must be configured manually: change appendonly to yes, then restart redis for it to take effect.

If the aof file is corrupted (for example misaligned entries), redis cannot start; repair the file with redis-check-aof --fix appendonly.aof.

Trigger mechanism
  1. Manual trigger: run the bgrewriteaof command.
  2. Automatic trigger: according to the rewrite configuration.

Advantages:
1. appendfsync always: every modification is synced, so file integrity is best
2. appendfsync everysec: sync once per second; at most 1 second of data is lost
3. appendfsync no: never sync explicitly, which is the most efficient

Disadvantages:
1. For the same data, the AOF file is much larger than the RDB file, and repairing/loading it is slower than RDB.
2. AOF also runs slower than RDB, which is why redis uses RDB persistence by default.

Redis publish and subscribe

Redis publish and subscribe (pub/sub) is a message communication mode. The sender pub sends a message, and the subscriber sub receives the message.

Redis client can subscribe to any number of channels

Subscribe and publish commands

Subscriber:
subscribe channel [channel ...]

Sender:
publish channel message
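A two-terminal redis-cli session (the channel name is an example):

```
# terminal 1 (subscriber)
subscribe news           # blocks, waiting for messages

# terminal 2 (sender)
publish news "hello"     # -> 1, the number of subscribers that received it

# terminal 1 now prints: "message", "news", "hello"
```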

Redis master-slave replication

1. Concept

Master-slave replication copies the data of one Redis server to other Redis servers. The former is called the master node (master/leader) and the latter the slave nodes (slave/follower). Replication is one-way: data can only flow from master to slave. The master mainly handles writes, the slaves mainly handle reads.

By default every Redis instance is a master node (until master-slave replication is configured). A master can have multiple slaves, but a slave has exactly one master.

2. The role of master-slave replication

1. Data redundancy: master-slave replication provides a hot backup of the data — a form of redundancy in addition to persistence.
2. Failure recovery: when the master fails, a slave can take over and keep providing service; this is also a kind of service redundancy.
3. Load balancing: on top of master-slave replication, reads and writes can be separated — the master serves writes and the slaves serve reads. In real workloads roughly 80% of operations are reads, so spreading reads across several slaves raises redis's concurrency.
4. The cornerstone of high availability: master-slave replication is the foundation on which sentinel and cluster are built, and therefore the basis of Redis high availability.

A single redis server should generally not use more than 20G of memory.


3. Environment configuration

Only the slave nodes (slave libraries) need to be configured; the master node (master library) needs no configuration.

127.0.0.1:6379> info replication  # view replication info of the current instance
# Replication
role:master   # role: master
connected_slaves:0     # no slaves attached
master_replid:b8532be12b415b7e437c2006b828e02e3156bd5b
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

Copy the configuration file 3 times, then modify in each copy:
1. the port
2. the pid file name
3. the log file name
4. the dump.rdb file name

4. One master and two slaves

Each node is a master by default; we generally only need to configure the slaves.

Configuring with a command is temporary (it is lost on the next restart):

slaveof host port  # declare this node a slave of the given master

A real master-slave setup should be configured in the configuration file, so that it is permanent:

slaveof <masterip> <masterport>
Details

The master can write; slaves cannot write, only read. All data written to the master is automatically saved by the slaves.

Test: if the master disconnects, the slaves remain connected to it but there are no write operations; when the master comes back, the slaves can again read what it writes. If a slave configured on the command line disconnects and restarts, it reverts to being a master and loses access to the original master's data; as soon as it becomes a slave again, the original master's data is immediately available to it.

Replication principle

After a slave starts and successfully connects to the master, it sends a sync command. On receiving it, the master starts a background save process and meanwhile buffers every new write command. When the background save finishes, the master transfers the entire data file to the slave to complete one full synchronization.

Full replication: the slave receives the database file, saves it, and loads it into memory.
Incremental replication: the master then forwards each new write command to the slave in turn. Note that whenever a slave reconnects to the master, a full synchronization (full replication) is performed again.

Chained replication

A slave can itself act as the master of the next slave (the previous node is its M, the next its S), and master-slave replication still works along this chain.

If the master goes down, running slaveof no one on a slave promotes it to master, and the other nodes can then manually attach to the new master.
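Manual failover in redis-cli might look like this (addresses and ports are examples):

```
# on the slave chosen to take over
slaveof no one           # OK; info replication now shows role:master

# on the remaining nodes
slaveof 127.0.0.1 6380   # follow the newly promoted master
```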

5. Sentinel mode

(a mode that elects the new master automatically)

Sentinel mode is a special mode. Redis provides the sentinel as a separate process that runs independently. Its principle: the sentinel monitors multiple running Redis instances by sending them commands and waiting for the servers' responses.



The sentinel has two functions:

  • By sending commands, it makes the redis servers report their running status — both the master and the slaves.
  • When the sentinel detects that the master is down, it automatically promotes a slave to master and then, through the publish/subscribe model, notifies the other slaves to update their configuration and follow the new master.

However, a single sentinel process monitoring the redis service can itself fail, so multiple sentinels are usually used. The sentinels also monitor each other, forming a multi-sentinel mode.



Suppose the master goes down and sentinel 1 detects it. The system does not fail over (re-elect) immediately: sentinel 1 merely considers the master unavailable subjectively, which is called subjective offline. When enough of the other sentinels also detect the master as unavailable and their number reaches the configured threshold, the sentinels hold a vote; the sentinel chosen by the vote initiates the failover. After the switch succeeds, each sentinel is told through the publish/subscribe model to switch from the old server to the new master. This state is called objective offline.

test

Our current state is one master and two slaves.
1. Configure the sentinel configuration file sentinel.conf:

#sentinel monitor <monitored name> <host> <port> <quorum>
sentinel monitor myredis 127.0.0.1 6379 1

The trailing 1 is the quorum: when the master is down, the sentinels vote on which slave takes over as master; the candidate with the most votes becomes the new master.
2. Start the sentinel:
redis-sentinel sentinel.conf
If the master node breaks, a new master is then elected from the slaves (a voting algorithm decides which).


If the master machine comes back at this time, it can only be merged into the new master machine and used as a slave machine. This is the rule of sentinel mode

Advantages:
1. A sentinel cluster is based on master-slave replication and has all the advantages of the master-slave setup.
2. Master and slave can be switched and failures transferred, so system availability is better.
3. Sentinel mode upgrades master-slave mode from manual to automatic failover, and is more robust.

Disadvantages:
1. Redis is hard to scale online; once the cluster capacity reaches its upper limit, online expansion is very troublesome.
2. Sentinel mode configuration is actually cumbersome, with many options.

All configurations of sentinel mode
# Example sentinel.conf

# port the sentinel instance runs on, default 26379
port 26379

# sentinel working directory
dir /tmp

# ip and port of the redis master node this sentinel monitors
# master-name: a name of your choice for the master; only the characters A-z, 0-9 and ".-_" are allowed
# quorum: when this many sentinels consider the master unreachable, the master is objectively considered down
# sentinel monitor <master-name> <ip> <redis-port> <quorum>
  sentinel monitor mymaster 127.0.0.1 6379 2

# when requirepass foobared is enabled on the Redis instances, every connecting client must supply the password
# set the password the sentinel uses to connect to master and slaves; master and slaves must share the same password
# sentinel auth-pass <master-name> <password>
sentinel auth-pass mymaster MySUPER--secret-0123passw0rd


# after how many milliseconds without a reply the sentinel subjectively considers the master down; default 30 seconds
# sentinel down-after-milliseconds <master-name> <milliseconds>
sentinel down-after-milliseconds mymaster 30000

# how many slaves may synchronize with the new master at the same time during a failover.
# The smaller this number, the longer a failover takes;
# the larger it is, the more slaves are temporarily unavailable because of replication.
# Setting it to 1 guarantees that only one slave at a time is unable to serve requests.
# sentinel parallel-syncs <master-name> <numslaves>
sentinel parallel-syncs mymaster 1



# failover timeout; it applies to:
# 1. the interval between two failovers of the same master by the same sentinel
# 2. the time counted from when a slave starts syncing from a wrong master until it is corrected to sync from the right one
# 3. the time needed to cancel an in-progress failover
# 4. the maximum time to reconfigure all slaves to point at the new master during a failover; past this timeout,
#    the slaves are still eventually configured correctly, just no longer following the parallel-syncs rule
# default: three minutes
# sentinel failover-timeout <master-name> <milliseconds>
sentinel failover-timeout mymaster 180000

# SCRIPTS EXECUTION

# configure scripts to run when certain events occur, e.g. to email administrators when the system misbehaves.
# rules for script results:
# - if a script returns 1, it is retried later; currently up to 10 times by default
# - if a script returns 2 or a higher value, it is not retried
# - if a script is terminated by a system interrupt signal while running, it behaves as if it had returned 1
# - a script may run for at most 60s; past that it is terminated with SIGKILL and then re-executed

# notification script: called whenever the sentinel generates a warning-level event
# (such as subjective or objective failure of a redis instance). The script should notify the
# system administrator via email, SMS, etc. It is called with two arguments: the event type and
# the event description. If a script path is configured here, the script must exist at that path
# and be executable, otherwise the sentinel fails to start.
# sentinel notification-script <master-name> <script-path>
  sentinel notification-script mymaster /var/redis/notify.sh

# client reconfiguration script
# called when the master changes because of a failover, to inform clients that the master address has changed.
# the following arguments are passed to the script:
# <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>
# currently <state> is always "failover",
# <role> is either "leader" or "observer".
# from-ip/from-port and to-ip/to-port address the old master and the new master (the former slave).
# this script should be generic and callable multiple times, not special-cased.
# sentinel client-reconfig-script <master-name> <script-path>
 sentinel client-reconfig-script mymaster /var/redis/reconfig.sh

Redis cache penetration and avalanche (high frequency in interviews, commonly used in work)

Using a Redis cache greatly improves an application's performance and efficiency, especially for data queries. But it also brings problems, the most critical being data consistency; strictly speaking the problem is unsolvable, and if consistency requirements are strict, caching cannot be used.
Some other classic problems are cache penetration, cache avalanche and cache breakdown, for which the industry has fairly popular solutions.


1. Cache penetration

Concept

Cache penetration: a user queries a piece of data that is not in the redis cache (a cache miss), so the query goes on to the persistence-layer database — and finds nothing there either, so the query fails. When many users request such non-existent data (for example during a flash sale), the cache is never hit and every request lands on the database, putting it under heavy pressure. That is cache penetration.

Bloom filter

A Bloom filter is a data structure that stores all possible query parameters in hashed form. Requests are checked at the control layer first and discarded if they cannot match, which avoids the query pressure on the underlying storage system.
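A minimal sketch of the idea in Java — not a production Bloom filter (it uses only two toy hash functions, and the class name and sizes are made up for illustration):

```java
// Toy Bloom filter: bits are set for every added key; a lookup that hits
// an unset bit is definitely absent, so it never reaches the database.
public class BloomSketch {
    private static final int SIZE = 1 << 14;        // 16384 bits
    private final boolean[] bits = new boolean[SIZE];

    private int h1(String key) { return Math.floorMod(key.hashCode(), SIZE); }
    private int h2(String key) { return Math.floorMod(key.hashCode() * 31 + 7, SIZE); }

    public void add(String key) {                   // register a key that really exists
        bits[h1(key)] = true;
        bits[h2(key)] = true;
    }

    public boolean mightContain(String key) {       // false => key certainly not in the DB
        return bits[h1(key)] && bits[h2(key)];
    }

    public static void main(String[] args) {
        BloomSketch filter = new BloomSketch();
        filter.add("user:1");
        System.out.println(filter.mightContain("user:1"));      // true
        System.out.println(filter.mightContain("user:99999"));  // almost certainly false
    }
}
```

A real deployment would use several hash functions over a Redis bitmap (setbit/getbit) or a ready-made library such as Guava's BloomFilter.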

Cache empty objects

When the storage layer misses, cache an empty object anyway and set an expiration time on it; later accesses for that key are then answered from the cache, which protects the back-end data source.


However, there are two problems with this method:
1. Caching null values means more space may be needed to store keys.
2. Even with an expiration time set, there is still a window in which the cache layer and the storage layer are inconsistent, which affects businesses that must maintain consistency.

2. Cache breakdown

Overview

Cache breakdown happens to a key that is extremely hot and constantly carrying large concurrency, with requests concentrated on that one point. The moment the key expires, the ongoing flood of requests breaks through the cache and hits the database directly, like drilling a hole in a barrier.

When such a key expires under heavy concurrent access, many requests query the database for the latest value at the same time and all write it back to the cache, which can overload the database in an instant. This kind of data is generally hot data.

solution
  • Set hotspot data to never expire.
    From the cache's perspective no expiration time is set, so an expiring hotspot can never cause the problem.
  • Add a mutex lock.
    Distributed lock: use a distributed lock so that, for each key, only one thread at a time may query the back-end service, while the other threads wait because they cannot acquire the lock. This shifts the pressure of high concurrency onto the distributed lock, which is a great test for it.
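A single-process sketch of the mutex idea (a synchronized block stands in for a distributed lock such as a SETNX-based lock or Redisson; names are illustrative):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Only one thread may rebuild an expired hot key; everyone else waits,
// then finds the value on the re-check instead of querying the database.
public class HotKeyLoader {
    static final Map<String, String> cache = new ConcurrentHashMap<>();
    static final AtomicInteger dbLoads = new AtomicInteger();
    static final Object mutex = new Object();   // stand-in for a distributed lock

    static String get(String key) {
        String v = cache.get(key);
        if (v != null) return v;                // fast path: cache hit
        synchronized (mutex) {                  // acquire the "lock" for rebuilding
            v = cache.get(key);                 // re-check: another thread may have filled it
            if (v != null) return v;
            dbLoads.incrementAndGet();          // simulated expensive database query
            v = "db-value-of:" + key;
            cache.put(key, v);                  // write the result back to the cache
            return v;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable r = () -> get("hot:item");
        Thread t1 = new Thread(r), t2 = new Thread(r), t3 = new Thread(r);
        t1.start(); t2.start(); t3.start();
        t1.join(); t2.join(); t3.join();
        System.out.println(dbLoads.get());      // 1: the database was queried only once
    }
}
```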

3. Cache avalanche

concept

Cache avalanche: within a certain period, a large portion of the cache expires at once, or Redis itself goes down.

One cause of an avalanche: for example, goods to be snapped up at 12 o'clock on Double Twelve are placed into the cache together with a 1-hour TTL. At 1 o'clock all of those cache entries expire simultaneously, and the ongoing panic buying then hits the database storage layer directly, which may crash it.



In fact, simultaneous expiry is not the most fatal case: the cache is simply rebuilt at some concentrated time, and the database takes periodic pressure it can usually withstand. What is really fatal is a cache server node going down or losing network connectivity — then all traffic lands on the database at once, which is unpredictable and may even overwhelm it.

solution

Redis high availability

The idea: since redis can go down, run several more redis servers — in practice, build a cluster so the cache service remains available.

Rate limiting and degradation (covered in the Spring Cloud notes)

The idea: after a cache entry expires, control the number of threads allowed to read the database and write the cache, by locking or queueing. For example, allow only one thread per key to query the data and write the cache, while the other threads wait.

Data warm-up

The meaning of data heating is that before the official deployment, I first visit the possible data first, so that part of the data that may be accessed in a large amount will be loaded into the memory, and manually trigger the loading and caching of different keys before the large concurrent access is about to occur. , Set different expiration time, let the time point of cache invalidation balance point