Redis learning summary (Docker build environment)

Command reference in Chinese (some parameters are not explained in detail): http://www.redis.cn/commands.html

Command reference in English: https://redis.io/commands


1. Installation

This time Redis is installed and run with Docker.

[email protected]:~# docker pull redis  # pull the redis image
[email protected]:~# docker run -p 6379:6379 -d redis  # start a redis container in the background, mapping the container port to the host
[email protected]:~# docker ps  # list all running containers
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS                    NAMES
b4a21afd7dc8   redis          "docker-entrypoint.s…"   32 seconds ago   Up 31 seconds   0.0.0.0:6379->6379/tcp   gracious_poincare
[email protected]:~# docker exec -it b4a21afd7dc8 /bin/bash  # enter the container

After entering the container, perform the following operations

The official Docker image does not ship with a redis.conf configuration file, so we need to fetch one ourselves.
[email protected]:/data# apt update  # update the package sources
[email protected]:/data# apt install -y vim  # install the vim editor
[email protected]:/data# apt install -y wget  # install the wget tool
[email protected]:/data# cd /usr/local/bin  # go to the redis installation directory
[email protected]:/usr/local/bin# ls  # list the files
docker-entrypoint.sh  gosu  redis-benchmark  redis-check-aof  redis-check-rdb  redis-cli  redis-sentinel  redis-server
[email protected]:/usr/local/bin# redis-server --version  # check the version, then download the matching source package from the official site
Redis server v=6.2.3 sha=00000000:0 malloc=jemalloc-5.1.0 bits=64 build=dc20d908b7b619b4
[email protected]:/usr/local/bin# wget https://download.redis.io/releases/redis-6.2.3.tar.gz
[email protected]:/usr/local/bin# tar -zxvf redis-6.2.3.tar.gz  # extract the archive
[email protected]:/usr/local/bin# cp ./redis-6.2.3/redis.conf /usr/local/bin/  # copy the bundled configuration file
[email protected]:/usr/local/bin# rm -rf redis-6.2.3  # remove the extracted directory
[email protected]:/usr/local/bin# cp redis.conf redis.conf.bak  # keep a backup of the configuration file
[email protected]:/usr/local/bin# ls
docker-entrypoint.sh  gosu  redis-benchmark  redis-check-aof  redis-check-rdb  redis-cli  redis-sentinel  redis-server  redis.conf  redis.conf.bak

To make the later steps easier, install a few basic operating system utilities:

[email protected]:/data# apt install -y iproute2  # install iproute2 for the ip command
[email protected]:/data# apt install -y procps  # install procps for the ps command

2. Start and stop

1. Start

Redis runs in the foreground (interactively) by default, but we want it to run in the background, so the configuration file needs to be modified.

[email protected]:/usr/local/bin# vim redis.conf

Around line 257, change daemonize no to daemonize yes; Redis will then run in the background when started.

Start the service with the redis-server tool, passing it the redis.conf configuration file:

[email protected]:/usr/local/bin# redis-server redis.conf

2. Check if the startup is successful

The first way: use the redis-cli client tool to check whether the redis service started successfully; the -p option specifies the port to connect to.

[email protected]:/usr/local/bin# redis-cli -p 6379
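If the connection succeeds, the client prompt appears and commands work; a quick check might look like this:

127.0.0.1:6379> ping
PONG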

The second way: since port mapping was used when the container was started, the redis-server process information can also be seen from the host. Ideally, though, we would check it inside the container, so we go back into the running redis container from the host.



Unfortunately, there is no ps process viewing tool in the docker environment, so we view under the host.

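On the host, a check might look roughly like this (the process details below are illustrative):

ps aux | grep redis
999   5123  0.1  0.2  ...  Ssl  10:00  0:01 redis-server *:6379
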
To check inside the container as well, install the ps tool (procps) in the redis container:
[email protected]:/data# apt install -y procps

3. Basic use

set key value — sets a string key-value pair

get key — gets the value of a key

keys pattern — pattern is a glob-style pattern, used to list matching keys

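For example, a short session might look like this (the key and value are just examples):

127.0.0.1:6379> set name zhong
OK
127.0.0.1:6379> get name
"zhong"
127.0.0.1:6379> keys *
1) "name"
127.0.0.1:6379> keys na*
1) "name"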

4. Login password

No password is set by default; one can be added:

127.0.0.1:6379> config get requirepass  # get the password; it is empty
1) "requirepass"
2) ""
127.0.0.1:6379> config set requirepass "123456"  # set the password to 123456
OK

While still inside the current redis-cli session we can keep executing commands, but after exiting and reconnecting we no longer have permission to run them.

127.0.0.1:6379> ping
(error) NOAUTH Authentication required.
127.0.0.1:6379> set name zhong
(error) NOAUTH Authentication required.

Use the auth command to log in:

127.0.0.1:6379> auth 123456
OK
127.0.0.1:6379> ping
PONG

5. Configuration commands

After connecting with redis-cli, configuration values can be read with config get <key> and changed with config set <key> <value>.

This was already used above when setting the password.

6. Turn off the redis service

shutdown [NOSAVE|SAVE] — when shutting down, you can choose whether to save the current data first

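A shutdown from redis-cli then looks roughly like this:

127.0.0.1:6379> shutdown
not connected>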

When Redis is compiled and installed on an ordinary host, shutting down the service simply leaves the client showing "not connected" after the shutdown command is executed.

Note that when Redis runs in Docker, shutting down the service also stops the container and drops you out of it.

Start the redis container again

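Assuming the container ID shown earlier, getting back in might look like this (run on the host):

docker start b4a21afd7dc8   # start the stopped container
docker exec -it b4a21afd7dc8 /bin/bash   # enter it again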

Then start the Redis service the same way as before:

[email protected]:/data# redis-server /usr/local/bin/redis.conf

3. The redis-benchmark tool

redis-benchmark is a performance-testing tool used to see how much load the server can handle.

The optional parameters are as follows:

  • -h: server hostname (default 127.0.0.1)
  • -p: server port (default 6379)
  • -s: server socket (overrides -h and -p)
  • -c: number of parallel connections (default 50)
  • -n: total number of requests (default 10000)
  • -d: data size of the SET/GET value in bytes (default 3)
  • -k: 1 = keep alive, 0 = reconnect (default 1)
  • -r: use random keys for SET/GET/INCR and random values for SADD
  • -P: pipeline <numreq> requests (default 1)
  • -q: quiet mode, only show the queries/sec values
  • --csv: output in CSV format
  • -l (lowercase L): loop, run the tests forever
  • -t: only run the comma-separated list of test commands
  • -I (capital i): idle mode, only open N idle connections and wait
[email protected]:/usr/local/bin# redis-benchmark -h localhost -p 6379 -c 100 -n 100000

Once the benchmark starts, it runs tests for almost all commands. The output of each test looks roughly like the following, using the SET test as an example.

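The per-test summary printed by redis-benchmark looks roughly like this; the exact layout differs between Redis versions and the numbers below are purely illustrative:

====== SET ======
  100000 requests completed in 1.20 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1

  99.90% <= 1 milliseconds
  100.00% <= 2 milliseconds
  83333.33 requests per second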

4. Basic concepts of Redis

There are 16 databases by default; this is set around line 327 of redis.conf (databases 16) and can be changed. Database 0 is used by default.

The default port is 6379, set around line 98 (port 6379); it can also be changed.

Command execution is single-threaded. All data lives in memory and commands involve no disk I/O; by comparison, switching between multiple threads would cost extra time.

1. Switch the database

select index

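For example (the prompt shows the index of the selected database):

127.0.0.1:6379> select 1
OK
127.0.0.1:6379[1]> select 0
OK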

2. View database capacity

dbsize

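For example (the number depends on how many keys are stored):

127.0.0.1:6379> dbsize
(integer) 5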

3. Delete all keys of the currently selected database

flushdb [ASYNC|SYNC]

  • ASYNC: flush the database asynchronously
  • SYNC: flush the database synchronously
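For example:

127.0.0.1:6379> keys *
1) "name"
127.0.0.1:6379> flushdb
OK
127.0.0.1:6379> keys *
(empty array)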

flushall [ASYNC|SYNC] — delete the keys of all databases

5. The five basic data types

A) Auxiliary commands

1. Determine whether a key exists

exists key [key …]

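For example, assuming a key name exists and nokey does not:

127.0.0.1:6379> exists name
(integer) 1
127.0.0.1:6379> exists nokey
(integer) 0
127.0.0.1:6379> exists name nokey name
(integer) 2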

Several keys can be passed at once; the return value is the number of those keys that exist.

2. Move keys to other databases

move key dbIndex

Moves a key from the current database to another one. If the key already exists in the target database, or does not exist in the source database, nothing is done. The return value is 1 on success and 0 on failure.

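For example, moving name from database 0 to database 1 might look like this:

127.0.0.1:6379> move name 1
(integer) 1
127.0.0.1:6379> exists name
(integer) 0
127.0.0.1:6379> select 1
OK
127.0.0.1:6379[1]> exists name
(integer) 1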

3. Set the expiration time of the key

expire key seconds

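For example, giving name a lifetime of 10 seconds:

127.0.0.1:6379> expire name 10
(integer) 1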

4. View the remaining survival time of the key

ttl key

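For example (the remaining time is in seconds; -1 means no expiration is set, -2 means the key no longer exists):

127.0.0.1:6379> ttl name
(integer) 7
127.0.0.1:6379> ttl name
(integer) -2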

5. View the type of key

type key

127.0.0.1:6379> set name zhong
OK
127.0.0.1:6379> type name
string


B) string

1. String splicing

append key value

Append a string to the end and return the length of the new string. If the key does not exist, create the key, the value is an empty string, and then perform the append operation

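For example, assuming say does not exist yet:

127.0.0.1:6379> append say hello
(integer) 5
127.0.0.1:6379> append say world
(integer) 10
127.0.0.1:6379> get say
"helloworld"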

2. Get the length of the string

strlen key

127.0.0.1:6379> strlen say
(integer) 10
127.0.0.1:6379> get say
"helloworld"

3. Increment a numeric string by 1

incr key

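For example, with a counter key named views (the name is just an example):

127.0.0.1:6379> set views 0
OK
127.0.0.1:6379> incr views
(integer) 1
127.0.0.1:6379> incr views
(integer) 2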

4. Decrement a numeric string by 1

decr key

127.0.0.1:6379> set id 0
OK
127.0.0.1:6379> decr id
(integer) -1
127.0.0.1:6379> decr id
(integer) -2
127.0.0.1:6379> get id
"-2"

5. Increment or decrement a numeric string by n

incrby key increment

decrby key increment

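For example, a decrby call might look like this:

127.0.0.1:6379> set test 20
OK
127.0.0.1:6379> decrby test 5
(integer) 15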

Incrby can also do the function of decrby

127.0.0.1:6379> set test 2
OK
127.0.0.1:6379> incrby test 10
(integer) 12
127.0.0.1:6379> incrby test -8
(integer) 4

6. Extract a substring

getrange key start end

127.0.0.1:6379> get say
"helloworld"
127.0.0.1:6379> getrange say 3 5
"low"
127.0.0.1:6379> getrange say 0 -1
"helloworld"

7. Replace the string from a certain position

setrange key offset value

127.0.0.1:6379> set say helloworld
OK
127.0.0.1:6379> setrange say 6 niubi
(integer) 11
127.0.0.1:6379> get say
"hellowniubi"

8. Set the value of a key together with an expiration time

setex key seconds value

If the key does not exist, create a new key, then set the value, and then set the expiration time

127.0.0.1:6379> set name zhong
OK
127.0.0.1:6379> setex name 10 "xiao"
OK
127.0.0.1:6379> get name
"xiao"
127.0.0.1:6379> ttl name
(integer) 4
127.0.0.1:6379> get name
(nil)

9. If the key does not exist, create the key-value pair (commonly used in distributed locks)

setnx key value

Do nothing if it exists

######## create when the key does not exist ########
127.0.0.1:6379> get name
(nil)
127.0.0.1:6379> setnx name "xiao"
(integer) 1
127.0.0.1:6379> get name
"xiao"
######## do nothing when the key exists ########
127.0.0.1:6379> set name zhong
OK
127.0.0.1:6379> setnx name xiao
(integer) 0
127.0.0.1:6379> get name
"zhong"

10. Create multiple key-value pairs at the same time and get multiple key-value pairs

mset key value [key value …]

If the same key is set, take the last one

127.0.0.1:6379> keys *
(empty array)
127.0.0.1:6379> mset name zhong age 10
OK
127.0.0.1:6379> keys *
1) "age"
2) "name"

mget key [key …]

127.0.0.1:6379> mget name age
1) "zhong"
2) "10"

11. Create multiple key-value pairs if the key does not exist

msetnx key value [key value …]

Atomic operation, one setting is unsuccessful, all settings are unsuccessful

127.0.0.1:6379> set name zhong
OK
127.0.0.1:6379> msetnx name xiao age 10
(integer) 0
127.0.0.1:6379> keys *
1) "name"
127.0.0.1:6379> get name
"zhong"

12. Get the value first, then set the value

getset key value

Get the value first, if it is empty, output nil, if it is not empty, output the original value, and finally set the value

127.0.0.1:6379> getset say hello
(nil)
127.0.0.1:6379> getset say world
"hello"
127.0.0.1:6379> get say
"world"

Thinking about designing key-value pairs

Redis has no native JSON type; if only the string type is available, how should such data be designed?

  1. Serialize the object into a string on the server side and deserialize it after the client reads it back:
set user:1 {name:zhong,age:18}
  2. Flatten the JSON keys into Redis keys; for this kind of data, however, a hash is usually more convenient:
mset user:1:name zhong user:1:age 18

C) list

Values are stored in a list, which can also be used as a stack or a queue.

1. Put the value in the list, from the left

lpush key element [element …]

127.0.0.1:6379> lpush age 10 20 30 40
(integer) 4
127.0.0.1:6379> lpush age 50
(integer) 5
127.0.0.1:6379> keys *
1) "age"

2. Get the value of the list

lrange key start stop

127.0.0.1:6379> lrange age 0 -1  # get all values
1) "50"
2) "40"
3) "30"
4) "20"
5) "10"
127.0.0.1:6379> lrange age 0 2
1) "50"
2) "40"
3) "30"

3. Put the value in the list, from the right

rpush key element [element …]

127.0.0.1:6379> rpush age 10 20 30 40
(integer) 4
127.0.0.1:6379> rpush age 50
(integer) 5
127.0.0.1:6379> lrange age 0 -1
1) "10"
2) "20"
3) "30"
4) "40"
5) "50"

4. Pop n values from the left

lpop key [count]

127.0.0.1:6379> lrange age 0 -11) "10"2) "20"3) "30"4) "niu"5) "bi"# 从左边移除2个值127.0.0.1:6379> lpop age 21) "10"2) "20"127.0.0.1:6379> lrange age 0 -11) "30"2) "niu"3) "bi"

5. Pop n values from the right

rpop key [count]

127.0.0.1:6379> lrange age 0 -11) "30"2) "niu"3) "bi"# 从右边移除1个值127.0.0.1:6379> rpop age 11) "bi"127.0.0.1:6379> lrange age 0 -11) "30"2) "niu"

6. Get the value corresponding to an index

lindex key index

127.0.0.1:6379> lrange age 0 -11) "10"2) "20"3) "30"4) "40"127.0.0.1:6379> lindex age 1"20"127.0.0.1:6379> lindex age -2"30"127.0.0.1:6379> lrange age 0 -1  # 内容不变,只是读取1) "10"2) "20"3) "30"4) "40"

7. View the length of the list

llen key

127.0.0.1:6379> llen age
(integer) 4
127.0.0.1:6379> lrange age 0 -1
1) "10"
2) "20"
3) "30"
4) "40"

8. Remove the value in the list

lrem key count element

The list can contain several identical values; count specifies how many occurrences of the value to remove, counting from the left.

127.0.0.1:6379> lrange age 0 -11) "40"2) "50"3) "10"4) "20"5) "30"6) "40"127.0.0.1:6379> lrem age 2 40(integer) 2127.0.0.1:6379> lrange age 0 -11) "50"2) "10"3) "20"4) "30"

9. Trim the list to a range; values outside the range are removed

ltrim key start stop

127.0.0.1:6379> lrange age 0 -1
1) "10"
2) "20"
3) "30"
4) "40"
5) "50"
127.0.0.1:6379> ltrim age 1 3  # only the values at indexes 1, 2 and 3 are kept
OK
127.0.0.1:6379> lrange age 0 -1
1) "20"
2) "30"
3) "40"

10. Pop the rightmost value of one list and push it onto the left of another list

rpoplpush source destination

If the destination list does not exist it is created; if the source list does not exist, nil is returned.

127.0.0.1:6379> lrange age 0 -1
1) "10"
2) "20"
3) "30"
4) "40"
5) "50"
127.0.0.1:6379> rpoplpush age id
"50"
127.0.0.1:6379> lrange age 0 -1
1) "10"
2) "20"
3) "30"
4) "40"
127.0.0.1:6379> lrange id 0 -1
1) "50"

11. Modify the value corresponding to an index

lset key index element

Report an error if the key does not exist

127.0.0.1:6379> lrange age 0 -11) "10"2) "20"3) "30"4) "40"127.0.0.1:6379> lset age 1 15OK127.0.0.1:6379> lrange age 0 -11) "10"2) "15"3) "30"4) "40"

12. Insert a value before or after an element

linsert key BEFORE|AFTER pivot element

BEFORE inserts in front of the pivot and AFTER inserts behind it; pivot is the existing element to search for.

127.0.0.1:6379> lrange age 0 -11) "10"2) "15"3) "30"4) "40"127.0.0.1:6379> linsert age after "30" 35  # 在元素30的后面插入值35(integer) 5127.0.0.1:6379> lrange age 0 -11) "10"2) "15"3) "30"4) "35"5) "40"

List thinking

Under the hood a list is a linked list. If all of its elements are removed and it becomes empty, the key is deleted and the list no longer exists.

Operating on the two ends of the linked list is the most efficient; manipulating values in the middle requires traversal and is less efficient.

D) set

Unordered non-repeating collection

1. Add members

sadd key member [member …]

127.0.0.1:6379> sadd age 10 20 30 10
(integer) 3
127.0.0.1:6379> sadd age 40
(integer) 1

2. Output all members

smembers key

127.0.0.1:6379> smembers age1) "10"2) "20"3) "30"4) "40"

3. Determine whether a member exists

sismember key member

127.0.0.1:6379> sismember age 20  # exists
(integer) 1
127.0.0.1:6379> sismember age 60  # does not exist
(integer) 0

4. Get the number of members of the set

scard key

127.0.0.1:6379> smembers age1) "10"2) "20"3) "30"4) "40"127.0.0.1:6379> scard age(integer) 4

5. Remove some members

srem key member [member …]

127.0.0.1:6379> smembers age1) "10"2) "20"3) "30"4) "40"127.0.0.1:6379> srem age 20 40(integer) 2127.0.0.1:6379> smembers age1) "10"2) "30"

6. Get n members randomly

srandmember key [count]

127.0.0.1:6379> srandmember age"40"127.0.0.1:6379> srandmember age 21) "30"2) "10"127.0.0.1:6379> srandmember age 21) "20"2) "40"127.0.0.1:6379> srandmember age 1"10"

7. Randomly remove n elements

spop key [count]

127.0.0.1:6379> smembers age1) "10"2) "20"3) "30"4) "40"5) "50"6) "60"127.0.0.1:6379> spop age 21) "50"2) "20"127.0.0.1:6379> smembers age1) "10"2) "30"3) "40"4) "60"

8. Move a member of a collection to another collection

smove source destination member

If the destination set does not exist it is created; if the source set does not exist, nothing happens.

127.0.0.1:6379> smembers age1) "10"2) "30"3) "40"4) "60"127.0.0.1:6379> smembers id(empty array)127.0.0.1:6379> smove age id 10(integer) 1127.0.0.1:6379> smembers age1) "30"2) "40"3) "60"127.0.0.1:6379> smembers id1) "10"

9. Difference of n sets

sdiff key [key …]

The first set is the reference: members of the first set that do not appear in the other sets are returned.

127.0.0.1:6379> sadd age 10 20 30 40 50 60
(integer) 6
127.0.0.1:6379> sadd id 15 20 35 45 51
(integer) 5
127.0.0.1:6379> sdiff age id
1) "10"
2) "30"
3) "40"
4) "50"
5) "60"

10. Intersection of n sets

sinter key [key …]

Use the first set as a reference for comparison

127.0.0.1:6379> sinter age id1) "20"

11. Union of n sets

sunion key [key …]

127.0.0.1:6379> sunion age id
 1) "10"
 2) "15"
 3) "20"
 4) "30"
 5) "35"
 6) "40"
 7) "45"
 8) "50"
 9) "51"
10) "60"

12. Sort

sort key [BY pattern] [LIMIT offset count] [GET pattern [GET pattern …]] [ASC|DESC] [ALPHA] [STORE destination]

It can sort sets, lists and sorted sets.

127.0.0.1:6379> smembers age  # print all members
1) "10"
2) "18"
3) "30"
4) "42"
5) "50"
127.0.0.1:6379> sort age  # ascending order by default
1) "10"
2) "18"
3) "30"
4) "42"
5) "50"
127.0.0.1:6379> sort age desc  # add the desc option to sort in descending order
1) "50"
2) "42"
3) "30"
4) "18"
5) "10"

E) hash

A hash is a map: the key holds a set of field-value pairs, which adds one more level of keys. For simple values it is not much different from a string.

1. Add hash

hset key field value [field value …]

The return value is the number of fields that were newly created; setting a field that already exists returns 0.

127.0.0.1:6379> hset user age 18 name zhong hight 2.0
(integer) 3
127.0.0.1:6379> hset user weight 100
(integer) 1
127.0.0.1:6379> hset user weight 110
(integer) 0

2. Get the value in the hash

hget key field

127.0.0.1:6379> hget user age
"18"
127.0.0.1:6379> hget user name
"zhong"
127.0.0.1:6379> hget user hight
"2.0"
127.0.0.1:6379> hget user weight
"100"

3. Set the value of multiple fields of hash

hmset key field value [field value …]

It looks much the same as hset; hmset always returns OK and overwrites the values of existing fields.

127.0.0.1:6379> hget user weight"100"127.0.0.1:6379> hmset user weight 110OK127.0.0.1:6379> hget user weight"110"

4. Get multiple hash values

hmget key field [field …]

hget can only read one field value at a time; hmget can read several at once.

127.0.0.1:6379> hmget user age name weight
1) "18"
2) "zhong"
3) "110"

5. Get all hash values

hgetall key

The map is displayed in the form of key-value pairs

127.0.0.1:6379> hgetall user1) "age"2) "18"3) "name"4) "zhong"5) "hight"6) "2.0"7) "weight"8) "110"

6. Delete multiple fields in the hash

hdel key field [field …]

127.0.0.1:6379> hdel user hight weight
(integer) 2
127.0.0.1:6379> hgetall user
1) "age"
2) "18"
3) "name"
4) "zhong"

7. Get the number of fields in the hash

hlen key

127.0.0.1:6379> hlen user  # only 2 fields
(integer) 2
127.0.0.1:6379> hset user height 2.0  # add one more field
(integer) 1
127.0.0.1:6379> hlen user  # now there are 3
(integer) 3

8. Determine whether a field in the hash exists

hexists key field

127.0.0.1:6379> hexists user age  # exists
(integer) 1
127.0.0.1:6379> hexists user ages  # does not exist
(integer) 0

9. Get all the fields in the hash

hkeys key

127.0.0.1:6379> hkeys user1) "age"2) "name"3) "height"

10. Increment a hash field by an integer

hincrby key field increment

127.0.0.1:6379> hget user age"18"127.0.0.1:6379> hincrby user age 5(integer) 23127.0.0.1:6379> hincrby user age -10(integer) 13

11. Increment a hash field by a float

hincrbyfloat key field increment

127.0.0.1:6379> hget user height"2.0"127.0.0.1:6379> hincrbyfloat user height 5.11"7.11"

12. Create a field only if it does not exist

hsetnx key field value

127.0.0.1:6379> hget user age"13"127.0.0.1:6379> hsetnx user age 22  # 已存在,创建失败(integer) 0127.0.0.1:6379> hsetnx user agess 22  # 不存在,创建成功(integer) 1127.0.0.1:6379> hget user age"13"127.0.0.1:6379> hget user agess"22"

F) zset (sorted set)

On top of a set, each member gets a score (weight), and members are ordered by that score.

1. Add members

zadd key [NX|XX] [GT|LT] [CH] [INCR] score member [score member …]

127.0.0.1:6379> zadd english 1 98
(integer) 1
127.0.0.1:6379> zadd english 3 80
(integer) 1
127.0.0.1:6379> zadd english 2 85
(integer) 1
127.0.0.1:6379> zadd english 4 75 6 60  # add several members at once

2. Get all members

zrange key min max [BYSCORE|BYLEX] [REV] [LIMIT offset count] [WITHSCORES]

The withscores option outputs the scores together with the members.

127.0.0.1:6379> zrange english 0 -11) "98"2) "85"3) "80"4) "75"5) "60"

3. Get members in the range by weight

zrangebyscore key min max [WITHSCORES] [LIMIT offset count]

min is the lowest score and max the highest; -inf means negative infinity and +inf positive infinity. The withscores option outputs the scores together with the members. With limit, offset and count select a slice of the matching members (count members starting at position offset).

127.0.0.1:6379> zrangebyscore english -inf +inf
1) "98"
2) "85"
3) "80"
4) "75"
5) "60"
127.0.0.1:6379> zrangebyscore english -inf +inf withscores
 1) "98"
 2) "1"
 3) "85"
 4) "2"
 5) "80"
 6) "3"
 7) "75"
 8) "4"
 9) "60"
10) "6"
127.0.0.1:6379> zrangebyscore english -inf +inf limit 2 2
1) "80"
2) "75"

4. Remove n members from the set

zrem key member [member …]

127.0.0.1:6379> zrange english 0 -11) "98"2) "85"3) "80"4) "75"5) "60"127.0.0.1:6379> zrem english 98 60(integer) 2127.0.0.1:6379> zrange english 0 -11) "85"2) "80"3) "75"

5. Get the number of members in the set

zcard key

127.0.0.1:6379> zrange english 0 -11) "85"2) "80"3) "75"127.0.0.1:6379> zcard english(integer) 3

6. Get the number of members in a certain interval of the collection

zcount key min max

127.0.0.1:6379> zrange english 0 -1 withscores
1) "85"
2) "2"
3) "80"
4) "3"
5) "75"
6) "4"
127.0.0.1:6379> zcount english 1 3
(integer) 2
127.0.0.1:6379> zcount english 1 4
(integer) 3

7. Modify the weight of a member

zincrby key increment member

increment is the amount added to the member's current score; member is the member to change. If the member does not exist, it is added with the increment as its score.

127.0.0.1:6379> zrange english 0 -1 withscores
1) "85"
2) "2"
3) "80"
4) "3"
5) "75"
6) "4"
127.0.0.1:6379> zincrby english 5 "85"
"7"
127.0.0.1:6379> zrange english 0 -1 withscores
1) "80"
2) "3"
3) "75"
4) "4"
5) "85"
6) "7"
127.0.0.1:6379> zincrby english -5 "85"
"2"

6. Three special data types

A) geospatial (geographical location)

The underlying structure of geo is a zset (sorted set), so zset commands can also be used on geo keys.

Get the longitude and latitude of the address online URL 1: https://www.qvdv.com/tools/qvdv-coordinate.html

Get the longitude and latitude of the address online URL 2: http://www.daquan.la/jingwei/

1. Add geographic location

geoadd key [NX|XX] [CH] longitude latitude member [longitude latitude member …]

longitude stands for longitude, latitude stands for latitude, member stands for member name

127.0.0.1:6379> geoadd city 121.403484 31.256177 shanghaishi
(integer) 1
127.0.0.1:6379> geoadd city 116.413384 39.910925 beijingshi
(integer) 1

2. Get member's location information

geopos key member [member …]

127.0.0.1:6379> geopos city beijingshi shanghaishi  # get the positions of Beijing and Shanghai
1) 1) "116.41338318586349487"
   2) "39.9109247398676743"
2) 1) "121.40348285436630249"
   2) "31.25617763989148301"

3. Get the distance between two locations

geodist key member1 member2 [m|km|ft|mi]

m stands for meters, km stands for kilometers, ft stands for feet, mi stands for miles

127.0.0.1:6379> geodist city beijingshi shanghaishi km  # distance between Beijing and Shanghai
"1062.7281"

4. Get members within a certain location

GEORADIUS key longitude latitude radius m|km|ft|mi [WITHCOORD] [WITHDIST] [WITHHASH] [COUNT count]

Returns the members of key that lie within radius of the point given by longitude and latitude.

withdist also returns each member's distance from the centre, in the same unit as the radius; withcoord also returns each member's coordinates.

The count option limits the output to the n closest results.

127.0.0.1:6379> geoadd guangshang 113.575051 23.307731 tushuguan  # library
(integer) 1
127.0.0.1:6379> geoadd guangshang 113.574027 23.312377 jingbeizi  # Jinbeizi
(integer) 1
127.0.0.1:6379> geoadd guangshang 113.561181 23.271831 junhexiaoxue  # Junhe primary school
(integer) 1
127.0.0.1:6379> geoadd guangshang 113.571134 23.313091 tangcunditie  # Tangcun metro station
(integer) 1
127.0.0.1:6379> geoadd guangshang 113.575356 23.309656 youzheng  # post office
(integer) 1
# the coordinates below are teaching building no.1; find the members of guangshang within 300 m of it
127.0.0.1:6379> georadius guangshang 113.576129 23.308569 300 m
1) "youzheng"
2) "tushuguan"
127.0.0.1:6379> georadius guangshang 113.576129 23.308569 500 m  # members within 500 m
1) "youzheng"
2) "jingbeizi"
3) "tushuguan"
127.0.0.1:6379> georadius guangshang 113.576129 23.308569 500 m withdist count 2
1) 1) "tushuguan"
   2) "144.1020"
2) 1) "youzheng"
   2) "144.6107"

5. Get members within a certain range of a member

georadiusbymember key member radius m|km|ft|mi [WITHCOORD] [WITHDIST] [WITHHASH] [COUNT count]

127.0.0.1:6379> georadiusbymember guangshang tushuguan 500 m
1) "tushuguan"
2) "youzheng"
127.0.0.1:6379> georadiusbymember guangshang tushuguan 800 m
1) "tushuguan"
2) "tangcunditie"
3) "youzheng"
4) "jingbeizi"

6. Use zset commands to operate geo

Geo is essentially a zset, so zset commands can be used to operate on it:

127.0.0.1:6379> zrange guangshang 0 -1
1) "junhexiaoxue"
2) "tushuguan"
3) "tangcunditie"
4) "youzheng"
5) "jingbeizi"
127.0.0.1:6379> zrem guangshang junhexiaoxue tangcunditie  # remove 2 members
(integer) 2
127.0.0.1:6379> zrange guangshang 0 -1
1) "tushuguan"
2) "youzheng"
3) "jingbeizi"

B) hyperloglog

A cardinality-estimation algorithm: it counts the number of distinct elements, with an error that is acceptable for most uses.

It is often used to count unique visits to a web page. The traditional way is to store user IDs in a set and count the set's size, but with many distinct IDs this takes more and more memory and wastes resources; hyperloglog was designed to solve this.

The memory used by a hyperloglog is fixed at about 12 KB per key, with a standard error of 0.81%, which is acceptable.

1. Add elements

pfadd key element [element …]

127.0.0.1:6379> pfadd shouye a s d f g h j k l
(integer) 1

2. Statistics

pfcount key [key …]

127.0.0.1:6379> pfadd shouye a s d f g h j k l
(integer) 1
127.0.0.1:6379> pfcount shouye
(integer) 9
127.0.0.1:6379> pfadd golang z x c v b n m
(integer) 1
127.0.0.1:6379> pfcount shouye golang
(integer) 16
127.0.0.1:6379> pfcount golang

3. Combine multiple keys

pfmerge destkey sourcekey [sourcekey …]

destkey represents the new result set, sourcekey represents the source data set

127.0.0.1:6379> pfadd c++ a s d f g h j k l
(integer) 1
127.0.0.1:6379> pfadd python a s c b g h j n l
(integer) 1
127.0.0.1:6379> pfcount c++
(integer) 9
127.0.0.1:6379> pfcount python
(integer) 9
127.0.0.1:6379> pfcount c++ python
(integer) 12
127.0.0.1:6379> pfmerge yuyang c++ python
OK
127.0.0.1:6379> pfcount yuyang
(integer) 12

C) bitmaps

Bit-level storage: each bit is either 0 or 1. A single key holds many bits, and each bit is addressed and operated on individually by its offset (index).

1. Add the value of the bit

setbit key offset value

127.0.0.1:6379> setbit qiandao 0 1  # signed in on Monday
(integer) 0
127.0.0.1:6379> setbit qiandao 1 1  # signed in on Tuesday
(integer) 0
127.0.0.1:6379> setbit qiandao 2 1  # signed in on Wednesday
(integer) 0
127.0.0.1:6379> setbit qiandao 3 0  # did not sign in on Thursday
(integer) 0
127.0.0.1:6379> setbit qiandao 4 1  # signed in on Friday
(integer) 0
127.0.0.1:6379> setbit qiandao 5 1  # signed in on Saturday
(integer) 0
127.0.0.1:6379> setbit qiandao 6 0  # did not sign in on Sunday
(integer) 0

2. Get the value of a certain location

getbit key offset

127.0.0.1:6379> getbit qiandao 1  # Tuesday is 1, meaning signed in
(integer) 1
127.0.0.1:6379> getbit qiandao 3  # Thursday is 0, meaning not signed in
(integer) 0
127.0.0.1:6379> getbit qiandao 2
(integer) 1

3. Count the number of days for signing in

bitcount key [start end]

127.0.0.1:6379> bitcount qiandao  # without a range, all bits are counted
(integer) 5
127.0.0.1:6379> bitcount qiandao 0 -1
(integer) 5

7. Transactions

A transaction is a collection of commands. The commands in the transaction will be serialized and executed in order.

Traditional transactions are atomic (everything succeeds or everything fails), but Redis transactions do not guarantee atomicity; only individual commands are atomic.

All of the transaction's commands are pushed into a queue one by one and are not executed immediately; they run in order only when the transaction is finally executed. Redis transactions have no concept of isolation levels, so there are no dirty reads, phantom reads and so on.

Redis transaction command:

  • Open transaction (multi)
  • Command 1
  • Command 2
  • ...
  • Command n
  • Execution transaction (exec)

Transaction operation

127.0.0.1:6379> multi  # open the transaction
OK
127.0.0.1:6379(TX)> set name zhong  # commands are queued
QUEUED
127.0.0.1:6379(TX)> set age 18
QUEUED
127.0.0.1:6379(TX)> get age
QUEUED
127.0.0.1:6379(TX)> incr age
QUEUED
127.0.0.1:6379(TX)> exec  # execute the transaction; the queued commands run in order
1) OK
2) OK
3) "18"
4) (integer) 19

Discard transaction

127.0.0.1:6379> multi  # open the transaction
OK
127.0.0.1:6379(TX)> set name xiao  # queued command
QUEUED
127.0.0.1:6379(TX)> get name  # queued command
QUEUED
127.0.0.1:6379(TX)> discard  # discard the transaction
OK
127.0.0.1:6379> get name  # the value of name is unchanged, so the transaction did nothing
"zhong"

Command exception

If a command in the transaction is malformed (a command error), none of the commands are executed.

127.0.0.1:6379> multi  # open the transaction
OK
127.0.0.1:6379(TX)> set id 1
QUEUED
127.0.0.1:6379(TX)> get id id  # malformed command
(error) ERR wrong number of arguments for 'get' command
127.0.0.1:6379(TX)> exec  # execute the transaction
(error) EXECABORT Transaction discarded because of previous errors.
127.0.0.1:6379> get id  # the value is empty, so the set was never applied
(nil)

If a command is well-formed but fails at runtime, only that command fails and all the others still execute. This also shows that Redis transactions are not atomic.

127.0.0.1:6379> multi  # open the transaction
OK
127.0.0.1:6379(TX)> set id xxx
QUEUED
127.0.0.1:6379(TX)> incr id  # runtime error: a non-numeric value cannot be incremented
QUEUED
127.0.0.1:6379(TX)> exec  # execute the transaction
1) OK
2) (error) ERR value is not an integer or out of range
127.0.0.1:6379> get id  # the value is set, so the set command succeeded
"xxx"

8. Locks

Pessimistic lock and optimistic lock

Pessimistic locking assumes something can go wrong at any moment, so a lock is always taken, no matter what.

Optimistic locking assumes things will not go wrong, so no lock is taken. When updating the data, it checks whether anyone else has modified it in the meantime (as in MySQL: read a version number along with the value, and when writing back check whether the version has changed).

Watch

127.0.0.1:6379> set money 100
OK
127.0.0.1:6379> set out 0
OK
127.0.0.1:6379> watch money  # watch the money key
OK
127.0.0.1:6379> multi  # the transaction finishes normally here, money was not changed by another client while it ran
OK
127.0.0.1:6379(TX)> incrby money -20
QUEUED
127.0.0.1:6379(TX)> incrby out 20
QUEUED
127.0.0.1:6379(TX)> exec
1) (integer) 80
2) (integer) 20

Normally the transaction executes as above, but under high concurrency, if another client modifies money while this client's transaction is in progress, the transaction will fail.

The behavior is simulated below

  1. In one client, watch the money key and open a transaction, but do not commit it yet
127.0.0.1:6379> get money  # the current value is 80
"80"
127.0.0.1:6379> watch money  # start watching
OK
127.0.0.1:6379> multi
OK
127.0.0.1:6379(TX)> incrby money -50
QUEUED
127.0.0.1:6379(TX)> incrby out 50
QUEUED
# exec has not been issued yet
  2. Open another client and modify the money key
# enter the running redis container from the host
[email protected]:~# docker exec -it b4a21afd7dc8 /bin/bash
# connect to the redis service
[email protected]:/data# redis-cli -p 6379
127.0.0.1:6379> get money
"80"
127.0.0.1:6379> set money 10000
OK
  3. The watch set in step 1 now detects that money has been modified; committing the transaction in the first client fails
127.0.0.1:6379> get money
"80"
127.0.0.1:6379> watch money
OK
127.0.0.1:6379> multi
OK
127.0.0.1:6379(TX)> incrby money -50
QUEUED
127.0.0.1:6379(TX)> incrby out 50
QUEUED
127.0.0.1:6379(TX)> exec  # commit; execution fails and nil is returned
(nil)

It can be found that watch is actually an optimistic lock

After exec or discard, the watch on the key is automatically released; the unwatch command can also be used to cancel it.

To watch again after cancelling, the watch key command has to be issued again.

Using unwatch inside a transaction is pointless: once a watched transaction has been opened, any modification of the key by another client makes the transaction fail at exec, even if the watch is cancelled inside the transaction.

9. The configuration file redis.conf

1. Units (UNITS)

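The top of redis.conf documents the unit conventions in a comment block along these lines (the wording may differ slightly between versions):

# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
# units are case insensitive so 1GB 1Gb 1gB are all the same.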

2. Include other configuration files (INCLUDES)

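The INCLUDES section lets one configuration file pull in others; in the stock file the directives are commented out, roughly:

# include /path/to/local.conf
# include /path/to/other.conf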

3. Network configuration (NETWORK)

bind 127.0.0.1 -::1

bind: the listening address(es). If no bind directive is given, Redis listens for connections on all available network interfaces of the host. One or more IP addresses can follow the directive; an address may be prefixed with "-", which means Redis silently skips it if it is not available.

protected-mode yes

Whether protected mode is on

port 6379

port

4. General configuration (GENERAL)

daemonize no

Whether to run as a daemon. The default is no, but it is usually changed to yes so that Redis runs in the background; otherwise the terminal is occupied after Redis starts.

pidfile /var/run/redis_6379.pid

If Redis runs in the background, a pid file needs to be specified

loglevel notice

Log level: debug, verbose, notice or warning

logfile ""

The log file name; empty means log to standard output

databases 16

Number of databases, 16 by default

always-show-logo no

Whether to display the logo

5. Snapshotting (SNAPSHOTTING), important

Used for persistence: if at least the given number of write operations happen within the specified time, the data is persisted to a file.

The format is save <seconds> <changes>; Redis ships with default save points, which can be modified.

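In the 6.2 configuration file the default save points are documented (commented out) roughly as follows:

# save 3600 1      # after 3600 seconds if at least 1 key changed
# save 300 100     # after 300 seconds if at least 100 keys changed
# save 60 10000    # after 60 seconds if at least 10000 keys changed
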
stop-writes-on-bgsave-error yes

Whether to stop accepting writes if a background save fails (yes means stop)

rdbcompression yes

Whether to enable compression of RDB files, compression means consuming CPU resources

rdbchecksum yes

Whether to add a checksum to RDB files so that corruption can be detected when loading

dbfilename dump.rdb

File name of dump database

dir ./

The directory where the RDB file is saved; it defaults to the directory the redis server was started from

6. Master-slave replication (REPLICATION)

# replicaof <masterip> <masterport>

As a slave node, set the IP and port of the master node

7. SECURITY

# requirepass foobared

Password, commented out by default, empty password

8. Client (CLIENTS)

# maxclients 10000

Set the maximum number of clients connected to redis

9. MEMORY MANAGEMENT

# maxmemory

Set the maximum memory usage value

# maxmemory-policy noeviction

The eviction policy when memory reaches the maximum. There are six policies:

volatile-lru: evict keys with an expire set, using LRU

allkeys-lru: evict any key, using LRU

volatile-random: randomly evict keys with an expire set

allkeys-random: randomly evict any key

volatile-ttl: evict the keys with the shortest remaining TTL

noeviction: never evict; return an error instead (the default)

10. APPEND ONLY MODE

It is another way of persisting data, AOF (append only file) for short.

appendonly no

It is not turned on by default. Use rdb persistence by default, not aof

appendfilename "appendonly.aof"

File name after persistence

# appendfsync always  # fsync after every write; safest but costs performance
appendfsync everysec  # fsync once per second; up to 1 second of data may be lost
# appendfsync no  # never call fsync; let the operating system decide when to flush

This controls when the data is synced (fsynced) to disk.

10. Persistence

RDB (Redis Database)

Within a specified time interval, a snapshot of the in-memory dataset is written to disk; on recovery the snapshot file is read straight back into memory. This is the persistence mode used by default.

While the main process keeps running, Redis forks a child process to do the persistence. The data is first written to a temporary file; when the operation finishes, the temporary file replaces the previous RDB file.
The main process does no file I/O itself, which keeps it working normally and with high performance.
For large-scale data recovery where perfect integrity is not critical, RDB is the more efficient option. Its drawback is that data written after the last snapshot can be lost (for example on power failure) because the save conditions were not yet met.
The default file name is dump.rdb.

Regarding the configuration of RDB, modify it in the snapshot item of the configuration file redis.conf file, for example

save 10 5  # persist once if at least 5 write operations happen within 10 seconds

Trigger mechanism

  • The save rules are met
  • The flushall command is executed, deleting all database data
  • Redis exits via the shutdown command

Data recovery

When Redis starts, it reads the configuration to see whether RDB is enabled, finds the RDB file name and directory, then loads the contents of the RDB file into memory. So to restore data, simply place the RDB file in the directory named in the configuration.

Repair dump.rdb file

The installation directory contains a redis-check-rdb tool that can scan and repair a dump.rdb file. The repair is not always perfect; usually the corrupted part is simply removed.

AOF (Append Only File)

Record all the commands we operate, and when you need to restore, you only need to execute all the commands again.

The Redis main process forks a child process that records every write operation in a log (reads are not recorded); the file can only be appended to, never modified in place.
When restoring, all the commands in the file are read and replayed from the beginning, which can consume a lot of computing resources.
The default file name is appendonly.aof.

AOF is not enabled by default; it has to be turned on manually.

# enable it in the configuration file
appendonly yes

# or enable it from the redis-cli command line
127.0.0.1:6379> config get appendonly
1) "appendonly"
2) "no"
127.0.0.1:6379> config set appendonly yes
OK
127.0.0.1:6379> config get appendonly
1) "appendonly"
2) "yes"

Once enabled, an appendonly.aof file is automatically created in the directory the redis service was started from.

Then run set age 18 and look at what is inside the file:

REDIS0009�      redis-ver6.2.3�
�edis-bits�@�ctime2�`used-mem��N
 aof-preamble���namexiaoout�:money���VFq�0�v*2
$6
SELECT
$1
0
*3
$3
set
$3
age
$2
18

Repair appendonly.aof file

The installation directory contains a redis-check-aof tool that can scan and repair the appendonly.aof file. The repair is not always perfect; generally the corrupted part is removed.

redis-check-aof --fix appendonly.aof

11. Publish and Subscribe

Subscribe to multiple channels

subscribe channel [channel …]

Send information to the specified channel

publish channel message

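A minimal two-client session might look like this (the channel name chat is just an example):

# client 1
127.0.0.1:6379> subscribe chat
Reading messages... (press Ctrl-C to quit)
1) "subscribe"
2) "chat"
3) (integer) 1
1) "message"
2) "chat"
3) "hello"

# client 2
127.0.0.1:6379> publish chat hello
(integer) 1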

Cancel subscription of channel

unsubscribe [channel [channel …]]