Redis Introductory Study Notes, Part 2


12. Three special data types

12.1 GEO geographical location

Introduction:

The GEO feature of Redis was introduced in version 3.2. It stores geographic coordinates supplied by the user and supports operations on them, making it possible to build location-dependent features such as "people nearby" and "shake". Under the hood, GEO data is stored as a Zset.

GEO has six commonly used commands: geoadd, geopos, geodist, georadius, georadiusbymember, geohash

Official document: https://www.redis.net.cn/order/3685.html

command:

geoadd

Adds the specified geospatial locations (longitude, latitude, name) to the specified key. The data is stored in a sorted set (Zset) so that commands such as georadius and georadiusbymember can later run radius queries against it.

geoadd china:city 121.48 31.40 shanghai 113.88 22.55 shenzhen 120.21 30.20 hangzhou

Note: this command takes coordinates in the standard x, y order, so the longitude must come before the latitude. Valid longitudes are -180 to 180 degrees; valid latitudes are -85.05112878 to 85.05112878 degrees.


geopos

Returns the positions (longitude and latitude) of the given members stored under the key.

geopos key member [member ...]

geopos china:city shanghai hangzhou


geodist

Returns the distance between two given members. The optional unit must be one of m (meters), km (kilometers), mi (miles), or ft (feet); the default is m.

geodist key member1 member2 [unit]

geodist china:city shanghai hangzhou km


georadius

With the given longitude and latitude as the center, finds the members within a given radius.

Range units: m meters, km kilometers, mi miles, ft feet

georadius key longitude latitude radius m|km|ft|mi [withcoord] [withdist] [withhash] [asc|desc] [count count]

withdist : returns the distance from the center along with each member; the distance unit matches the radius unit given by the user.

withcoord : returns the longitude and latitude of each member.

withhash : returns the raw geohash-encoded sorted-set score of each member as a 52-bit unsigned integer. This option is mainly useful for low-level applications or debugging and is rarely needed in practice.

asc : returns members sorted from nearest to farthest relative to the center.

desc : returns members sorted from farthest to nearest relative to the center.

count : returns only the first n matching members, which can reduce bandwidth when the result set is large.

For example:

georadius china:city 120 30 1500 km withdist


georadiusbymember

Finds the members within the given radius, with the center point determined by an existing member rather than by explicit coordinates.

georadiusbymember key member radius m|km|ft|mi [withcoord] [withdist] [withhash] [asc|desc] [count count]

For example:

georadiusbymember china:city shanghai 1500 km


geohash

Returns the geohash representation of one or more location elements.

Redis uses geohash to convert two-dimensional longitude and latitude into a one-dimensional string. The longer the string, the more precise the position; and the more similar two strings are, the closer the two positions.

geohash key member [member ...]

For example:

geohash china:city shanghai hangzhou


zrem

GEO provides no command for deleting members, but because GEO is stored as a Zset underneath, the zrem command can be used to delete geographic entries.

zrem china:city shanghai: remove a member

zrange china:city 0 -1: list all members

Command demonstration

12.2 HyperLogLog

Introduction

Redis added the HyperLogLog structure in version 2.8.9.

Redis HyperLogLog is a data structure for cardinality statistics. Its advantage is that even when the number and volume of input elements are very large, the space required to compute the cardinality stays fixed and small.

Cardinality : for example, for the data set {1, 3, 5, 7, 5, 7, 8}, the cardinality set is {1, 3, 5, 7, 8}, and the cardinality (the number of distinct elements) is 5. Cardinality estimation means computing the cardinality quickly, within an acceptable margin of error.
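The cardinality definition above can be checked in a couple of lines (a plain Python sketch, independent of Redis):

```python
# Cardinality of the data set from the text: only distinct elements count
data = [1, 3, 5, 7, 5, 7, 8]
cardinality_set = set(data)       # the cardinality set {1, 3, 5, 7, 8}
print(sorted(cardinality_set))    # [1, 3, 5, 7, 8]
print(len(cardinality_set))       # 5
```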

In Redis, each HyperLogLog key needs only 12 KB of memory to count the cardinality of close to 2^64 distinct elements. This is in sharp contrast to a set, which consumes more memory as more elements are added.

HyperLogLog is an algorithm that provides approximate (non-exact) deduplicated counting.

For example: suppose I want to count the UV of a web page (the number of distinct visiting users; multiple visits by the same user in one day count only once). The traditional solution is to store the user IDs in a Set and take the number of elements in the Set as the page UV. However, this only scales to a small number of users; once the user base grows, storing all the IDs consumes a lot of space. My goal is to count users, not to store them, so that approach is thankless. With Redis HyperLogLog, counting a huge number of users takes at most 12 KB. Although it has an error rate of about 0.81%, that is negligible for statistics such as UV that do not need to be exact.

Basic command

pfadd key element [element ...]: add the specified elements to the HyperLogLog

pfcount key [key ...]: return the estimated cardinality of the given HyperLogLog(s)

pfmerge destkey sourcekey [sourcekey ...]: merge multiple HyperLogLogs into one and compute the union
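Put together, a minimal redis-cli session might look like this (the key names uv:page1, uv:page2, and uv:total are made up for illustration; with inputs this small, pfcount happens to return exact counts):

```
127.0.0.1:6379> pfadd uv:page1 user1 user2 user3
(integer) 1
127.0.0.1:6379> pfadd uv:page2 user3 user4
(integer) 1
127.0.0.1:6379> pfcount uv:page1
(integer) 3
127.0.0.1:6379> pfmerge uv:total uv:page1 uv:page2
OK
127.0.0.1:6379> pfcount uv:total
(integer) 4
```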

12.3 BitMap

Introduction

​ During development you may need to record simple per-user state, such as active or inactive, logged in or not logged in; or you may need to record a user's check-ins over a year, with 1 for a check-in and 0 for no check-in. With ordinary key/value storage this takes 365 records per user, and when the number of users is large, so is the space required. Redis therefore provides the Bitmap data structure. A Bitmap records information as individual bits, i.e. 0 and 1. To record 365 days of check-in status, a Bitmap looks roughly like 0101000111000111... The benefit, of course, is memory savings: 365 days is 365 bits, and 1 byte = 8 bits, so it takes only 46 bytes.
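The 46-byte figure comes from rounding 365 bits up to whole bytes; a quick Python check:

```python
import math

days = 365
bits_per_byte = 8
# one bit per day, rounded up to whole bytes
bytes_needed = math.ceil(days / bits_per_byte)
print(bytes_needed)  # 46
```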

​ A BitMap uses one bit to represent the value or state of an element, and the key is the element itself. Under the hood it is implemented with operations on strings. Redis added bitmap-related commands such as setbit, getbit, and bitcount in version 2.2.

Basic command
  • setbit key n value: set the nth bit of the key to 0 or 1; n starts from 0.
  • getbit key n: get the value of the nth bit; returns 0 if it has not been set.
  • bitcount key [start end]: count the number of bits set to 1.
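For instance, recording check-ins on days 0 and 2 for a hypothetical key sign:user1 (a sketch of a redis-cli session; note that setbit returns the bit's previous value):

```
127.0.0.1:6379> setbit sign:user1 0 1
(integer) 0
127.0.0.1:6379> setbit sign:user1 2 1
(integer) 0
127.0.0.1:6379> getbit sign:user1 1
(integer) 0
127.0.0.1:6379> bitcount sign:user1
(integer) 2
```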

13. Redis transaction

theory


The concept of redis transactions:

The essence of a Redis transaction is a collection of commands. A transaction supports executing multiple commands in one go, and all commands in a transaction are serialized. During execution, the queued commands run serially, in order, and command requests submitted by other clients are not inserted into the transaction's command sequence.

In summary: a Redis transaction is the one-off, ordered, exclusive execution of a queue of commands.


There is no concept of isolation level for redis transactions:

The batched commands are placed in a queue buffer before the exec command is sent and are not actually executed until then, so isolation levels do not apply.


Redis does not guarantee atomicity:

In Redis a single command is executed atomically, but a transaction does not guarantee atomicity and there is no rollback: if any command in the transaction fails during execution, the remaining commands still run.


The three stages of redis transactions:

Start the transaction (multi)

Commands are enqueued

Execute the transaction (exec)


Related commands for redis transactions

  • watch key1 key2 ...: monitor one or more keys. If a monitored key is changed by another command before the transaction executes, the transaction is aborted (similar to optimistic locking)
  • unwatch: cancel the monitoring of all keys set by watch.
  • multi: mark the beginning of a transaction block (commands are queued)
  • exec: execute all commands in the transaction block (once exec runs, any watches set earlier are cancelled)
  • discard: cancel the transaction, abandoning all commands in the transaction block
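A minimal successful transaction in redis-cli might look like this (k1 and v1 are arbitrary example values):

```
127.0.0.1:6379> multi
OK
127.0.0.1:6379> set k1 v1
QUEUED
127.0.0.1:6379> get k1
QUEUED
127.0.0.1:6379> exec
1) OK
2) "v1"
```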

Optimistic lock and pessimistic lock?

  • Pessimistic locking (Pessimistic Lock), as the name suggests, is pessimistic: every time you fetch data you assume someone else will modify it, so you lock the data each time, and anyone else who wants it blocks until they obtain the lock. Traditional relational databases use many such locking mechanisms: row locks, table locks, read locks, write locks, and so on, all acquired before the operation.
  • Optimistic locking (Optimistic Lock), as the name suggests, is optimistic: every time you fetch data you assume no one else will modify it, so you do not lock; instead, on update you check whether anyone else modified the data in the meantime, for example with a version-number mechanism. Optimistic locking suits read-heavy applications and can improve throughput. Optimistic locking strategy: the update only proceeds if the submitted version is greater than the record's current version.

practice

Normal execution

Abandon the transaction

If there is a command error in the transaction queue (similar to a Java compile-time error), none of the commands are executed when exec runs.

If a queued command fails at runtime (similar to Java's 1/0 runtime exception), then when exec runs, the other, correct commands are still executed and only the failing command raises an error.
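A sketch of the runtime-error case in redis-cli (incr on a string value queues fine but fails at exec, while the other command still runs; the key names are arbitrary):

```
127.0.0.1:6379> set k1 hello
OK
127.0.0.1:6379> multi
OK
127.0.0.1:6379> incr k1
QUEUED
127.0.0.1:6379> set k2 v2
QUEUED
127.0.0.1:6379> exec
1) (error) ERR value is not an integer or out of range
2) OK
127.0.0.1:6379> get k2
"v2"
```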

Demonstration of the watch command (watch monitors whether a key changes between being watched and the transaction executing) - transaction executes successfully

Use watch - transaction execution failed

During the transaction, a value monitored by watch was changed, so the transaction failed. In that case, abandon the monitoring and start over.

**Note:** once exec runs the transaction, the watch on the variables is cancelled regardless of whether the transaction succeeded or failed. Therefore, when a transaction fails, you must run watch again to monitor the variables and start a new transaction.

The watch command behaves like an optimistic lock. When the transaction is committed, if the value of any key monitored by watch has been changed by another client, then when exec runs, the transaction queue is not executed and a null multi-bulk reply is returned to notify the caller that the transaction failed.
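The watch-conflict scenario can be sketched as follows (money is a made-up key; client B's write happens between client A's watch and exec):

```
# client A
127.0.0.1:6379> set money 100
OK
127.0.0.1:6379> watch money
OK
127.0.0.1:6379> multi
OK
127.0.0.1:6379> decrby money 20
QUEUED
# client B runs in the meantime: set money 500
127.0.0.1:6379> exec
(nil)
```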

14. Spring Boot integration with Redis

step:

  • Import dependencies
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>

Configure in application.properties

spring.redis.host=115.29.232.195
spring.redis.port=6379
spring.redis.database=0
spring.redis.timeout=50000

test

@SpringBootTest
class SpringbootRedisApplicationTests {
    @Autowired
    private RedisTemplate redisTemplate;

    @Test
    void contextLoads() {
        /*
        * opsForList: operate on lists (similar to string operations)
        * opsForGeo: operate on geo
        * opsForSet
        * .......
        * Same as the redis command line.
        * */
       /* Get the redis connection object */
//        RedisConnection connection = redisTemplate.getConnectionFactory().getConnection();
//        connection.flushDb();
//        connection.flushAll();

        redisTemplate.opsForValue().set("mykey","kuangshen");
        System.out.println(redisTemplate.opsForValue().get("mykey"));
        redisTemplate.opsForSet().add("k5","v9");
        Set k5 = redisTemplate.opsForSet().members("k5");
        System.out.println(k5);

    }
}

Redis configuration

  • Alibaba Cloud sets the redis security group: port 6379

Modify redis.conf

( In /usr/local/bin/myredisconfig/redis.conf )

bind 127.0.0.1 is changed to bind 0.0.0.0 (commenting it out also works)

protected-mode yes is changed to protected-mode no (this option controls protected mode, which is on by default; when on, Redis accepts only local connections and denies external access)

daemonize no changed to daemonize yes

Note: re-login to redis after modification


About firewall settings

rpm -qa|grep firewalld;rpm -qa|grep firewall-config: check whether firewalld and firewall-config are installed. On CentOS, firewalld is installed by default, while firewall-config must be installed manually.

yum -y update firewalld: Update firewalld to the latest version

yum -y install firewall-config: Install firewall-config

systemctl start firewalld: Start firewalld service

systemctl status firewalld: View the status of firewalld

systemctl stop firewalld: Stop firewalld service

systemctl enable firewalld: enable the firewalld service to start automatically at boot

Recommended reference article: Detailed explanation of the installation and use of firewalld in CentOS7 https://blog.csdn.net/solaraceboy/article/details/78342360


Check whether the Alibaba Cloud firewall opens the port number

firewall-cmd --query-port=6379/tcp: if the output is yes, the port is open.

If it is no,

open port 6379 permanently: firewall-cmd --zone=public --add-port=6379/tcp --permanent

Reload: firewall-cmd --reload

Then check again whether the port is open: firewall-cmd --query-port=6379/tcp

View redis service process

 ps -ef | grep redis

If all of the above has been tried and it still does not work, try restarting the server.

Recommend a useful redis client tool

​ Another Redis Desktop Manager

Download link: https://github.com/qishibo/AnotherRedisDesktopManager

You can use it to test and manage redis

Redis serialization configuration

  • According to the source code, Spring Boot automatically registers a redisTemplate and a stringRedisTemplate in the container, but the generic type of that redisTemplate is <Object, Object>, which is inconvenient: it forces type-conversion code, and it does not configure the serialization of keys and values. Therefore a custom configuration class is needed.
  • Why serialize: https://www.jianshu.com/p/cc5a29b06b3d
package com.kuang.config;

import com.fasterxml.jackson.annotation.JsonAutoDetect;
import com.fasterxml.jackson.annotation.PropertyAccessor;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.Jackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;

import java.net.UnknownHostException;

/**
 * Created with Intellij IDEA
 * Description:
 * user: CoderChen
 * Date: 2021-06-06
 * Time: 14:05
 */
@Configuration
public class RedisConfig {
    /* Our own RedisTemplate -- a commonly reused template */
    @Bean
    @SuppressWarnings("all")
    public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory redisConnectionFactory) throws UnknownHostException {
        /* For convenience, we generally use <String, Object> directly */
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(redisConnectionFactory);
        /* JSON serialization config */
        Jackson2JsonRedisSerializer jackson2JsonRedisSerializer = new Jackson2JsonRedisSerializer(Object.class);
        ObjectMapper om = new ObjectMapper();
        om.setVisibility(PropertyAccessor.ALL, JsonAutoDetect.Visibility.ANY);
        om.enableDefaultTyping(ObjectMapper.DefaultTyping.NON_FINAL);
        jackson2JsonRedisSerializer.setObjectMapper(om);
        /* String serialization config */
        StringRedisSerializer stringRedisSerializer = new StringRedisSerializer();

        /* Configure the concrete serializers */
        /* Keys use String serialization */
        template.setKeySerializer(stringRedisSerializer);
        /* Hash keys use String serialization */
        template.setHashKeySerializer(stringRedisSerializer);
        /* Values use Jackson serialization */
        template.setValueSerializer(jackson2JsonRedisSerializer);
        /* Hash values use Jackson serialization */
        template.setHashValueSerializer(jackson2JsonRedisSerializer);
        template.afterPropertiesSet();
        return template;
    }
}

Redis tools

  • When using redisTemplate to operate redis directly, a lot of code is required, so directly encapsulate a redisUtils, which makes it easier to write code. This redisUtils is handed over to the spring container for instantiation, and annotated and injected directly when used.
package com.kuang.utils;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.stereotype.Component;
import org.springframework.util.CollectionUtils;

import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.TimeUnit;
/**
 * Created with Intellij IDEA
 * Description:
 * user: CoderChen
 * Date: 2021-06-06
 * Time: 14:52
 */
/* Frequently used in real-world development */
@Component
public final class RedisUtil {
    @Autowired
    private RedisTemplate<String, Object> redisTemplate;
    // =============================common============================

    /**
     * Set an expiration time for a cached key
     *
     * @param key  key
     * @param time time (seconds)
     */
    public boolean expire(String key, long time) {
        try {
            if (time > 0) {
                redisTemplate.expire(key, time, TimeUnit.SECONDS);
            }
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Get the remaining time-to-live of a key
     *
     * @param key key, must not be null
     * @return time (seconds); 0 means the key never expires
     */
    public long getExpire(String key) {
        return redisTemplate.getExpire(key, TimeUnit.SECONDS);
    }

    /**
     * Check whether a key exists
     *
     * @param key key
     * @return true if it exists, false otherwise
     */
    public boolean hasKey(String key) {
        try {
            return redisTemplate.hasKey(key);
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Delete cached keys
     *
     * @param key one or more keys
     */
    @SuppressWarnings("unchecked")
    public void del(String... key) {
        if (key != null && key.length > 0) {
            if (key.length == 1) {
                redisTemplate.delete(key[0]);
            } else {
                redisTemplate.delete((Collection<String>) CollectionUtils.arrayToList(key));
            }
        }
    }
// ============================String=============================

    /**
     * Get a plain (string) cache entry
     *
     * @param key key
     * @return value
     */
    public Object get(String key) {
        return key == null ? null : redisTemplate.opsForValue().get(key);
    }

    /**
     * Put a plain cache entry
     *
     * @param key   key
     * @param value value
     * @return true on success, false on failure
     */
    public boolean set(String key, Object value) {
        try {
            redisTemplate.opsForValue().set(key, value);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Put a plain cache entry with an expiration time
     *
     * @param key   key
     * @param value value
     * @param time  time (seconds); must be greater than 0, otherwise the key never expires
     * @return true on success, false on failure
     */
    public boolean set(String key, Object value, long time) {
        try {
            if (time > 0) {
                redisTemplate.opsForValue().set(key, value, time,
                        TimeUnit.SECONDS);
            } else {
                set(key, value);
            }
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Increment
     *
     * @param key   key
     * @param delta amount to increment by (greater than 0)
     */
    public long incr(String key, long delta) {
        if (delta < 0) {
            throw new RuntimeException("Increment factor must be greater than 0");
        }
        return redisTemplate.opsForValue().increment(key, delta);
    }

    /**
     * Decrement
     *
     * @param key   key
     * @param delta amount to decrement by (greater than 0)
     */
    public long decr(String key, long delta) {
        if (delta < 0) {
            throw new RuntimeException("Decrement factor must be greater than 0");
        }
        return redisTemplate.opsForValue().increment(key, -delta);
    }
// ================================Map=================================

    /**
     * HashGet
     *
     * @param key  key, must not be null
     * @param item field, must not be null
     */
    public Object hget(String key, String item) {
        return redisTemplate.opsForHash().get(key, item);
    }

    /**
     * Get all field-value pairs of a hash key
     *
     * @param key key
     * @return the corresponding field-value pairs
     */
    public Map<Object, Object> hmget(String key) {
        return redisTemplate.opsForHash().entries(key);
    }

    /**
     * HashSet: store multiple field-value pairs
     *
     * @param key key
     * @param map the field-value pairs
     */
    public boolean hmset(String key, Map<String, Object> map) {
        try {
            redisTemplate.opsForHash().putAll(key, map);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * HashSet with an expiration time
     *
     * @param key  key
     * @param map  the field-value pairs
     * @param time time (seconds)
     * @return true on success, false on failure
     */
    public boolean hmset(String key, Map<String, Object> map, long time) {
        try {
            redisTemplate.opsForHash().putAll(key, map);
            if (time > 0) {
                expire(key, time);
            }
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Put one entry into a hash table, creating the table if it does not exist
     *
     * @param key   key
     * @param item  field
     * @param value value
     * @return true on success, false on failure
     */
    public boolean hset(String key, String item, Object value) {
        try {
            redisTemplate.opsForHash().put(key, item, value);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Put one entry into a hash table with an expiration time, creating the table if it does not exist
     *
     * @param key   key
     * @param item  field
     * @param value value
     * @param time  time (seconds); note: if the hash already has an expiration time, it is replaced
     * @return true on success, false on failure
     */
    public boolean hset(String key, String item, Object value, long time) {
        try {
            redisTemplate.opsForHash().put(key, item, value);
            if (time > 0) {
                expire(key, time);
            }
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Delete values from a hash table
     *
     * @param key  key, must not be null
     * @param item one or more fields, must not be null
     */
    public void hdel(String key, Object... item) {
        redisTemplate.opsForHash().delete(key, item);
    }

    /**
     * Check whether a hash table contains a field
     *
     * @param key  key, must not be null
     * @param item field, must not be null
     * @return true if it exists, false otherwise
     */
    public boolean hHasKey(String key, String item) {
        return redisTemplate.opsForHash().hasKey(key, item);
    }

    /**
     * Hash increment; creates the field if it does not exist and returns the new value
     *
     * @param key  key
     * @param item field
     * @param by   amount to increment by (greater than 0)
     */
    public double hincr(String key, String item, double by) {
        return redisTemplate.opsForHash().increment(key, item, by);
    }

    /**
     * Hash decrement
     *
     * @param key  key
     * @param item field
     * @param by   amount to decrement by (greater than 0)
     */
    public double hdecr(String key, String item, double by) {
        return redisTemplate.opsForHash().increment(key, item, -by);
    }
// ============================set=============================

    /**
     * Get all values of a Set by key
     *
     * @param key key
     */
    public Set<Object> sGet(String key) {
        try {
            return redisTemplate.opsForSet().members(key);
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }

    /**
     * Check whether a value exists in a set
     *
     * @param key   key
     * @param value value
     * @return true if it exists, false otherwise
     */
    public boolean sHasKey(String key, Object value) {
        try {
            return redisTemplate.opsForSet().isMember(key, value);
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Put data into a set cache
     *
     * @param key    key
     * @param values one or more values
     * @return number of values added
     */
    public long sSet(String key, Object... values) {
        try {
            return redisTemplate.opsForSet().add(key, values);
        } catch (Exception e) {
            e.printStackTrace();
            return 0;
        }
    }

    /**
     * Put set data into the cache with an expiration time
     *
     * @param key    key
     * @param time   time (seconds)
     * @param values one or more values
     * @return number of values added
     */
    public long sSetAndTime(String key, long time, Object... values) {
        try {
            Long count = redisTemplate.opsForSet().add(key, values);
            if (time > 0)
                expire(key, time);
            return count;
        } catch (Exception e) {
            e.printStackTrace();
            return 0;
        }
    }

    /**
     * Get the size of a set cache
     *
     * @param key key
     */
    public long sGetSetSize(String key) {
        try {
            return redisTemplate.opsForSet().size(key);
        } catch (Exception e) {
            e.printStackTrace();
            return 0;
        }
    }

    /**
     * Remove the given values from a set
     *
     * @param key    key
     * @param values one or more values
     * @return number of values removed
     */
    public long setRemove(String key, Object... values) {
        try {
            Long count = redisTemplate.opsForSet().remove(key, values);
            return count;
        } catch (Exception e) {
            e.printStackTrace();
            return 0;
        }
    }
// ===============================list=================================

    /**
     * Get the contents of a list cache
     *
     * @param key   key
     * @param start start index
     * @param end   end index; 0 to -1 means all values
     */
    public List<Object> lGet(String key, long start, long end) {
        try {
            return redisTemplate.opsForList().range(key, start, end);
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }

    /**
     * Get the length of a list cache
     *
     * @param key key
     */
    public long lGetListSize(String key) {
        try {
            return redisTemplate.opsForList().size(key);
        } catch (Exception e) {
            e.printStackTrace();
            return 0;
        }
    }

    /**
     * Get a value from a list by index
     *
     * @param key   key
     * @param index index; when index >= 0, 0 is the head, 1 the second element, and so on;
     *              when index < 0, -1 is the tail, -2 the second-to-last element, and so on
     */
    public Object lGetIndex(String key, long index) {
        try {
            return redisTemplate.opsForList().index(key, index);
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }

    /**
     * Put a value into a list cache
     *
     * @param key   key
     * @param value value
     */
    public boolean lSet(String key, Object value) {
        try {
            redisTemplate.opsForList().rightPush(key, value);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Put a value into a list cache with an expiration time
     *
     * @param key   key
     * @param value value
     * @param time  time (seconds)
     */
    public boolean lSet(String key, Object value, long time) {
        try {
            redisTemplate.opsForList().rightPush(key, value);
            if (time > 0)
                expire(key, time);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Put a whole list into the cache
     *
     * @param key   key
     * @param value values
     */
    public boolean lSet(String key, List<Object> value) {
        try {
            redisTemplate.opsForList().rightPushAll(key, value);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Put a whole list into the cache with an expiration time
     *
     * @param key   key
     * @param value values
     * @param time  time (seconds)
     */
    public boolean lSet(String key, List<Object> value, long time) {
        try {
            redisTemplate.opsForList().rightPushAll(key, value);
            if (time > 0)
                expire(key, time);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Modify an element of a list by index
     *
     * @param key   key
     * @param index index
     * @param value value
     */
    public boolean lUpdateIndex(String key, long index, Object value) {
        try {
            redisTemplate.opsForList().set(key, index, value);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }


    /**
     * Remove count occurrences of value
     *
     * @param key   key
     * @param count how many to remove
     * @param value value
     * @return number removed
     */
    public long lRemove(String key, long count, Object value) {
        try {
            Long remove = redisTemplate.opsForList().remove(key, count, value);
            return remove;
        } catch (Exception e) {
            e.printStackTrace();
            return 0;
        }
    }
}


Test code

package com.kuang;

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.kuang.pojo.User;
import com.kuang.utils.RedisUtil;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.data.redis.connection.RedisConnection;
import org.springframework.data.redis.core.RedisTemplate;

import java.util.Set;

@SpringBootTest
class SpringbootRedisApplicationTests {
    @Autowired
    @Qualifier("redisTemplate")/* Avoid clashing with the framework bean; resolve to our custom redisTemplate */
    private RedisTemplate redisTemplate;

    @Autowired
    private RedisUtil redisUtil;


    @Test
    void contextLoads() {
        /*
        * opsForList: operate on lists (similar to string operations)
        * opsForGeo: operate on geo
        * opsForSet
        * .......
        * Same as the redis command line.
        * */
       /* Get the redis connection object */
//        RedisConnection connection = redisTemplate.getConnectionFactory().getConnection();
//        connection.flushDb();
//        connection.flushAll();

        redisTemplate.opsForValue().set("mykey","kuangshen");
        System.out.println(redisTemplate.opsForValue().get("mykey"));
        redisTemplate.opsForSet().add("k5","v9");
        Set k5 = redisTemplate.opsForSet().members("k5");
        System.out.println(k5);



    }

    @Test
    public void test() throws JsonProcessingException {
        /* In real development, JSON is generally used to transfer data */
        User user = new User("kuangshen", 3);
//        String jsonUser = new ObjectMapper().writeValueAsString(user);
//        redisTemplate.opsForValue().set("user", jsonUser);
        redisTemplate.opsForValue().set("user", user);
        System.out.println(redisTemplate.opsForValue().get("user"));

    }

    @Test
    public void test1() {
        redisUtil.set("username", "coderchen");
        System.out.println(redisUtil.get("username"));
    }

}

15. conf configuration file analysis

Familiar with the basic configuration

  • The location of the redis configuration file is /usr/local/redis/redis-6.06 redis.conf, but we usually copy it to /usr/local/bin/myredisconfig redis.conf and modify the copy, which keeps the original file safe.

View the basic configuration in redis

config get *: Get all the configuration, you can check whether the configuration is modified successfully.

Units

Configures size units; some basic measurement units are defined at the beginning of the file. Only bytes are supported, not bits, and units are not case sensitive.


includes

Similar to a spring configuration file, other files can be included through includes: redis.conf is the main file and can include other files. (The English comments above explain this.)


network network configuration

bind 0.0.0.0: bind host address

protected-mode no: protected mode, on (yes) by default. When enabled, Redis only accepts local connections and denies external access. Set it to no if external access is needed.

port 6379: the default port


general

daemonize yes: By default, redis does not run as a daemon process. If you need to enable it, change it to yes

supervised no: The redis daemon can be managed through upstart and systemd.

pidfile /var/run/redis_6379.pid: To run redis as a background process, you need to specify the pid file.

loglevel notice: log level. The options are:

debug: record a large amount of log information, suitable for development and testing phases
verbose: more log information
notice: Appropriate amount of log information, suitable for production environment
warning: Only part of the important information will be recorded

logfile "": location of the log file; an empty string means standard output.

databases 16: Set the number of databases, the default database is DB 0

always-show-logo yes: whether to always show the logo


snapshopting snapshot


save 900 1: at least 1 key changed within 900 seconds (15 minutes) triggers a save (persistence)
save 300 10: at least 10 keys changed within 300 seconds (5 minutes) triggers a save (persistence)
save 60 10000: at least 10000 keys changed within 60 seconds (1 minute) triggers a save (persistence)
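The three save rules combine with OR semantics: a snapshot is triggered as soon as any single rule is satisfied. A minimal Java sketch of that check (the class and method names here are illustrative, not Redis internals):

```java
import java.util.List;

public class SaveRules {
    /** One "save <seconds> <changes>" rule from redis.conf. */
    record Rule(int seconds, int changes) {}

    /** True if any rule is satisfied by the elapsed time and change count. */
    static boolean shouldSnapshot(List<Rule> rules, int secondsSinceLastSave, int changesSinceLastSave) {
        return rules.stream().anyMatch(r ->
                secondsSinceLastSave >= r.seconds() && changesSinceLastSave >= r.changes());
    }

    public static void main(String[] args) {
        List<Rule> rules = List.of(new Rule(900, 1), new Rule(300, 10), new Rule(60, 10000));
        System.out.println(shouldSnapshot(rules, 100, 20));  // false: no rule matched yet
        System.out.println(shouldSnapshot(rules, 301, 10));  // true: matches "save 300 10"
    }
}
```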

stop-writes-on-bgsave-error yes: whether Redis stops accepting writes when a background save fails; yes means writes are refused after an error.

rdbcompression yes: whether to compress rdb files. yes: compress, at some CPU cost; no: no compression, more disk space required.

rdbchecksum yes: whether to checksum the RDB file. This improves fault tolerance, but costs about 10% in performance when saving the RDB file.

dbfilename dump.rdb: name of the rdb file

dir ./: data directory; the database is written in this directory, and rdb and aof files are also written here


security

Obtain and set password: (need to enter redis first)

config get requirepass

config set requirepass "123456"

Then every time you log in to redis, you must use auth authentication:

auth password

Note: if redis runs on a server without a password set, it may be attacked and data may be lost.

To set the password in the conf file, uncomment the requirepass line and change the field that follows it to the desired password.
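Once requirepass is set, a Spring Boot client like the one in the test code above must supply the password as well, e.g. in application.properties (the host, port, and password values here are illustrative):

```properties
# connection settings for spring-boot-starter-data-redis (example values)
spring.redis.host=127.0.0.1
spring.redis.port=6379
spring.redis.password=123456
```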

clients

maxclients 10000: Set the maximum number of client connections that can connect to redis

memory management

maxmemory: The maximum memory capacity configured by redis

maxmemory-policy noeviction: maxmemory-policy memory reaches the upper limit processing strategy

volatile-lru: use the LRU algorithm to remove keys that have an expiration time set.
volatile-random: randomly remove keys that have an expiration time set.
volatile-ttl: remove keys that are about to expire, shortest TTL first.
allkeys-lru: use the LRU algorithm to remove any key.
allkeys-random: randomly remove any key.
noeviction: do not remove any key; just return an error on writes.
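The allkeys-lru policy above evicts the least-recently-used key when memory is full. The idea can be sketched in Java with an access-ordered LinkedHashMap (a toy cache for illustration, not Redis's actual approximate-LRU implementation):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        super(16, 0.75f, true); // accessOrder=true: reads refresh recency
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries; // evict the least-recently-used entry
    }

    public static void main(String[] args) {
        LruCache<String, String> cache = new LruCache<>(2);
        cache.put("k1", "v1");
        cache.put("k2", "v2");
        cache.get("k1");          // touch k1 so k2 becomes least recently used
        cache.put("k3", "v3");    // exceeds capacity: k2 is evicted
        System.out.println(cache.keySet()); // [k1, k3]
    }
}
```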

append only mode

Common configuration introduction

  1. Redis does not run as a daemon by default. Modify this configuration item and use yes to enable the daemon.
daemonize no —> daemonize yes
  1. When Redis is running as a daemon, Redis will write the pid to the /var/run/redis.pid file by default, which can be specified by pidfile
pidfile /var/run/redis.pid
  1. Specify the Redis listening port. The default port is 6379. The author explained in a blog post why he chose 6379 as the default port, because 6379 is the number corresponding to MERZ on the phone button, and MERZ is taken from the name of the Italian singer Alessia Merz
port 6379
  1. Bound host address
bind 127.0.0.1
  1. Close the connection after the client has been idle for the given number of seconds; 0 disables this feature
timeout 300
  1. Specify the logging level, Redis supports a total of four levels: debug, verbose, notice, warning, the default is verbose
loglevel verbose
  1. The logging mode defaults to standard output. If Redis is configured to run as a daemon and logging is configured as standard output, logs are sent to /dev/null
logfile stdout
  1. Set the number of databases, the default database is 0, you can use the SELECT command to specify the database id on the connection
databases 16
  1. Specify after how many update operations within a given period the data is synchronized to the data file; multiple conditions can be combined
save
Three conditions are provided in the Redis default configuration file:
save 900 1
save 300 10
save 60 10000
meaning a save triggers on 1 change within 900 seconds (15 minutes), 10 changes within 300 seconds (5 minutes), or 10000 changes within 60 seconds (1 minute).
  1. Specify whether to compress data when storing to the local database. The default is yes. Redis uses LZF compression. If you want to save CPU time, you can turn off this option, but the database file will become huge
rdbcompression yes
  1. Specify the local database file name, the default value is dump.rdb
dbfilename dump.rdb
  1. Specify the local database storage directory
dir ./
  1. Set when this machine is a slave service: specify the IP address and port of the master service, and when Redis starts it automatically synchronizes data from the master
slaveof
  1. The password the slave service uses to connect to the master when the master is password protected
masterauth
  1. Set the Redis connection password. If the connection password is configured, the client needs to provide the password through the AUTH command when connecting to Redis, which is closed by default
requirepass foobared
  1. Set the maximum number of client connections at the same time. The default is unlimited. The number of client connections that Redis can open at the same time is the maximum number of file descriptors that can be opened by the Redis process. If you set maxclients 0, it means that there is no limit. When the number of client connections reaches the limit, Redis will close the new connection and return a max number of clients reached error message to the client
maxclients 128
  1. Specify the Redis maximum memory limit. Redis loads data into memory at startup. Once the maximum memory is reached, Redis first tries to clear expired or about-to-expire keys; if the limit is still reached after that, write operations fail, but read operations still work. With Redis's vm mechanism, keys are stored in memory and values in the swap area
maxmemory
  1. Specify whether to log every update operation. Redis writes data to disk asynchronously by default; if this is not turned on, data from a period of time may be lost on power failure, because redis synchronizes data files according to the save conditions above, so some data exists only in memory for a while. The default is no
appendonly no
  1. Specify the update log file name, the default is appendonly.aof
appendfilename appendonly.aof
  1. Specify the update log condition, there are 3 optional values:
no: Means to wait for the operating system to synchronize the data cache to the disk (fast)
always: Indicates that fsync() is manually called to write data to disk after each update operation (slow, safe)
everysec: means to synchronize once every second (compromise, default value)
appendfsync everysec
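The three appendfsync options trade durability for speed. A sketch of when each policy would call fsync after a write (illustrative only, not Redis source code):

```java
public class AppendFsync {
    enum Policy { NO, ALWAYS, EVERYSEC }

    /** Decide whether to fsync after a write, given seconds since the last fsync. */
    static boolean shouldFsync(Policy policy, double secondsSinceLastFsync) {
        switch (policy) {
            case ALWAYS:   return true;                       // fsync after every write (slow, safe)
            case EVERYSEC: return secondsSinceLastFsync >= 1; // at most once per second (compromise)
            default:       return false;                      // NO: leave flushing to the OS (fast)
        }
    }

    public static void main(String[] args) {
        System.out.println(shouldFsync(Policy.ALWAYS, 0.1));   // true
        System.out.println(shouldFsync(Policy.EVERYSEC, 0.5)); // false
        System.out.println(shouldFsync(Policy.EVERYSEC, 1.2)); // true
    }
}
```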
  1. Specify whether to enable the virtual memory mechanism; the default value is no. Briefly, the VM mechanism stores data in pages: Redis swaps less-accessed pages (cold data) out to disk, and frequently accessed pages are automatically swapped from disk back into memory
vm-enabled no
  1. Virtual memory file path, the default value is /tmp/redis.swap, cannot be shared by multiple Redis instances
vm-swap-file /tmp/redis.swap
  1. Store all data larger than vm-max-memory in virtual memory. No matter how small vm-max-memory is set, all index data (Redis index data is the keys) is kept in memory; that is, when vm-max-memory is set to 0, all values actually live on disk. The default value is 0
vm-max-memory 0
  1. The Redis swap file is divided into many pages. One object can span multiple pages, but one page cannot be shared by multiple objects, so vm-page-size should be set according to the size of the stored data. The author suggests that for many small objects, a page size of 32 or 64 bytes is best; for large objects, use a larger page; if unsure, use the default value
vm-page-size 32
  1. Set the number of pages in the swap file. Since the page table (a bitmap indicating that the page is free or used) is placed in memory, every 8 pages on the disk will consume 1 byte of memory
vm-pages 134217728
  1. Set the number of threads to access the swap file. It is best not to exceed the number of cores of the machine. If it is set to 0, then all operations on the swap file are serial, which may cause a longer delay. The default value is 4
vm-max-threads 4
  1. Set whether to combine smaller packets into one packet and send when replying to the client, the default is on
glueoutputbuf yes
  1. Use a special compact hash encoding while the number of entries and the size of the largest element stay below the given thresholds
hash-max-zipmap-entries 64
hash-max-zipmap-value 512
  1. Specify whether to activate rehashing; the default is on
activerehashing yes
  1. Specify other configuration files to include; multiple Redis instances on the same host can share a common configuration file while each instance keeps its own specific configuration file
include /path/to/local.conf