Mall: Lua and OpenResty Implement Advertisement Caching

Homepage Analysis


The homepage portal system needs to display various kinds of advertising data. As shown in the figure, take JD.com as an example:

How do we improve the access speed of data that changes infrequently?

1. Turn the data into static pages [e.g. the product detail page]  2. Use a cache [Redis]

Lua installation

Lua is already installed in the virtual machine, so the installation steps are omitted here.

Test whether Lua was installed successfully:
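For example, a minimal check (assuming the lua binary is on the PATH) is to print the version, or evaluate a line in the interactive prompt:

lua -v
lua
> print("hello lua")   -- should echo: hello lua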

Press Ctrl+C to exit the interactive prompt.

Basic Lua syntax (just get familiar with it)

A general understanding is enough; we skip it here since it is not our focus.

Introduction to OpenResty

OpenResty (also known as ngx_openresty) is a scalable web platform based on Nginx. It was initiated by the Chinese developer Zhang Yichun and provides many high-quality third-party modules.

OpenResty is a powerful web application server. Web developers can use the Lua scripting language to orchestrate the various C and Lua modules supported by Nginx; more importantly, in terms of performance, OpenResty can quickly build an ultra-high-performance web application system capable of handling more than 10K concurrent connections.

360, UPYUN, Alibaba Cloud, Sina, Tencent, Qunar, Kugou Music, and others are all heavy users of OpenResty.

A simple way to understand OpenResty is that it packages Nginx and integrates Lua scripting. Developers only need to use the modules it provides to implement the related logic, instead of having to write Lua scripts inside Nginx themselves and then wire up the calls, as before.

The original Nginx handled roughly 50,000 concurrent connections; OpenResty greatly improves on this, scaling up to around a million.
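To illustrate the point above, here is a minimal (hypothetical) example: a location block in nginx.conf that answers a request directly from a Lua one-liner. The /hello path is just an illustration and is not part of this project:

location /hello {
    # content_by_lua_block runs the enclosed Lua code to build the response
    content_by_lua_block {
        ngx.say("hello, OpenResty")
    }
}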

Install OpenResty

Install OpenResty on Linux:

1. Add the repository by running the following commands:

yum install yum-utils
yum-config-manager --add-repo https://openresty.org/package/centos/openresty.repo

2. Perform the installation

yum install openresty

3. After the installation succeeds, OpenResty is in its default directory:

/usr/local/openresty

The installation has been successful!

Install nginx

Nginx has been installed by default, under the directory: /usr/local/openresty/nginx.

Modify /usr/local/openresty/nginx/conf/nginx.conf and set the user in the configuration file to root. The purpose is that, when Lua scripts are used later, the scripts under /root can be loaded directly.

cd /usr/local/openresty/nginx/conf
vi nginx.conf

Modify the code as follows:

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Following along step by step, I found that the virtual machine has already been configured for me, so I do not need to configure it for now.

The details are as follows:

user  root root;
worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    # Define the Nginx Lua cache module, named dis_cache, 128M in size
    lua_shared_dict dis_cache 128m;

    # Rate-limit settings
    limit_req_zone $binary_remote_addr zone=contentRateLimit:10m rate=2r/s;

    # Limit by IP address, with 10M of memory for the zone
    limit_conn_zone $binary_remote_addr zone=addr:10m;

    # Per-client-IP connection limit
    limit_conn_zone $binary_remote_addr zone=perip:10m;

    # Concurrency control for the whole server
    limit_conn_zone $server_name zone=perserver:10m;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    server {
        listen       80;
        # Domain name to listen on
        server_name  localhost;
        #192.168.211.1

        location /brand {
            limit_conn perip 3;      # connections from a single client IP to the server
            limit_conn perserver 5;  # limit on total connections to the server
            # Only 2 concurrent connections allowed per IP
            #limit_conn addr 2;
            # All /brand requests are handed to the program on port 18081 of 192.168.211.1
            proxy_pass http://192.168.211.1:18081;
        }

        # All localhost/read_content requests are handled by this block
        location /read_content {
            # Use the rate-limit zone above; burst=4 allows 4 extra requests to queue
            # if they cannot be handled at once; nodelay processes queued requests without delay
            limit_req zone=contentRateLimit burst=4 nodelay;
            # content_by_lua_file: all requests are handled by the specified Lua script (/root/lua/read_content.lua)
            content_by_lua_file /usr/local/server/lua65/read_content.lua;
        }

        # All localhost/update_content requests are handled by this block

Hey, unexpectedly there is already this much configuration, which confirms the virtual machine has been configured for me, so I do not need to configure it for now.

To exit without saving, press Esc, then type :q! and press Enter.

A quick reference for exiting vi on Linux: https://blog.csdn.net/weixin_43970743/article/details/97144222

Enter edit mode by pressing o; when you are done editing, press ESC to return to command mode, then type one of the exit commands:
:w       save the file without exiting vi
:w!      force-save without exiting vi
:w file  save the changes to file without exiting vi
:wq      save the file and exit vi
:wq!     force-save the file and exit vi
:q       exit vi without saving
:q!      force-exit vi without saving
:e!      discard all changes and edit again from the last saved version

Test visit:

Restart the CentOS virtual machine, then access Nginx to test it.

Visit address: http://192.168.2.132/

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Loading and reading of ad cache

Lua+Nginx configuration

(1) Implementation idea: put the queried data into Redis

Implementation idea:

Define a request that queries the data in the database and updates it into Redis.

a. Connect to MySQL, read the advertisement list for the given advertisement category ID, and convert it into a JSON string.

b. Connect to Redis and save the advertisement-list JSON string into Redis.

Request definition:

Request:      /update_content
Parameter:    id  -- the ID of the advertisement category
Return value: json

Request address:<http://192.168.2.132/update_content?id=1>

Create the /root/lua directory and create update_content.lua in it. Its purpose is to connect to MySQL, query the data, and store it in Redis.

The content of update_content.lua is as follows:

ngx.header.content_type="application/json;charset=utf8"
local cjson = require("cjson")
local mysql = require("resty.mysql")
local uri_args = ngx.req.get_uri_args()
local id = uri_args["id"]

-- Connect to MySQL and query the advertisement list for this category
local db = mysql:new()
db:set_timeout(1000)
local props = {
    host = "192.168.2.132",
    port = 3306,
    database = "changgou_content",
    user = "root",
    password = "123456"
}

local res = db:connect(props)
local select_sql = "select url,pic from tb_content where status ='1' and category_id="..id.." order by sort_order"
res = db:query(select_sql)
db:close()

-- Connect to Redis and store the list as a JSON string
local redis = require("resty.redis")
local red = redis:new()
red:set_timeout(2000)

local ip ="192.168.2.132"
local port = 6379
red:connect(ip,port)
red:set("content_"..id,cjson.encode(res))
red:close()

ngx.say("{flag:true}")

For details, please refer to the project documentation

The teacher explained:

For now this is just for testing; nothing is stored in the Nginx local cache yet:

Explain each configuration:

Here is my operation:

Found that the file already exists:

But the IP is wrong; it needs to be changed to 192.168.2.132 later.

Press Insert to start editing; after editing, press Esc, type :wq, and press Enter to save and exit.


I found that it has already been configured:

But content_by_lua_file /usr/local/server/lua65/read_content.lua; has to be changed, because I did not create a lua65 folder; delete it, save, and exit.

Later I found two more entries still pointing at that folder and deleted them too.

Hey, I found that the scripts are not under /usr but under the /root folder.

Remember to reload Nginx after fixing and saving:

cd ../sbin

./nginx -s reload

Test: http://192.168.2.132/update_content?id=1

Success!

The data is also stored in Redis.
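To double-check (assuming redis-cli is available on the virtual machine), the key can be read back directly; it should return the JSON string of the advertisement list:

redis-cli -h 192.168.2.132
get content_1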

The data can now be written into the cache, but we cannot read it yet. Next, let's learn to use OpenResty to read the advertisement cache.

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

(2) Implementation idea: get data from Redis

Implementation idea:

Define a request through which the user gets the list of advertisements by advertisement category ID. The data can be fetched directly from Redis by a Lua script.

Request definition:

Request:      /read_content
Parameter:    id
Return value: json

Create read_content.lua in the /root/lua directory:

-- Set the response header type
ngx.header.content_type="application/json;charset=utf8"
-- Get the request parameter id
local uri_args = ngx.req.get_uri_args();
local id = uri_args["id"];
-- Load the redis library
local redis = require("resty.redis");
-- Create a redis object
local red = redis:new()
-- Set the timeout
red:set_timeout(2000)
-- Connect
local ok, err = red:connect("192.168.211.132", 6379)
-- Get the value of the key
local rescontent=red:get("content_"..id)
-- Write it into the response
ngx.say(rescontent)
-- Close the connection
red:close()

The configuration in /usr/local/openresty/nginx/conf/nginx.conf is as follows:

My operation is as follows:

The read_content.lua file already exists

The IP address is wrong; modify it to 192.168.2.132, then save and exit.

The teacher explained the configuration:

(3) Add the OpenResty local cache

There is no problem with the above approach, but if every request goes to Redis, the pressure on Redis also becomes very high. Therefore we generally use multi-level caching to reduce the load on downstream services. Refer to the diagram for the basic idea.

First query the OpenResty local cache; if there is no data,

then query the data in Redis; if there is still no data,

then query the data in MySQL, and return the data as soon as any level has it.

Modify the read_content.lua file as sketched below:
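The course's exact file is not reproduced here; the following is a sketch of what the multi-level lookup typically looks like, assuming the lua_shared_dict dis_cache declared in nginx.conf and the same MySQL/Redis connection parameters as update_content.lua:

ngx.header.content_type="application/json;charset=utf8"
local uri_args = ngx.req.get_uri_args()
local id = uri_args["id"]

-- 1) OpenResty local cache (the shared dict declared as: lua_shared_dict dis_cache 128m;)
local cache_ngx = ngx.shared.dis_cache
local contentCache = cache_ngx:get("content_cache_"..id)

if contentCache == "" or contentCache == nil then
    -- 2) Redis
    local redis = require("resty.redis")
    local red = redis:new()
    red:set_timeout(2000)
    red:connect("192.168.2.132", 6379)
    local rescontent = red:get("content_"..id)

    if ngx.null == rescontent then
        -- 3) MySQL, then write the result back to Redis
        local cjson = require("cjson")
        local mysql = require("resty.mysql")
        local db = mysql:new()
        db:set_timeout(2000)
        local props = {
            host = "192.168.2.132",
            port = 3306,
            database = "changgou_content",
            user = "root",
            password = "123456"
        }
        db:connect(props)
        local select_sql = "select url,pic from tb_content where status ='1' and category_id="..id.." order by sort_order"
        local res = db:query(select_sql)
        local responsejson = cjson.encode(res)
        red:set("content_"..id, responsejson)
        db:close()
        ngx.say(responsejson)
    else
        -- Redis hit: warm the local cache for 10 minutes
        cache_ngx:set("content_cache_"..id, rescontent, 10*60)
        ngx.say(rescontent)
    end
    red:close()
else
    -- Local cache hit
    ngx.say(contentCache)
end

In this sketch the local-cache entry expires after 10 minutes (the 10*60 argument to cache_ngx:set), so hot categories are served from Nginx memory and only fall back to Redis or MySQL when the entry is missing or expired.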

My operation is as follows:

Test: http://192.168.2.132/read_content?id=1

Success! Even if the data in the database table is deleted, it can still be read from Redis.