Nginx performance optimization

Performance optimization overview

Before starting performance optimization, consider the following:

1. Bottlenecks in the current system architecture
   Observe key metrics; stress test with ab (httpd-tools) or webbench
2. Understand the business model
   The types of traffic the business serves
   The layered structure of the system
3. The trade-off between performance and security
   Strong security tends to weaken performance; tuning purely for performance tends to weaken security

Stress testing tool

1. Install the ab stress testing tool

yum install httpd-tools -y

2. Learn how the stress testing tool is used

ab -n 200 -c 2 http://127.0.0.1/
-n  total number of requests
-c  number of concurrent requests
-k  enable HTTP keep-alive

3. Configure a static site on Nginx and a dynamic site on Tomcat

Prepare the static site for Nginx:

mkdir -p /soft/code
echo "<h1> Ab Load </h1>" > /soft/code/bgx.html

Prepare static website files for Tomcat

yum -y install tomcat
mkdir -p /usr/share/tomcat/webapps/ROOT/
echo "<h1> Ab Load </h1>" > /usr/share/tomcat/webapps/ROOT/bgx.html
vi /usr/local/nginx/conf/nginx.conf
    server {
        listen      80;
        server_name 192.168.1.1;
        location / {
            root /soft/code;
            try_files $uri $uri/ @java_page;
            index index.jsp index.html;
        }
        location @java_page {
            proxy_pass http://192.168.1.1:8080;
        }
    }


4. Run the stress test with the ab tool

## run the stress test
ab -n2000 -c2 http://192.168.1.1/bgx.html
ab -n2000 -c2 http://192.168.1.1:8080/bgx.html
Server Software:        nginx/1.18.0
Server Hostname:        192.168.1.1
Server Port:            80

Document Path:          /bgx.html
Document Length:        19 bytes

Concurrency Level:      2
Time taken for tests:   0.104 seconds    ## total time taken

Complete requests:      2000             ## total number of requests

Failed requests:        0                ## number of failed requests
Write errors:           0
Total transferred:      500000 bytes
HTML transferred:       38000 bytes

Requests per second:    19140.59 [#/sec] (mean)                            ## requests per second (total requests / total time)
Time per request:       0.104 [ms] (mean)                                  ## time per single request as seen by the client
Time per request:       0.052 [ms] (mean, across all concurrent requests)  ## server-side processing time per request
Transfer rate:          4673.00 [Kbytes/sec] received                      ## network transfer rate; use it to spot network bottlenecks
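The summary figures are derived from the totals; a rough sketch of the arithmetic, using the numbers from the run above (ab's own requests-per-second figure differs slightly because it uses a more precise elapsed time than the rounded 0.104 shown):

```shell
# Relationship between the ab summary fields, using the values reported above
requests=2000          # Complete requests
time_taken=0.104       # Time taken for tests (seconds)
concurrency=2          # Concurrency Level

# Requests per second = total requests / total time
awk -v r="$requests" -v t="$time_taken" 'BEGIN { printf "requests/sec ~= %.0f\n", r / t }'
# Time per request (mean) = concurrency * total time / total requests, in ms
awk -v r="$requests" -v t="$time_taken" -v c="$concurrency" 'BEGIN { printf "ms/request ~= %.3f\n", c * t * 1000 / r }'
```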

5. Move the bgx.html file out of the Nginx document root and run the stress test again; the requests will now be handled by Tomcat


Factors that affect performance

Many things affect performance, so look at the system as a whole:

1. Network
   network traffic
   whether there is packet loss, which affects HTTP requests
2. System
   hardware: disk health and disk speed
   system load, memory, system stability
3. Service
   connection optimization, request optimization
   service settings matched to the form of the business
4. Program
   access performance
   processing speed
   program execution efficiency
5. Database
Every service in the architecture is related to the others to some degree. Layer the whole architecture, find the weak point of the corresponding system or service, and then optimize it.

System performance optimization

File handles: in Linux everything is a file, and a file handle can be understood as an index. Handle usage grows as processes are invoked. The system sets a default limit on file handles, because no process can be allowed an unlimited number; the limit has to be set for each process and each service. The file-handle limit is a tuning parameter that must be adjusted. It can be set globally for the system or locally per user. The typical error when the limit is hit:

Error: too many open files

vim /etc/security/limits.conf
// for the root user
root soft nofile 65535
root hard nofile 65535
// for all users (global)
* soft nofile 25535
* hard nofile 25535
// for the Nginx process (set in nginx.conf)
worker_rlimit_nofile 45535;
// root: the user the limit applies to
// soft: warning threshold
// hard: hard limit
// nofile: the open-files configuration item
// 65535: the number of file handles
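To see which limits are actually in effect, a quick check (the /proc path in the comment assumes Nginx writes its pid to /run/nginx.pid, as in the configuration later in this document):

```shell
# Soft and hard open-file limits for the current shell session
soft_limit=$(ulimit -Sn)
hard_limit=$(ulimit -Hn)
echo "soft=$soft_limit hard=$hard_limit"

# For a running process the applied limits live in /proc; for the Nginx
# master process (assuming its pid file is /run/nginx.pid):
#   grep "open files" /proc/$(cat /run/nginx.pid)/limits
```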

Nginx performance optimization

CPU affinity reduces frequent migration of processes between cores and the performance loss that migration causes.
1. View current CPU physical status

[root@nginx ~]# lscpu |grep "CPU(s)"
CPU(s): 24 
On-line CPU(s) list: 0-23 
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22 
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23 
// 2 physical CPUs, 12 cores each, 24 cores in total
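The core count can also be read directly, without parsing lscpu; this is the figure `worker_processes auto` sizes the worker pool from:

```shell
# Number of online CPU cores, as reported by the system
cores=$(getconf _NPROCESSORS_ONLN)
echo "cores=$cores"
# nproc prints the same figure on most Linux systems
nproc
```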

2. Bind Nginx worker processes to different cores

// Start as many worker processes as there are CPU cores (the official recommendation); first binding style
#worker_processes 24;
#worker_cpu_affinity 000000000001 000000000010 000000000100 000000001000 000000010000 000000100000 000001000000 000010000000 000100000000 001000000000 010000000000 100000000000;
// second style
#worker_processes 2;
#worker_cpu_affinity 101010101010 010101010101;
// best style: let Nginx bind workers automatically
worker_processes auto;
worker_cpu_affinity auto;
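The one-hot bitmasks of the first style can be generated rather than typed by hand; a small sketch (the core count 12 matches one NUMA node above, and the helper is not part of nginx):

```shell
# Generate one worker_cpu_affinity mask per core: core i gets a mask with
# only bit i set, written most-significant bit first
n=12
masks=""
i=0
while [ "$i" -lt "$n" ]; do
    mask=""
    j=$((n - 1))
    while [ "$j" -ge 0 ]; do
        if [ "$j" -eq "$i" ]; then mask="${mask}1"; else mask="${mask}0"; fi
        j=$((j - 1))
    done
    masks="$masks $mask"
    i=$((i + 1))
done
echo "worker_cpu_affinity$masks;"
```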

3. Check which CPU each Nginx worker process is bound to

ps -eo pid,args,psr|grep [n]ginx

4. A general-purpose optimized Nginx configuration file

[root@nginx ~]# cat nginx.conf
user nginx;
worker_processes auto;
worker_cpu_affinity auto;
error_log /var/log/nginx/error.log warn;
pid /run/nginx.pid;
# raise to at least 10,000; 20,000-30,000+ recommended under heavy load
worker_rlimit_nofile 35535;
events {
    use epoll;
    # how many connections each worker process can handle, e.g. 10240 (x16 workers)
    worker_connections 10240;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    # use the utf-8 character set throughout
    charset utf-8;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    # Core module
    sendfile on;
    # recommended on for static-resource servers
    tcp_nopush on;
    # recommended on for dynamic services; requires keep-alive to be enabled
    tcp_nodelay on;
    keepalive_timeout 65;
    # Gzip module
    gzip on;
    gzip_disable "MSIE [1-6]\.";
    gzip_http_version 1.1;
    # Virtual Server
    include /etc/nginx/conf.d/*.conf;
}

Nginx FAQ

1. Priority among multiple server blocks with the same server_name
2. location match priority
3. How to use try_files
4. The difference between alias and root in Nginx

Server priority

Priority when multiple server blocks share the same server_name
1. Environment preparation

mkdir /soft/code{1..3} -p
for i in {1..3};do echo "<h1>Code $i</h1>" > /soft/code"$i"/index.html;done

2. Modify the configuration file

vi /usr/local/nginx/conf/nginx.conf
    server {
        listen      80;
        server_name testserver1 192.168.1.1;
        location / {
            root /soft/code1;
            index index.html;
        }
    }
    server {
        listen      80;
        server_name testserver2 192.168.1.1;
        location / {
            root /soft/code2;
            index index.html;
        }
    }
    server {
        listen      80;
        server_name testserver3 192.168.1.1;
        location / {
            root /soft/code3;
            index index.html;
        }
    }
nginx -t                  ## check the syntax
systemctl restart nginx   ## restart the nginx service

3. Test the effect of the visit

[root@nginx ~]# curl 192.168.1.1
<h1>Code 1</h1>

When server_name values are identical, Nginx uses the first matching server block from top to bottom.


location priority

A server has multiple locations

Priority from high to low: = (exact match) > ^~ (prefix match that stops regex checking) > ~ / ~* (case-sensitive / case-insensitive regex match) > ordinary prefix match (default)

1. Example preparation

vi /usr/local/nginx/conf/nginx.conf
    server {
        listen      80;
        server_name 192.168.1.1;
        root /soft;
        index index.html;
        location = /code1/ {
            rewrite ^(.*)$ /code1/index.html break;
        }
        location ~ /code* {
            rewrite ^(.*)$ /code3/index.html break;
        }
        location ^~ /code {
            rewrite ^(.*)$ /code2/index.html break;
        }
    }


2. Test results

[root@nginx ~]# curl http://192.168.1.1/code1
<h1>Code 2</h1>

Comment out the ^~ location and restart Nginx

systemctl restart nginx

Test effect

[root@nginx ~]# curl http://192.168.1.1/code1
<h1>Code 3</h1>

Comment out the exact match (=) and restart Nginx

systemctl restart nginx

Test effect

[root@nginx ~]# curl http://192.168.1.1/code1
<h1>Code 2</h1>

The difference between break and last

break: stop processing rewrite directives and handle the request within the current location
last: stop processing rewrite directives, then search for a matching location again using the rewritten URI
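A sketch of the two rewrite flags in configuration form (the location names here are hypothetical, chosen only for illustration):

```nginx
location /break-demo/ {
    # break: stop rewriting and serve the rewritten URI from this location
    rewrite ^/break-demo/(.*)$ /target/$1 break;
}
location /last-demo/ {
    # last: stop rewriting, then re-run location matching with /target/...
    rewrite ^/last-demo/(.*)$ /target/$1 last;
}
location /target/ {
    # with "last" the rewritten request is matched here; with "break" it is not
}
```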

Use of try_files

try_files in Nginx checks, in order, whether each file exists:

location / {
    try_files $uri $uri/ /index.php;
}

For a request to http://192.168.1.1/zps:
$uri         check whether a file named zps exists
$uri/        if the zps directory exists, resolve its index.html file and return it to the client
/index.php   if index.html does not exist either, fall back to /index.php
# 1. Check whether the URI requested by the user exists locally; if so, serve it
# 2. Append / to the request, similar to a redirect
# 3. Finally hand the request to index.php
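The three-step check above can be sketched in shell (the docroot is a throwaway temp directory, not a path from the configurations in this document):

```shell
# Simulate try_files $uri $uri/ /index.php for a request to /zps
docroot=$(mktemp -d)
uri=/zps
mkdir -p "$docroot$uri"
echo "zps index" > "$docroot$uri/index.html"

if [ -f "$docroot$uri" ]; then
    target="$docroot$uri"              # $uri: the file itself exists
elif [ -d "$docroot$uri" ]; then
    target="$docroot$uri/index.html"   # $uri/: a directory exists, serve its index
else
    target="$docroot/index.php"        # final fallback
fi
cat "$target"
```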

1. Demonstration environment preparation

echo "Try-Page" > /usr/local/nginx/html/index.html
echo "Tomcat-Page" > /soft/app/apache-tomcat-9.0.7/webapps/ROOT/index.html 
sh /soft/app/apache-tomcat-9.0.7/bin/startup.sh
netstat -lntp|grep 8080

2. Configure try_files in Nginx

vi /usr/local/nginx/conf/nginx.conf
server {
    listen 80;
    server_name 192.168.1.1;
    root /soft/code;
    index index.html;
    location / {
        try_files $uri @java_page;
    }
    location @java_page {
        proxy_pass http://127.0.0.1:8080;
    }
}
nginx -s reload    ## reload Nginx

3. Test try_files

[root@nginx ~]# curl http://192.168.1.1/index.html
Try-Page 

4. Move the /soft/code/index.html file aside

mv /soft/code/{index.html,index.html_bak}

5. The request is now answered by Tomcat

[root@nginx ~]# curl http://192.168.1.1/index.html
Tomcat-Page

The difference between alias and root

root path configuration

// the root configuration in Nginx
[root@nginx ~]# vi /usr/local/nginx/conf/nginx.conf
server {
    listen 80;
    index index.html;
    location /request_path/code/ {
        root /local_path/code/;
    }
}
// request test
[root@nginx conf.d]# curl http://192.168.1.1/request_path/code/index.html
Root
// the local file path actually requested is
/local_path/code/'request_path/code'/index.html

alias path configuration

[root@nginx ~]# mkdir /local_path/code/request_path/code/ -p
[root@nginx ~]# echo "Alias" > /local_path/code/index.html
// configuration file
[root@nginx ~]# vi /usr/local/nginx/conf/nginx.conf
server {
    listen 80;
    index index.html;
    location /request_path/code/ {
        alias /local_path/code/;
    }
}
// access test
[root@nginx ~]# curl http://192.168.1.1/request_path/code/index.html
Alias
// the local path actually accessed is
/local_path/code/'index.html'
Both root and alias specify where site files are stored, but with root the full request URI is appended to the path, while with alias the matched location prefix is replaced by the alias path. In addition, root may be a relative path, while alias must be an absolute path.
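The two mapping rules can be expressed as plain string operations (the paths are the ones from the example above):

```shell
# How Nginx maps the request URI to a filesystem path
location_prefix=/request_path/code/
uri=/request_path/code/index.html
path=/local_path/code/

# root: the full request URI is appended to the root path
echo "root  -> ${path%/}$uri"
# alias: the matched location prefix is replaced by the alias path
echo "alias -> ${path}${uri#"$location_prefix"}"
```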

Get the user's real IP

How Nginx passes on the user's real IP address:

$remote_addr only holds the IP of the directly connected peer (the last hop before the server)
The X-Forwarded-For header can easily be tampered with

To pass the real client IP along the proxy chain, add on each proxy:

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
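On the back-end server, the standard ngx_http_realip_module can restore the client address from that header; a sketch, assuming the proxy in front is the 192.168.1.1 host used throughout this document:

```nginx
# Trust X-Forwarded-For only when the request comes from our own proxy
set_real_ip_from  192.168.1.1;
real_ip_header    X-Forwarded-For;
real_ip_recursive on;
```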

Common HTTP status codes

200 OK, normal request
301 Moved Permanently (permanent redirect)
302 Found (temporary redirect)
400 Bad Request (malformed request parameters)
401 Unauthorized (authentication required)
403 Forbidden (permission denied)
404 Not Found (file not found)
413 Request Entity Too Large (uploaded file exceeds the size limit)
502 Bad Gateway (back-end service not responding)
504 Gateway Timeout (back-end service timed out)

If everyone in a building reaches the Internet through one shared public IP, with 100 devices in total, and everyone requests the same website at the same time and refreshes it 5 times, what are the PV, UV, and IP counts?
pv: page views, 500
uv: unique visitors (devices), 100
ip: unique source IPs, 1
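The counts follow from simple arithmetic:

```shell
devices=100                   # all devices behind the single public IP
refreshes=5                   # each device loads the page 5 times
pv=$((devices * refreshes))   # every page load counts as one page view
echo "pv=$pv uv=$devices ip=1"
```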

Principles of website access


1. DNS resolution
   1. Query the local hosts file
   2. Request the local DNS server
   3. Return the corresponding IP
2. HTTP connection
   1. Complete the TCP three-way handshake, then send the request: request line, request headers, request body
   2. The request is passed to the load balancer, which schedules it accordingly
   3. If the request is for a static page, it can be scheduled to the static cluster group
   4. If the request is for a dynamic page, it is scheduled to the dynamic cluster group
      1. If it is only a page request, it may be served through Opcache
      2. If the page needs to query the database, or insert content into it:
      3. check whether the operation is a read or a write; if it is a read,
      4. check whether the result is already cached; if so, return it from the cache
      5. otherwise execute the query statement and return the result
      6. Cache the result in an in-memory cache such as Redis
      7. Return the content for the client request to the WEB node
      8. After the WEB node receives it, it returns the content to the load balancer
      9. The load balancer returns the content to the client, and the TCP connection is closed with a four-way handshake
3. HTTP disconnection
According to the hierarchical structure:
CDN layer -> load-balancing layer -> WEB layer -> storage layer -> caching layer -> database layer
Note that each layer has its own caching mechanism.

Nginx optimization plan

Nginx optimization
1. gzip compression
2. expires caching for static files
3. Tune the network I/O model
4. Adjust the maximum number of connections per Nginx worker process
5. Hide the Nginx name and version number
6. Configure anti-hotlinking to prevent resource theft
7. Forbid access by raw IP address and malicious domain-name resolution; allow access by domain name only
8. Prevent DDoS and CC attacks; limit concurrent requests and connections per IP
9. Configure error pages, returning a page chosen by error code to give the user feedback
10. Restrict programmatic access to the upload directory to prevent trojans from entering the system
11. Optimize Nginx encrypted (HTTPS) transmission
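A few of the items above as concrete directives; a sketch (the zone name perip is hypothetical, and limit_conn_zone belongs in the http block while limit_conn applies per server or location):

```nginx
# 5. Hide the Nginx version number in responses and error pages
server_tokens off;

# 8. Limit concurrent connections per client IP (define the zone in the http block)
limit_conn_zone $binary_remote_addr zone=perip:10m;
# (apply the limit in a server or location block)
limit_conn perip 10;

# 9. Return a dedicated page per error code
error_page 404 /404.html;
error_page 502 503 504 /50x.html;
```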