An article lets you understand nginx and lua scripts (Nginx detailed explanation)


Static resource deployment
Rewrite address rewriting
Regular expressions
Reverse proxy
Load balancing
Polling, weighted polling, ip_hash, url_hash, fair
Web cache
Environment deployment
Highly available environment
User authentication module...

nginx binary executable
nginx.conf configuration file
error.log error logging
access.log access logging

1. Nginx core configuration file structure

First, let's look at nginx.conf, the most important configuration file. The content of a minimal configuration file is shown below.

worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name localhost;

        location / {
            root html;
            index index.html index.htm;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
directive_name directive_value;   # global block: directives that configure the overall running of the Nginx server

# events block: configures the network connections between the Nginx server and users;
# this part has a large impact on Nginx performance
events {
    directive_name directive_value;
}

# http block: the most important part of the Nginx configuration - proxying, caching,
# logging, third-party module configuration ...
http {
    directive_name directive_value;

    server {
        # server block: configuration related to virtual hosts
        directive_name directive_value;

        location / {
            # location block: matches the request string received by the Nginx server
            # against the value after "location" and handles the matching requests
            directive_name directive_value;
        }
    }
    ...
}

Summary
The configuration file consists of a global block, an events block, and an http block. Multiple server blocks can be configured inside the http block, and multiple location blocks can be configured inside each server block.

1. Global block

1. user command

user: configures the user and user group that the Nginx worker processes run as.
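A sketch of the syntax; www is an assumption and must be an existing system user (if the group is omitted, a group with the same name as the user is used):

user www;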

2. worker process directives

master_process: specifies whether to start the worker processes.
worker_processes: configures the number of worker processes Nginx spawns; this is the key to the concurrent processing capability of the Nginx server.
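A hedged sketch of the two directives in the global block; the value 2 is an assumption and is often set to the number of CPU cores:

master_process on;
worker_processes 2;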

3. daemon

daemon: sets whether Nginx starts as a daemon process.

4. pid

pid: configures the path of the file that stores the process ID of the current Nginx master process.
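A sketch combining the daemon and pid directives described above; the path matches the one used in the full example later in this article:

daemon on;
pid logs/nginx.pid;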

5. error_log

error_log: configures the storage path of the Nginx error log and, optionally, the minimum severity level to record.
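A sketch of the syntax; the second argument (the log level) is optional and may be debug, info, notice, warn, error, crit, alert, or emerg:

error_log logs/error.log error;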

6. include

include: introduces other configuration files, which makes the Nginx configuration more flexible.
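A sketch: the first line is part of the default configuration, and the second shows how the server blocks are pulled in from separate files in the case later in this article:

include mime.types;
include /home/www/conf.d/*.conf;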

2.events block

1.accept_mutex

accept_mutex: sets whether Nginx serializes the accepting of new network connections (connection serialization).

2.multi_accept

multi_accept: sets whether a worker process may accept multiple new connections at the same time.

3.worker_connections

worker_connections: configures the maximum number of connections for a single worker process.

4.use

use: sets which event-driven model the Nginx server uses to process network messages, such as select, poll, or epoll.
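A sketch of an events block that combines the four directives above; the values mirror the full example that follows:

events {
    accept_mutex on;
    multi_accept on;
    worker_connections 1024;
    use epoll;
}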

3. http block

1. Define MIME-Type

include mime.types imports the mapping from file extensions to MIME types, and default_type sets the MIME type used when a request does not match any of them.

2. Custom service log

log_format defines the format of the request-processing log, and access_log sets the path of the log file and the format it uses, as shown in the case below.

3.server block and location block

Let's take an example: to serve the following paths, configure the conf files as shown below.

## global block begin ##
# user and group allowed to run the Nginx worker processes
user www;
# number of worker processes spawned by Nginx
worker_processes 2;
# path where the Nginx error log is stored
error_log logs/error.log;
# path and name of the PID file recording the Nginx master process ID
pid logs/nginx.pid;
# whether the Nginx service starts as a daemon process
# daemon on;
## global block end ##

## events block begin ##
events {
    # enable Nginx network connection serialization
    accept_mutex on;
    # whether a worker process may accept multiple requests at the same time
    multi_accept on;
    # maximum number of connections per worker process
    worker_connections 1024;
    # event-driven model used by Nginx
    use epoll;
}
## events block end ##

## http block begin ##
http {
    # define MIME types
    include mime.types;
    default_type application/octet-stream;
    # allow transfers using sendfile
    sendfile on;
    # connection keep-alive timeout
    keepalive_timeout 65;
    # request log formats
    log_format server1 '===>server1 access log';
    log_format server2 '===>server2 access log';
    ## server blocks begin ##
    include /home/www/conf.d/*.conf;
    ## server blocks end ##
}
## http block end ##

server1.conf

server {
    # listening port and host name
    listen 8081;
    server_name localhost;
    # path of the request log and the format it uses
    access_log /home/www/myweb/server1/logs/access.log server1;
    # error page
    error_page 404 /404.html;
    # location handling /server1/location1 requests
    location /server1/location1 {
        root /home/www/myweb;
        index index_sr1_location1.html;
    }
    # location handling /server1/location2 requests
    location /server1/location2 {
        root /home/www/myweb;
        index index_sr1_location2.html;
    }
    # error page redirection
    location = /404.html {
        root /home/www/myweb;
        index 404.html;
    }
}

server2.conf

server {
    # listening port and host name
    listen 8082;
    server_name localhost;
    # path of the request log and the format it uses
    access_log /home/www/myweb/server2/logs/access.log server2;
    # error page; 404.html is redirected below
    error_page 404 /404.html;
    # location handling /server2/location1 requests
    location /server2/location1 {
        root /home/www/myweb;
        index index_sr2_location1.html;
    }
    # location handling /server2/location2 requests
    location /server2/location2 {
        root /home/www/myweb;
        index index_sr2_location2.html;
    }
    # error page redirection
    location = /404.html {
        root /home/www/myweb;
        index 404.html;
    }
}

2. Nginx static resource deployment

When the browser sends an HTTP request, the request travels from the client to the server, the server returns the required content, and the content is displayed on the page. The requested content falls into two types: static resources and dynamic resources.

Static resources are files that actually exist on the server and can be returned directly, such as html pages, css files, js files, images, and videos. Dynamic resources also exist on the server, but obtaining them requires some business-logic processing, and different content is shown depending on the conditions, for example report data or data specific to the currently logged-in user.

(1) Static resource configuration instructions
(2) Static resource configuration optimization
(3) Static resource compression configuration instructions
(4) Static resource cache processing
(5) Static resource access control, including cross-domain issues and anti-leeching issues

Configuration instructions for Nginx static resources

1.listen command

listen: Used to configure the listening port.


server {
    listen 8080;
    server_name 127.0.0.1;
    location / {
        root html;
        index index.html;
    }
}
server {
    listen 8080 default_server;
    server_name localhost;
    default_type text/plain;
    return 444 'This is a error request';
}

2. server_name command

server_name: used to set the virtual host service name.

Configuration method 1: exact match

server {
    listen 80;
    server_name www.itcast.cn www.itheima.cn;
    ...
}


Configuration method 2: use wildcard configuration

The wildcard "*" is supported in server_name, but note that the wildcard cannot appear in the middle of a domain name; it can only appear in the first or last segment, for example:

server {
    listen 80;
    server_name *.itcast.cn www.itheima.*;
    # matches www.itcast.cn abc.itcast.cn www.itheima.cn www.itheima.com
    ...
}

Configuration method 3: use regular expression configuration

Regular expressions can be used in server_name; the ~ prefix marks the start of the regular-expression string.

server {
    listen 80;
    server_name ~^www\.(\w+)\.com$;
    default_type text/plain;
    return 200 $1;
}

Match execution order

Since the server_name directive supports wildcards and regular expressions, in a configuration file containing multiple virtual hosts one host name may be matched successfully by the server_name of several virtual hosts. When this happens, which server block handles the current request?

server {
    listen 80;
    server_name ~^www\.\w+\.com$;
    default_type text/plain;
    return 200 'regex_success';
}
server {
    listen 80;
    server_name www.itheima.*;
    default_type text/plain;
    return 200 'wildcard_after_success';
}
server {
    listen 80;
    server_name *.itheima.com;
    default_type text/plain;
    return 200 'wildcard_before_success';
}
server {
    listen 80;
    server_name www.itheima.com;
    default_type text/plain;
    return 200 'exact_success';
}
server {
    listen 80 default_server;
    server_name _;
    default_type text/plain;
    return 444 'default_server not found server';
}

The matching order is: the exact name first, then the longest wildcard name starting with *, then the longest wildcard name ending with *, then the first matching regular expression (in the order they appear in the configuration file), and finally the default_server.

3. location command

server {
    listen 80;
    server_name localhost;
    location / {
    }
    location /abc {
    }
}

The uri value is the request string to be matched; it may or may not contain a regular expression. When the Nginx server searches for a matching location, it first matches against the locations that do not contain regular expressions and remembers the one with the highest degree of matching; it then tries the locations that contain regular expressions. If a regex location matches, it is used to process the request; otherwise, the non-regex location with the highest degree of matching found earlier is used.

No modifier: the request URI must start with the specified pattern.

server {
    listen 80;
    server_name 127.0.0.1;
    location /abc {
        default_type text/plain;
        return 200 "access success";
    }
}
All of the following requests match:
http://192.168.200.133/abc
http://192.168.200.133/abc?p1=TOM
http://192.168.200.133/abc/
http://192.168.200.133/abcdef

= : used before a uri that does not contain a regular expression; the request URI must match the specified pattern exactly.

server {
    listen 80;
    server_name 127.0.0.1;
    location =/abc {
        default_type text/plain;
        return 200 "access success";
    }
}
These match:
http://192.168.200.133/abc
http://192.168.200.133/abc?p1=TOM
These do not match:
http://192.168.200.133/abc/
http://192.168.200.133/abcdef

~ and ~*

~ : indicates that the uri contains a regular expression and matching is case-sensitive.
~* : indicates that the uri contains a regular expression and matching is case-insensitive.

server {
    listen 80;
    server_name 127.0.0.1;
    location ~^/abc\w$ {
        default_type text/plain;
        return 200 "access success";
    }
}
server {
    listen 80;
    server_name 127.0.0.1;
    location ~*^/abc\w$ {
        default_type text/plain;
        return 200 "access success";
    }
}

^~

^~ : used before a uri that does not contain a regular expression. It works like the no-modifier form; the only difference is that if the pattern matches, Nginx stops searching other patterns.

server {
    listen 80;
    server_name 127.0.0.1;
    location ^~/abc {
        default_type text/plain;
        return 200 "access success";
    }
}

Set the directory root/alias of the requested resource

For example:
(1) Create an images directory under /usr/local/nginx/html and put a picture mv.png in that directory.

location /images {
    root /usr/local/nginx/html;
}

(2) Access http://192.168.200.133/images/mv.png

With root, the result is root path + location path:
/usr/local/nginx/html/images/mv.png
With alias, the alias path replaces the location path:

location /images {
    alias /usr/local/nginx/html/images;
}

(3) If the location path ends with /, the alias must also end with /; root has no such requirement.

location /images/ {
    alias /usr/local/nginx/html/images/;
}

Summary:

With root, the result is root path + location path.
With alias, the alias path replaces the location path.
alias defines a directory alias, while root means the top-level directory.
If the location path ends with /, the alias must also end with /; root has no such requirement.

4. index command

index can be followed by multiple values. If no specific resource is specified in the request, they are searched in order until the first one is found.

location / {
    root /usr/local/nginx/html;
    index index.html index.htm;
}
When accessing this location via http://ip:port/, if nothing is appended to the address, index.html and index.htm are tried in order and the first one found is returned.

5. error_page instruction

error_page: sets the error pages of the website.

Syntax: error_page code ... [=[response]] uri;
Context: http, server, location, ...

server {
    error_page 404 http://www.itcast.cn;
}

server {
    error_page 404 /50x.html;
    error_page 500 502 503 504 /50x.html;
    location =/50x.html {
        root html;
    }
}
server {
    error_page 404 @jump_to_error;
    location @jump_to_error {
        default_type text/plain;
        return 404 'Not Found Page...';
    }
}

The optional =[response] is used to change the response code to another one:

server {
    error_page 404 =200 /50x.html;
    location =/50x.html {
        root html;
    }
}
This way, when a 404 occurs because the resource cannot be found, the status code finally seen in the browser is 200. Note when writing error_page: a space is required after 404, and no space may be added before 200 (i.e. write "404 =200").

Cache processing of static resources

A cache originally refers to a high-speed memory whose access speed is faster than ordinary random access memory (RAM); it usually does not use DRAM technology like main memory but the more expensive and faster SRAM technology. Caching is one of the important factors that lets all modern computer systems achieve high performance.

What is web caching

A web cache is a copy of a web resource (such as an html page, image, js file, or data) that sits between the web server and the client (browser). The cache keeps a copy of the output according to the incoming requests; when the next request arrives, if it is for the same URL, the cache decides, according to its caching mechanism, whether to answer the request directly with the copy or to send the request to the origin server again. The most common case is that the browser caches the pages of websites you have visited: when the same URL is visited again and the page has not been updated, the page is not downloaded again but served from the local cache. Only when the website explicitly indicates that the resource has been updated will the browser download the page again.

Browser cache

To save network resources and speed up browsing, the browser stores recently requested documents on the user's disk; when the visitor requests the same page again, the browser can display the document from the local disk, which speeds up page viewing.

The lowest-cost cache implementation
Reduces network bandwidth consumption
Reduces server pressure
Reduces network delay and speeds up page loading

Execution process of browser caching

(1) The user sends a request to the server through the browser for the first time; the client has no corresponding cache, so it sends a request to fetch the data.
(2) After receiving the request, the server fetches the data and, if server-side caching allows it, returns a 200 status code with the resource and the caching information in the response headers.
(3) When the user accesses the same resource again, the client looks in the browser's cache directory for a corresponding cache file.
(4) If no cache file is found, go to step (2).
(5) If there is a cache file, the next step is to judge whether it has expired; the expiration criterion is Expires.
(6) If it has not expired, the data is returned directly from the local cache and displayed.
(7) If Expires has passed, it must be determined whether the cached file has changed.
(8) There are two criteria for this judgement: ETag (Entity Tag) and Last-Modified.
(9) If it has not changed, the server returns 304 and the data is taken directly from the cache file.
(10) If it has changed, the data is fetched from the server again and cached according to the cache negotiation (the server's setting of whether the data should be cached).

expires command

expires: controls the caching of pages; it can be used to control the "Expires" and "Cache-Control" headers in the HTTP response.
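A hedged sketch (the 30-day value and the file types are assumptions): a positive time produces an Expires header of now + time and a corresponding Cache-Control max-age value; a negative time produces Cache-Control: no-cache; the special values epoch, max, and off are also accepted.

location ~ .*\.(png|jpg|gif)$ {
    root /usr/local/nginx/html;
    expires 30d;
}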

add_header directive

add_header: appends a header field and value to the response; here it is used to send the Cache-Control header, whose possible values are listed below.

Cache-control: must-revalidate
Cache-control: no-cache
Cache-control: no-store
Cache-control: no-transform
Cache-control: public
Cache-control: private
Cache-control: proxy-revalidate
Cache-Control: max-age=<seconds>
Cache-control: s-maxage=<seconds>

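A hedged sketch of adding a Cache-Control value with add_header; the max-age value and the location pattern are assumptions:

location ~ .*\.(js|css)$ {
    root /usr/local/nginx/html;
    add_header Cache-Control "max-age=3600";
}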

Nginx cross-domain problem solving

Same-origin policy
The browser's same-origin policy is a convention; it is the core and most basic security feature of the browser. Without the same-origin policy, the browser's normal functioning could be affected.

Same origin: the same protocol, domain name (or IP), and port.

http://192.168.200.131/user/1
https://192.168.200.131/user/1
not the same origin (different protocol)
http://192.168.200.131/user/1
http://192.168.200.132/user/1
not the same origin (different IP)
http://192.168.200.131/user/1
http://192.168.200.131:8080/user/1
not the same origin (different port)
http://www.nginx.com/user/1
http://www.nginx.org/user/1
not the same origin (different domain name)
http://www.nginx.org:80/user/1
http://www.nginx.org/user/1
the same origin (http defaults to port 80)

Cross-domain issues

Suppose there are two servers, A and B. If a page from server A sends an asynchronous request to server B to fetch data, and server A and server B do not satisfy the same-origin policy, a cross-domain problem occurs.

solution
To solve the cross-domain problem, two headers need to be added: Access-Control-Allow-Origin and Access-Control-Allow-Methods. Access-Control-Allow-Origin specifies the origins that are allowed cross-origin access; multiple origins can be configured (separated by commas), or * can be used to allow all origins. Access-Control-Allow-Methods specifies the request methods that are allowed and can likewise list multiple values separated by commas.

location /getUser {
    add_header Access-Control-Allow-Origin *;
    add_header Access-Control-Allow-Methods GET,POST,PUT,DELETE;
    default_type application/json;
    return 200 '{"id":1,"name":"TOM","age":18}';
}

Static resource anti-leech

Resource hotlinking means that the content is not on your own server but, by technical means, bypassing other people's restrictions, other people's content is put on your own page and shown to your users, in order to steal the space and traffic of large websites. In short, it is using other people's resources to build your own website.


location ~* \.(png|jpg|gif) {
    valid_referers none blocked www.baidu.com 192.168.200.222 *.example.com example.* www.example.org ~\.google\.;
    if ($invalid_referer) {
        return 403;
    }
    root /usr/local/nginx/html;
}

Anti-leeching for directories

location /images {
    valid_referers none blocked www.baidu.com 192.168.200.222 *.example.com example.* www.example.org ~\.google\.;
    if ($invalid_referer) {
        return 403;
    }
    root /usr/local/nginx/html;
}


Rewrite function configuration


Rewrite rules

set command
This directive is used to set a new variable. Syntax: set $variable value; it can be used in the server, location, and if blocks.
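A hedged sketch of set; the variable name and value are assumptions:

location /server {
    set $name TOM;
    default_type text/plain;
    return 200 "name=$name";
}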

if instruction.

Syntax: if (condition) { ... }. The condition can be a variable name (false if empty or "0"), a comparison with = or !=, a regular-expression match with ~, ~*, !~ or !~*, or a file/directory test with -f, -d, -e or -x (and their negations), for example:

if (-f $request_filename) {
    # the requested file exists
}
if (!-f $request_filename) {
    # the requested file does not exist
}

break command

This directive interrupts the other Nginx directives in the same scope: the directives before it still take effect, while the directives after it in the same scope no longer do.

location / {
    if ($param) {
        set $id $1;
        break;
        limit_rate 10k;
    }
}

return command
This directive completes the processing of the request and returns the response status code directly to the client. All Nginx configuration after return is not executed.


rewrite command

This directive changes the URI by using regular expressions. Several rewrite directives can exist at the same time; they are matched and processed against the URL in order.


For example, when a request matches the rewrite/url path (plus any characters after it), it is redirected to Baidu; when a request matches the test path, it is rewritten to /$1, i.e. the "test" captured in parentheses. A hedged sketch of such a configuration follows.
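This sketch only illustrates the two behaviours described above; the exact paths and regular expressions of the original example are assumptions:

location /rewrite {
    rewrite ^/rewrite/url\w*$ https://www.baidu.com permanent;
    rewrite ^/rewrite/(test)\w*$ /$1;
}
location /test {
    default_type text/plain;
    return 200 "test success";
}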

rewrite_log command

rewrite_log: sets whether the results of rewrite processing are written to the error log (at the notice level).

The case of Rewrite


Domain Mirroring


independent domain name


The directory automatically adds "/"

server {
    listen 80;
    server_name localhost;
    location /hm {
        root html;
        index index.html;
    }
}



server {
    listen 80;
    server_name localhost;
    server_name_in_redirect on;
    location /hm {
        if (-d $request_filename) {
            rewrite ^/(.*)([^/])$ http://$host/$1$2/ permanent;
        }
    }
}

merge directory


Anti-leech

File Anti-leech

server {
    listen 80;
    server_name www.web.com;
    location ~* ^.+\.(gif|jpg|png|swf|flv|rar|zip)$ {
        valid_referers none blocked server_names *.web.com;
        if ($invalid_referer) {
            rewrite ^/ http://www.web.com/images/forbidden.png;
        }
    }
}

Directory Anti-leech

server {
    listen 80;
    server_name www.web.com;
    location /file/ {
        root /server/file/;
        valid_referers none blocked server_names *.web.com;
        if ($invalid_referer) {
            rewrite ^/ http://www.web.com/images/forbidden.png;
        }
    }
}

Nginx reverse proxy

Nginx forward proxy case


http {
    log_format main 'client send request=>clientIp=$remote_addr serverIp=>$host';
    server {
        listen 80;
        server_name localhost;
        access_log logs/access.log main;
        location / {
            root html;
            index index.html index.htm;
        }
    }
}


server {
    listen 82;
    resolver 8.8.8.8;
    location / {
        proxy_pass http://$host$request_uri;
    }
}


Configuration syntax for Nginx reverse proxy

The directives of the Nginx reverse proxy module are provided by the ngx_http_proxy_module module, which is built in when Nginx is installed. Next we introduce the commonly used reverse-proxy directives one by one:

  • proxy_pass
  • proxy_set_header
  • proxy_redirect

proxy_pass

URL: the address of the proxied server, including the transport protocol (http:// or https://), the host name or IP address plus port, the URI, and other elements.

If proxy_pass is written without a trailing slash, the location path (e.g. /server) is kept and appended to the proxied request; if it is written with a trailing slash, the location prefix is replaced, so the request goes straight to the proxied root and, for example, the index.html page is accessed directly. A sketch of both forms follows.

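A hedged sketch of the two forms; the upstream address 192.168.200.146 follows the addresses used elsewhere in this article:

server {
    listen 8080;
    server_name localhost;
    location /server {
        # without a trailing slash: the request /server/index.html is proxied to
        # http://192.168.200.146:8080/server/index.html
        proxy_pass http://192.168.200.146:8080;
        # with a trailing slash: the request /server/index.html would be proxied to
        # http://192.168.200.146:8080/index.html
        # proxy_pass http://192.168.200.146:8080/;
    }
}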

proxy_set_header

This directive can change the header fields of the client request received by the Nginx server and then send the new request headers to the proxied server.

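A hedged sketch: the proxy server (e.g. 192.168.200.133) adds a header before forwarding to the proxied server (e.g. 192.168.200.146); the header name username is an assumption used only for illustration:

# proxy server (e.g. 192.168.200.133)
server {
    listen 8080;
    location /server {
        proxy_pass http://192.168.200.146:8080/;
        proxy_set_header username TOM;
    }
}
# proxied server (e.g. 192.168.200.146) echoes the received header back
server {
    listen 8080;
    default_type text/plain;
    return 200 $http_username;
}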

proxy_redirect

This directive resets the values of the "Location" and "Refresh" headers that the proxied server (e.g. 192.168.200.146) sends back, before the proxy server (e.g. 192.168.200.133) returns them to the client. The default form is:

proxy_redirect default;

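A hedged sketch of an explicit replacement; the addresses follow the two servers mentioned above:

server {
    listen 8081;
    server_name localhost;
    location / {
        proxy_pass http://192.168.200.146;
        proxy_redirect http://192.168.200.146 http://192.168.200.133;
    }
}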

Nginx Security Controls


How to Encrypt Traffic Using SSL

Put in familiar terms, this means turning our usual http requests into https requests. The difference between the two, simply put, is that both are the HTTP protocol, but https is http wrapped in an SSL shell.

HTTPS is a transport protocol for secure communication over a computer network. It communicates via HTTP and uses SSL/TLS to establish a secure channel and encrypt the packets, ensuring data security.

SSL and TLS are security protocols that provide security and data integrity for network communication; TLS/SSL encrypts the network connection at the transport and application layers.

  • SSL (Secure Sockets Layer) secure socket layer
  • TLS (Transport Layer Security) transport layer security

If Nginx is to use SSL, one condition must be met: the --with-http_ssl_module module must be added, and this module requires OpenSSL support during compilation, which we prepared earlier.

> Back up the existing /usr/local/nginx/sbin/nginx binary
> Copy the previous nginx configure arguments
> In the nginx source directory, configure with the corresponding module: ./configure --with-http_ssl_module
> Compile with make
> Move the nginx binary under objs to /usr/local/nginx/sbin
> Run make upgrade in the source directory to upgrade; this allows adding new modules without stopping the service


Nginx SSL related instructions

SSL

ssl: enables HTTPS for the virtual host; in current versions it is written as the ssl parameter of the listen directive, e.g. listen 443 ssl.

ssl_certificate

ssl_certificate: specifies a PEM-format certificate for the current virtual host.

ssl_certificate_key

ssl_certificate_key: specifies the PEM-format private key file for the certificate.

ssl_session_cache

ssl_session_cache: configures the cache that stores SSL session parameters so that sessions can be reused.

ssl_session_timeout

ssl_session_timeout: sets how long a client may reuse the session parameters stored in the cache.

ssl_ciphers

ssl_ciphers: specifies the enabled cipher suites.

ssl_prefer_server_ciphers

ssl_prefer_server_ciphers: specifies whether the server's ciphers should be preferred over the client's ciphers.

Generate a certificate

A certificate can be purchased from a certificate authority, or a self-signed one can be generated with openssl for testing; the server.cert and server.key files used in the example below are the resulting certificate and private key.

Start the SSL instance

server {
    listen 443 ssl;
    server_name localhost;
    ssl_certificate server.cert;
    ssl_certificate_key server.key;
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 5m;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
    location / {
        root html;
        index index.html index.htm;
    }
}


Reverse proxy system tuning


Similarities:
Both are used to improve IO throughput and the performance of the Nginx proxy.
Differences:
Buffering mainly solves the performance problem caused by inconsistent data-transfer speeds between different devices; the data in a buffer can be discarded once the operation is finished.
Caching is mainly a backup: the proxied server's data is cached on the proxy server, so when a client requests the same data again it only needs to be fetched from the proxy server, which is more efficient. Cached data can be reused and is only removed when certain conditions are met.

(1) Proxy Buffer related directives

> proxy_buffering: enables or disables the proxy server's buffering of responses.


> proxy_buffers: sets the number and size of the buffers used, per connection, for reading the response from the proxied server.

number: the number of buffers
size: the size of each buffer; the total buffer size is number * size


> proxy_buffer_size: sets the size of the buffer used for the first part of the response received from the proxied server. Keeping it equal to the size in proxy_buffers is fine; it can also be smaller.


> proxy_busy_buffers_size: limits the total size of buffers that can be in the BUSY state (being sent to the client) at the same time.


> proxy_temp_path: when the buffers are full but the response has not yet been fully accepted by the Nginx server, the response data is temporarily stored in files on disk; this directive sets the file path.


> proxy_temp_file_write_size: sets the size of the data written at a time to the temporary buffer files on disk.


proxy_buffering on;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;

load balancing


The principle and processing flow of load balancing

System scaling can be divided into vertical scaling and horizontal scaling.
Vertical scaling improves the processing capacity of a server from the single-machine point of view, by increasing the hardware processing capability of the system.
Horizontal scaling meets the processing needs of large-scale website services by adding machines.

The role of load balancing

1. Relieves the high-concurrency pressure on servers and improves application processing performance.
2. Provides failover and achieves high availability.
3. Improves the scalability of the website by adding or removing servers.
4. Filtering on the load balancer can improve the security of the system.

Common processing methods for load balancing

Manually selected by the user


DNS polling (round robin)

The Domain Name System (DNS) is a distributed network directory service mainly used to translate between domain names and IP addresses.
Most domain registrars support adding multiple A records for the same host name; this is DNS polling. The DNS server distributes resolution requests across the different IPs in the order of the A records, which achieves simple load balancing. DNS polling costs very little and is often used for unimportant servers.


Layer 4/7 load balancing

Before introducing layer-4/layer-7 load balancing, let's first understand a concept: OSI (Open System Interconnection), the open systems interconnection model, a network architecture defined by the International Organization for Standardization (ISO) that is not tied to specific hardware, operating systems, or vendors. The model divides the work of network communication into seven layers.


Nginx layer-7 load balancing

Nginx implements layer-7 load balancing with the proxy_pass proxy module, which the default Nginx installation supports, so nothing extra needs to be done. Nginx's load balancing builds on its reverse proxy: user requests are distributed, according to the specified algorithm, to a group of servers defined in an upstream virtual server pool.

Instructions for Nginx seven-layer load balancing

upstream: defines a group of proxied servers (the server pool); it is written inside the http block, and the group name is referenced by proxy_pass.
server: used inside an upstream block to specify the address of a proxied server together with its parameters (weight, state, and so on).

server settings

server {
    listen 9001;
    server_name localhost;
    default_type text/html;
    location / {
        return 200 '<h1>192.168.200.146:9001</h1>';
    }
}
server {
    listen 9002;
    server_name localhost;
    default_type text/html;
    location / {
        return 200 '<h1>192.168.200.146:9002</h1>';
    }
}
server {
    listen 9003;
    server_name localhost;
    default_type text/html;
    location / {
        return 200 '<h1>192.168.200.146:9003</h1>';
    }
}

Load Balancer Settings

upstream backend {
    server 192.168.200.146:9001;
    server 192.168.200.146:9002;
    server 192.168.200.146:9003;
}
server {
    listen 8083;
    server_name localhost;
    location / {
        proxy_pass http://backend;
    }
}

load balancing status

The states that can be set on a server in the upstream group are: down, backup, max_fails, fail_timeout, and max_conns.

down

down: marks the server as permanently unavailable; the proxied server will not participate in load balancing.

upstream backend {
    server 192.168.200.146:9001 down;
    server 192.168.200.146:9002;
    server 192.168.200.146:9003;
}
server {
    listen 8083;
    server_name localhost;
    location / {
        proxy_pass http://backend;
    }
}
}

backup

backup: marks the server as a backup server; it receives requests only when the primary servers are unavailable.

upstream backend {
    server 192.168.200.146:9001 down;
    server 192.168.200.146:9002 backup;
    server 192.168.200.146:9003;
}
server {
    listen 8083;
    server_name localhost;
    location / {
        proxy_pass http://backend;
    }
}

max_fails and fail_timeout

max_fails: sets the number of failed requests to the proxied server; after that many failures within fail_timeout, the server is considered unavailable.
fail_timeout: sets both the period during which the failures counted by max_fails must occur and the time for which the server is then considered unavailable.

max_conns

max_conns: limits the maximum number of simultaneous active connections to the proxied server; the default value 0 means no limit.

upstream backend {
    server 192.168.200.133:9001 down;
    server 192.168.200.133:9002 backup;
    server 192.168.200.133:9003 max_fails=3 fail_timeout=15;
}
server {
    listen 8083;
    server_name localhost;
    location / {
        proxy_pass http://backend;
    }
}

load balancing strategy

The load-balancing strategies covered below are: polling (round robin, the default), weight, ip_hash, least_conn, url_hash, and fair.

polling

Polling (round robin) is the default strategy of the upstream module: each request is distributed to a different backend server in order. It requires no extra configuration.

weight [weighted round robin]

weight: sets the weight of a server; the default is 1. The higher the weight, the more requests the server receives. It is used when the hardware configurations of the backend servers differ.
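A hedged sketch of weighted round robin; the weights are assumptions:

upstream backend {
    server 192.168.200.146:9001 weight=10;
    server 192.168.200.146:9002 weight=5;
    server 192.168.200.146:9003 weight=3;
}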

ip_hash

When load-balancing several dynamic application servers at the backend, the ip_hash directive uses a hash algorithm to map the requests of a client IP to the same backend server. In this way, when a user from a given IP has logged in on backend web server A, accessing other URLs of the site will also reach backend web server A.

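A hedged sketch; the addresses mirror the earlier examples:

upstream backend {
    ip_hash;
    server 192.168.200.146:9001;
    server 192.168.200.146:9002;
    server 192.168.200.146:9003;
}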

least_conn

least_conn: forwards the request to the backend server with the fewest active connections; it is suitable when the processing times of requests vary widely.

upstream backend {
    least_conn;
    server 192.168.200.146:9001;
    server 192.168.200.146:9002;
    server 192.168.200.146:9003;
}
server {
    listen 8083;
    server_name localhost;
    location / {
        proxy_pass http://backend;
    }
}


url_hash

Requests are distributed according to a hash of the accessed URL, so each URL is directed to the same backend server. It should be used together with caching: when multiple requests for the same resource can arrive at different servers, the resource may be downloaded several times unnecessarily and the cache hit rate drops, wasting resources and time. With url_hash, the same URL (that is, the same resource request) reaches the same server; once the resource is cached, subsequent requests can be served from the cache.

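A hedged sketch: one way to get url_hash behaviour is the built-in hash directive keyed on $request_uri (the addresses mirror the earlier examples):

upstream backend {
    hash $request_uri;
    server 192.168.200.146:9001;
    server 192.168.200.146:9002;
    server 192.168.200.146:9003;
}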

fair

fair does not use the round-robin algorithm of the built-in load balancing; instead it balances load intelligently according to page size and loading time. The fair strategy comes from a third-party module and is used as follows.

upstream backend {
    fair;
    server 192.168.200.146:9001;
    server 192.168.200.146:9002;
    server 192.168.200.146:9003;
}
server {
    listen 8083;
    server_name localhost;
    location / {
        proxy_pass http://backend;
    }
}

Using it directly, however, will report an error, because fair is load balancing implemented by a third-party module; the nginx-upstream-fair module has to be added. How to add the corresponding module:


Extended case

upstream backend {
    server 192.168.200.146:9001;
    server 192.168.200.146:9002;
    server 192.168.200.146:9003;
}
server {
    listen 80;
    server_name localhost;
    location /file/ {
        rewrite ^(/file/.*) /server/$1 last;
    }
    location / {
        proxy_pass http://backend;
    }
}

Example of Layer 4 Load Balancing

Layer-4 load balancing uses the stream block provided by the ngx_stream_core_module and ngx_stream_proxy_module modules; Nginx must be compiled with the --with-stream option for the stream block to be available. The stream block is written at the same level as the http block.

stream {
    upstream redisbackend {
        server 192.168.200.146:6379;
        server 192.168.200.146:6378;
    }
    upstream tomcatbackend {
        server 192.168.200.146:8080;
    }
    server {
        listen 81;
        proxy_pass redisbackend;
    }
    server {
        listen 82;
        proxy_pass tomcatbackend;
    }
}

Nginx cache integration

A cache is a buffer for data exchange. When users want to obtain data, they first query the cache; if the data is there it is returned to the user directly. If it is not, a request is sent to the server to fetch the data, the data is returned to the user and put into the cache at the same time, and the next time the user gets it directly from the cache.

Nginx web cache service


Related instructions for Nginx cache settings

Nginx's web cache service is implemented mainly with the directives of the ngx_http_proxy_module module. Next we introduce the commonly used directives.

proxy_cache_path

proxy_cache_path: sets the path where cached data is stored, together with parameters such as levels (the directory hierarchy), inactive, and max_size.

keys_zone: sets the name and size of the shared-memory zone for this cache, for example:
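A hedged sketch using the parameters of the case below:

proxy_cache_path /usr/local/proxy_cache levels=2:1 keys_zone=itcast:200m inactive=1d max_size=20g;
# levels=2:1    directory hierarchy under the cache path
# keys_zone     name and size of the shared-memory zone holding the cache keys
# inactive=1d   entries not accessed for one day are removed
# max_size=20g  upper limit of the cache on disk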

proxy_cache

proxy_cache: enables proxy caching; its value must be the name of a zone defined by keys_zone in proxy_cache_path. The default is off.

proxy_cache_key

proxy_cache_key: defines the key used for caching web content.

proxy_cache_valid

proxy_cache_valid: sets, per HTTP status code, how long the corresponding responses stay valid in the cache.

proxy_cache_min_uses

proxy_cache_min_uses: sets how many times a resource must be requested before it is cached.

proxy_cache_methods

proxy_cache_methods: sets which HTTP methods have their responses cached; GET and HEAD are cached by default.

Nginx cache setting case


http {
    proxy_cache_path /usr/local/proxy_cache levels=2:1 keys_zone=itcast:200m inactive=1d max_size=20g;
    upstream backend {
        server 192.168.200.146:8080;
    }
    server {
        listen 8080;
        server_name localhost;
        location / {
            proxy_cache itcast;
            proxy_cache_key itheima;
            proxy_cache_min_uses 5;
            proxy_cache_valid 200 5d;
            proxy_cache_valid 404 30s;
            proxy_cache_valid any 1m;
            add_header nginx-cache "$upstream_cache_status";
            proxy_pass http://backend/js/;
        }
    }
}

Clearing of Nginx cache

Method 1: Delete the corresponding cache directory

rm -rf /usr/local/proxy_cache/......

Method 2: use the third-party extension module ngx_cache_purge. After Nginx is recompiled with this module, the proxy_cache_purge directive can be used to delete entries from the cache, for example:

server {
    location ~/purge(/.*) {
        proxy_cache_purge itcast itheima;
    }
}


Nginx sets resources not to cache

We have now set up Nginx as a web cache server. But we have to consider one problem: not all data is suitable for caching, for example data that changes frequently. If such data is cached, users can easily end up seeing data that is not the real data on the server. Therefore, we need to filter these resources during caching and not cache them.

proxy_no_cache

This directive defines the conditions under which the response data is not saved to the cache: if at least one of the given string parameters is non-empty and not equal to "0", the response is not cached.

proxy_no_cache $cookie_nocache $arg_nocache $arg_comment;

proxy_cache_bypass

This directive defines the conditions under which the response is not taken from the cache: if at least one of the given string parameters is non-empty and not equal to "0", the request is passed to the proxied server instead of being served from the cache.

log_format params '$cookie_nocache | $arg_nocache | $arg_comment';
server {
    listen 8081;
    server_name localhost;
    location / {
        access_log logs/access_params.log params;
        add_header Set-Cookie 'nocache=999';
        root html;
        index index.html;
    }
}
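A hedged sketch of using the two directives together with the itcast cache zone and the backend upstream defined earlier:

location / {
    proxy_cache itcast;
    proxy_no_cache $cookie_nocache $arg_nocache $arg_comment;
    proxy_cache_bypass $cookie_nocache $arg_nocache $arg_comment;
    proxy_pass http://backend;
}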

Nginx implements server-side cluster construction


Environment preparation (Tomcat)


Environment preparation (Nginx)


Steps to achieve dynamic and static separation


Configure Nginx's access to static resources and dynamic resources

upstream webservice {
    server 192.168.200.146:8080;
}
server {
    listen 80;
    server_name localhost;
    # dynamic resources
    location /demo {
        proxy_pass http://webservice;
    }
    # static resources
    location ~ .*\.(png|jpg|gif|js) {
        root html/web;
        gzip on;
    }
    location / {
        root html/web;
        index index.html index.htm;
    }
}


Nginx implements Tomcat cluster construction

In the previous deployment with Nginx and Tomcat we used one Nginx server and one Tomcat server; to build a Tomcat cluster, several Tomcat instances are placed behind Nginx, as sketched below.

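A hedged sketch of the cluster configuration; the three Tomcat ports 8080/8180/8280 are assumptions:

upstream webservice {
    server 192.168.200.146:8080;
    server 192.168.200.146:8180;
    server 192.168.200.146:8280;
}
server {
    listen 80;
    server_name localhost;
    location /demo {
        proxy_pass http://webservice;
    }
}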

Nginx High Availability Solution


Keepalived

Keepalived is routing software written in C. It provides load balancing based on LVS and high availability based on the VRRP protocol: the servers share a virtual IP (VIP), the MASTER holds the VIP, and when the MASTER fails a BACKUP takes it over. This is how the two Nginx servers below are made highly available.

Introduction to Keepalived configuration file

The main configuration file is /etc/keepalived/keepalived.conf; it consists of a global_defs part and a VRRP part, described below.

The global part:

global_defs {
    # notification emails: when keepalived performs a failover, emails are sent to these addresses
    notification_email {
        tom@itcast.cn
        jerry@itcast.cn
    }
    # sender address for the notification emails
    notification_email_from zhaomin@itcast.cn
    # address of the smtp server
    smtp_server 192.168.200.1
    # smtp connection timeout
    smtp_connect_timeout 30
    # an identifier for the server running keepalived; can be used as the subject of the emails
    router_id LVS_DEVEL
    # by default the check is not skipped. Checking all addresses in received VRRP advertisements can be
    # time-consuming; this option skips the check if the advertisement comes from the same master router
    # as the previously received one
    vrrp_skip_check_adv_addr
    # strictly follow the VRRP protocol
    vrrp_strict
    # delay between two gratuitous ARP messages sent on an interface, with millisecond precision; default 0
    vrrp_garp_interval 0
    # delay between groups of NA messages on an interface; default 0
    vrrp_gna_interval 0
}

The VRRP part can contain the following four sub-modules:
1. vrrp_script
2. vrrp_sync_group
3. garp_group
4. vrrp_instance
We will use the first and the fourth.

# settings of a keepalived instance; VI_1 is the name of the VRRP instance
vrrp_instance VI_1 {
    state MASTER            # two possible values: MASTER and BACKUP
    interface ens33         # interface the VRRP instance is bound to, used to send VRRP packets (the NIC of this server)
    virtual_router_id 51    # VRRP instance ID, in the range 0-255
    priority 100            # priority; the server with the higher priority becomes MASTER
    advert_int 1            # interval between VRRP advertisements, in seconds
    authentication {        # authentication between vrrp peers
        auth_type PASS      # authentication method; PASS is simple password authentication (recommended)
        auth_pass 1111      # password used for authentication, at most 8 characters
    }
    virtual_ipaddress {     # virtual IP addresses for users to access; multiple can be set, one per line
        192.168.200.222
    }
}

server 1

global_defs {
    notification_email {
        tom@itcast.cn
        jerry@itcast.cn
    }
    notification_email_from zhaomin@itcast.cn
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id keepalived1
    vrrp_skip_check_adv_addr
    vrrp_strict
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.200.222
    }
}

server 2

! Configuration File for keepalived
global_defs {
    notification_email {
        tom@itcast.cn
        jerry@itcast.cn
    }
    notification_email_from zhaomin@itcast.cn
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id keepalived2
    vrrp_skip_check_adv_addr
    vrrp_strict
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.200.222
    }
}


Building a download site with Nginx

Nginx uses the ngx_http_autoindex_module module to generate directory listings: autoindex enables or disables the listing, autoindex_exact_size shows exact file sizes, autoindex_format sets the output format, and autoindex_localtime shows file times in local time.

location /download {
    root /usr/local;
    autoindex on;
    autoindex_exact_size on;
    autoindex_format html;
    autoindex_localtime on;
}


Nginx user authentication module

Request authentication uses the ngx_http_auth_basic_module module: auth_basic enables HTTP basic authentication and sets the prompt message, and auth_basic_user_file specifies the password file (which can be generated with the htpasswd tool).

location /download {
    root /usr/local;
    autoindex on;
    autoindex_exact_size on;
    autoindex_format html;
    autoindex_localtime on;
    auth_basic 'please input your auth';
    auth_basic_user_file htpasswd;
}


Nginx extension modules

Nginx is extensible and can be used to handle various usage scenarios. In this section, we will explore the use
of Lua to extend the functionality of Nginx.

Lua

Lua is a lightweight, embeddable scripting language. Combined with the lua-nginx-module (ngx_lua, usually installed via OpenResty), Lua code can be embedded directly in the Nginx configuration, for example through init_by_lua_block and content_by_lua_block, which is how the examples in the following sections extend Nginx.

ngx_lua operates Redis


lua-resty-redis provides a detailed API for accessing Redis, covering creating objects, connecting, operations, and data handling. These APIs correspond almost one to one with the Redis operations.
(1) redis = require "resty.redis" imports the module.
(2) new
Syntax: redis, err = redis:new() creates a Redis object.
(3) connect
Syntax: ok, err = redis:connect(host, port[, options_table]) sets the connection information for Redis.
ok: returns 1 on success, nil on failure
err: the corresponding error message
(4) set_timeout
Syntax: redis:set_timeout(time) sets the timeout for Redis operations.
(5) close
Syntax: ok, err = redis:close() closes the current connection; returns 1 on success, nil and an error message on failure.
(6) methods corresponding to Redis commands
In lua-resty-redis, every Redis command has its own method; the method name is the same as the command name, only in lower case.
location / {
    default_type "text/html";
    content_by_lua_block {
        local redis = require "resty.redis"       -- import the Redis module
        local redisObj = redis:new()              -- create a Redis object
        redisObj:set_timeout(1000)                -- set the timeout to 1s
        local ok, err = redisObj:connect("192.168.200.1", 6379)  -- set the Redis connection info
        if not ok then                            -- check whether the connection succeeded
            ngx.say("failed to connection redis", err)
            return
        end
        ok, err = redisObj:set("username", "TOM") -- store data
        if not ok then                            -- check whether the store succeeded
            ngx.say("failed to set username", err)
            return
        end
        local res, err = redisObj:get("username") -- fetch the data from Redis
        ngx.say(res)                              -- write the data to the response body
        redisObj:close()
    }
}


ngx_lua operates Mysql


driverClass=com.mysql.jdbc.Driver
url=jdbc:mysql://192.168.200.111:3306/nginx_db
username=root
password=123456
(1) Import the "resty.mysql" module
local mysql = require "resty.mysql"
(2) new
Creates a MySQL connection object; on error, db is nil and err is the error description.
Syntax: db, err = mysql:new()
(3) connect
Attempts to connect to a MySQL server.
Syntax: ok, err = db:connect(options); options is a Lua table containing the connection information:
host: server host name or IP address
port: server listening port, default 3306
user: login user name
password: login password
database: name of the database to use
(4) set_timeout
Sets the timeout (ms) of subsequent operations, including the connect method.
Syntax: db:set_timeout(time)
(5) close
Closes the current MySQL connection and returns the status: 1 on success; nil and an error description on any error.
Syntax: db:close()
(6) send_query
Sends a query to the remote MySQL server asynchronously. Returns the number of bytes sent on success; nil and an error description on error.
Syntax: bytes, err = db:send_query(sql)
(7) read_result
Reads one result from the MySQL server. res is a Lua table describing an OK packet or a result-set packet. Syntax:
res, err, errcode, sqlstate = db:read_result()
res, err, errcode, sqlstate = db:read_result(rows)  -- rows specifies the maximum number of rows returned for a result set, default 4
For a query, an array holding multiple rows is returned; each row is a table of column key-value pairs, such as
{
    { id=1, username="TOM", birthday="1988-11-11", salary=10000.0 },
    { id=2, username="JERRY", birthday="1989-11-11", salary=20000.0 }
}
For insert/update/delete, data like the following is returned:
{
    insert_id = 0,
    server_status = 2,
    warning_count = 1,
    affected_rows = 2,
    message = nil
}
Return values:
res: the result set of the operation
err: error message
errcode: MySQL error code, e.g. 1064
sqlstate: the standard five-character SQL error code, e.g. 42000

location / {
    content_by_lua_block {
        local mysql = require "resty.mysql"
        local db = mysql:new()
        local ok, err = db:connect{
            host = "192.168.200.111",
            port = 3306,
            user = "root",
            password = "123456",
            database = "nginx_db"
        }
        db:set_timeout(1000)
        db:send_query("select * from users where id = 1")
        local res, err, errcode, sqlstate = db:read_result()
        ngx.say(res[1].id .. "," .. res[1].username .. "," .. res[1].birthday .. "," .. res[1].salary)
        db:close()
    }
}


location / {
    content_by_lua_block {
        local mysql = require "resty.mysql"
        local cjson = require "cjson"
        local db = mysql:new()
        local ok, err = db:connect{
            host = "192.168.200.111",
            port = 3306,
            user = "root",
            password = "123456",
            database = "nginx_db"
        }
        db:set_timeout(1000)
        --db:send_query("select * from users where id = 2")
        db:send_query("select * from users")
        local res, err, errcode, sqlstate = db:read_result()
        ngx.say(cjson.encode(res))
        for i, v in ipairs(res) do
            ngx.say(v.id .. "," .. v.username .. "," .. v.birthday .. "," .. v.salary)
        end
        db:close()
    }
}


location / {
    content_by_lua_block {
        local mysql = require "resty.mysql"
        local db = mysql:new()
        local ok, err = db:connect{
            host = "192.168.200.1",
            port = 3306,
            user = "root",
            password = "123456",
            database = "nginx_db",
            max_packet_size = 1024,
            compact_arrays = false
        }
        db:set_timeout(1000)
        local res, err, errcode, sqlstate = db:query("select * from users")
        --local res, err, errcode, sqlstate = db:query("insert into users(id,username,birthday,salary) values(null,'zhangsan','2020-11-11',32222.0)")
        --local res, err, errcode, sqlstate = db:query("update users set username='lisi' where id = 6")
        --local res, err, errcode, sqlstate = db:query("delete from users where id = 6")
        db:close()
    }
}

Comprehensive small case


init_by_lua_block {
    redis = require "resty.redis"
    mysql = require "resty.mysql"
    cjson = require "cjson"
}
location / {
    default_type "text/html";
    content_by_lua_block {
        -- get the request parameter username
        local param = ngx.req.get_uri_args()["username"]
        -- connect to the MySQL database
        local db = mysql:new()
        local ok, err = db:connect{
            host = "192.168.200.111",
            port = 3306,
            user = "root",
            password = "123456",
            database = "nginx_db"
        }
        if not ok then
            ngx.say("failed connect to mysql:", err)
            return
        end
        -- set the connection timeout
        db:set_timeout(1000)
        -- query the data
        local sql = ""
        if not param then
            sql = "select * from users"
        else
            sql = "select * from users where username=" .. "'" .. param .. "'"
        end
        local res, err, errcode, sqlstate = db:query(sql)
        if not res then
            ngx.say("failed to query from mysql:", err)
            return
        end
        -- connect to Redis
        local rd = redis:new()
        ok, err = rd:connect("192.168.200.111", 6379)
        if not ok then
            ngx.say("failed to connect to redis:", err)
            return
        end
        rd:set_timeout(1000)
        -- iterate over the rows and store each one in Redis
        for i, v in ipairs(res) do
            rd:set("user_" .. v.username, cjson.encode(v))
        end
        ngx.say("success")
        rd:close()
        db:close()
    }
}


Origin blog.csdn.net/qq_51753851/article/details/131866384