Nginx installation, reverse proxy, and in-depth optimization

One, Nginx installation
The basic concepts of Nginx were covered in detail in a previous post: https://blog.51cto.com/14227204/2464167
This post starts directly from the installation.

Preparing the environment:

  • Three CentOS 7.5 machines: one runs Nginx, and the other two run a simple web service, used mainly to test the effect of the Nginx reverse proxy;
  • A download package I provide, containing the source needed for the caching and compression optimizations when installing Nginx:
    * https://pan.baidu.com/s/1AJlDAkgdUd4uV1Bfjm46oA (extraction code: gnlb)


Note (the effects to be achieved are as follows):

  • Bind an upstream pool of backend web servers for proxying and load balancing;
  • Use the proxy module to implement static file caching;
  • Implement health checks of the backend servers with the ngx_http_proxy_module and ngx_http_upstream_module modules that ship with Nginx by default (the third-party module nginx_upstream_check_module can also be used);
  • Use the nginx-sticky-module extension to implement session persistence;
  • Use ngx_cache_purge for a more powerful cache-clearing function;
  • Use the ngx_brotli module for page file compression.

The third-party extension modules mentioned above must be downloaded as source in advance (the download link covering these modules is given earlier), and then compiled in together with Nginx via --add-module=src_path at configure time.
1, Install Nginx

[root@nginx nginx-1.14.0]# yum -y erase httpd     # remove the system's default httpd service to avoid a port conflict
[root@nginx nginx-1.14.0]# yum -y install openssl-devel pcre-devel    # install the required dependencies
[root@nginx src]# rz          # upload the required source packages with the rz command
[root@nginx src]# ls          # confirm the uploaded source packages
nginx-sticky-module.zip    ngx_brotli.tar.gz
nginx-1.14.0.tar.gz  ngx_cache_purge-2.3.tar.gz
# unpack the uploaded source packages
[root@nginx src]# tar zxf nginx-1.14.0.tar.gz  
[root@nginx src]# unzip nginx-sticky-module.zip 
[root@nginx src]# tar zxf ngx_brotli.tar.gz 
[root@nginx src]# tar zxf ngx_cache_purge-2.3.tar.gz 
[root@nginx src]# cd nginx-1.14.0/        # switch to the nginx source directory
[root@nginx nginx-1.14.0]#  ./configure --prefix=/usr/local/nginx1.14 --user=www --group=www --with-http_stub_status_module  --with-http_realip_module  --with-http_ssl_module --with-http_gzip_static_module  --http-client-body-temp-path=/var/tmp/nginx/client --http-proxy-temp-path=/var/tmp/nginx/proxy  --http-fastcgi-temp-path=/var/tmp/nginx/fcgi --with-pcre  --add-module=/usr/src/ngx_cache_purge-2.3  --with-http_flv_module --add-module=/usr/src/nginx-sticky-module && make && make install
# compile and install, loading the required modules with the "--add-module" option
# note: the ngx_brotli module is deliberately NOT loaded here, to demonstrate later how to add a module to an already-installed nginx

The configure options above are explained as follows:

  • --with-http_stub_status_module: monitor Nginx status via a web page;
  • --with-http_realip_module: obtain the real IP address of the client;
  • --with-http_ssl_module: enable Nginx's encrypted transmission (SSL) function;
  • --with-http_gzip_static_module: enable serving pre-compressed static files;
  • --http-client-body-temp-path=/var/tmp/nginx/client: temporary storage path for client request data (cache storage path);
  • --http-proxy-temp-path=/var/tmp/nginx/proxy: same as above, for proxy temporaries;
  • --http-fastcgi-temp-path=/var/tmp/nginx/fcgi: same as above, for FastCGI temporaries;
  • --with-pcre: support regular-expression matching;
  • --add-module=/usr/src/ngx_cache_purge-2.3: add a third-party module to nginx; the syntax is --add-module=path_to_third_party_module;
  • --add-module=/usr/src/nginx-sticky-module: same as above;
  • --with-http_flv_module: support flv video streaming.

2, Start the Nginx service

[root@nginx nginx-1.14.0]# ln -s /usr/local/nginx1.14/sbin/nginx /usr/local/sbin/
# create a symlink for the nginx command so it can be run directly
[root@nginx nginx-1.14.0]# useradd -M -s /sbin/nologin www
[root@nginx nginx-1.14.0]# mkdir -p /var/tmp/nginx/client
[root@nginx nginx-1.14.0]# nginx -t      # check the nginx configuration file
nginx: the configuration file /usr/local/nginx1.14/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx1.14/conf/nginx.conf test is successful
[root@nginx nginx-1.14.0]# nginx       # start the nginx service
[root@nginx nginx-1.14.0]# netstat -anpt | grep ":80"    # check whether port 80 is listening
tcp   0   0 0.0.0.0:80      0.0.0.0:*        LISTEN      7584/nginx: master  

Two, Implementing the Nginx reverse proxy service
Before implementing the reverse proxy, let's first clarify: what is a reverse proxy, and what is a forward proxy?

1, Forward proxy
A forward proxy handles requests from an internal network (for example, behind NAT) to the Internet: the client specifies the proxy server and sends requests destined for a web server to the proxy first; the proxy then accesses the web server on the client's behalf and returns the server's response to the client. In this case, the proxy server is a forward proxy.

2, Reverse proxy
The reverse of a forward proxy: to expose LAN resources to the Internet so that other users on the Internet can access them, a proxy server can also be set up, this time providing a reverse proxy service. The reverse proxy server accepts connections from the Internet, forwards each request to a server on the internal network, and returns that server's response to the client on the Internet that made the request.

In short: a forward proxy acts on behalf of the client, accessing the web server in its place; a reverse proxy acts on behalf of the web server, responding to clients in its place.
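As a minimal sketch of the reverse-proxy idea (the backend address is just one of this article's web servers, standing in for any origin), the whole mechanism reduces to a single proxy_pass directive; the fuller configuration in the next section builds on this:

```nginx
server {
    listen 80;
    location / {
        # accept the Internet-facing connection, forward the request
        # to the internal server, and relay its response back
        proxy_pass http://192.168.20.2:80;
    }
}
```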

3, Nginx reverse proxy configuration
Nginx can be configured as a reverse proxy and load balancer while also taking advantage of its caching capability: caching static pages reduces the number of connections to the backend servers, and the health of the backend web servers can be checked as well.

The environment is as follows:

  • Nginx acts as the reverse proxy server;
  • Two backend web servers form the web server pool;
  • A client accesses the Nginx proxy server and, by refreshing the page several times, gets pages returned by different backend web servers.

Start by configuring the Nginx server:

[root@nginx ~]# cd /usr/local/nginx1.14/conf/      # switch to the configuration directory
[root@nginx conf]# vim nginx.conf           # edit the main configuration file
             ........................# some content omitted
http{
             ........................# some content omitted
upstream backend {
        sticky;
        server 192.168.20.2:80 weight=1 max_fails=2 fail_timeout=10s;
        server 192.168.20.3:80 weight=1 max_fails=2 fail_timeout=10s;
    }
            ........................# some content omitted
server {
location / {
            #root   html;                            # comment out the original root directory
            #index  index.html index.htm;        # comment out this line
            proxy_pass http://backend;     # the "backend" specified here must match the name of the web pool above.
        }
   }
}
# after editing, save and exit.
[root@nginx conf]# nginx -t            # check the configuration file and confirm it is correct
[root@nginx conf]# nginx -s reload        # reload the nginx service so the changes take effect

The web pool configuration contains a "sticky" item, which works because the nginx-sticky module was compiled in. This module uses a cookie to glue requests from the same client (browser) to the same backend server, which solves, to a certain extent, the session-synchronization problem across multiple backend servers. (Session synchronization means, for example, that once you have logged in to a page, you do not need to log in again for a certain period; that is the concept of a session.) With plain round-robin (RR) polling, operations staff must implement session synchronization themselves. The built-in ip_hash can also distribute requests based on the client IP, but it can easily cause load imbalance: if the requests reaching Nginx all come from the same local area network, the client IP it sees is identical, so the load skews to one server. The cookie set by nginx-sticky-module expires, by default, when the browser is closed.

This module is not suitable for browsers that do not support cookies or have cookies manually disabled; in that case sticky falls back to the default RR scheduling. It cannot be used together with ip_hash.
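As a hedged sketch (option names as documented for typical nginx-sticky-module builds; verify against the module version in the download package), the cookie's lifetime can be extended beyond browser close:

```nginx
upstream backend {
    # issue a sticky cookie named "route" that survives browser
    # restarts for 1 hour instead of expiring on browser close
    sticky name=route expires=1h;
    server 192.168.20.2:80 weight=1 max_fails=2 fail_timeout=10s;
    server 192.168.20.3:80 weight=1 max_fails=2 fail_timeout=10s;
}
```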

Sticky is just one of the scheduling algorithms Nginx can use; the other scheduling algorithms supported by the Nginx load-balancing module are as follows:

  • Round robin (RR, the default): requests are assigned to the different backend servers one by one in time order; if a backend server goes down, the failed system is removed automatically so that user access is unaffected. With weighted round robin, a larger weight value means a higher probability of being assigned requests; this is mainly used when the performance of the backend servers is uneven.
  • ip_hash: each request is assigned according to the hash of the accessing IP, so that visitors from a fixed IP always reach the same backend server, which effectively solves the session-sharing problem of dynamic pages. Of course, if that node becomes unavailable, requests are sent to the next node, and at that point, without session synchronization, the session is lost.
  • least_conn: the request is sent to the realserver with the fewest currently active connections. The weight value is also taken into account.
  • url_hash: requests are assigned according to the hash of the accessed URL, directing each URL to the same backend server, which can further improve the efficiency of backend cache servers. Nginx itself does not support url_hash; to use this scheduling algorithm, the Nginx hash package nginx_upstream_hash must be installed.
  • fair: a smarter load-balancing algorithm than the two above. It can balance load intelligently based on page size and load duration, i.e., it assigns requests according to the backend servers' response times, giving priority to those with short response times. Nginx itself does not support fair; to use this scheduling algorithm, the upstream_fair module must be downloaded.
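As a sketch, switching the pool used in this article from sticky to one of the built-in algorithms only changes the first line of the upstream block (addresses as above):

```nginx
upstream backend {
    least_conn;    # pick the server with the fewest active connections
    server 192.168.20.2:80 weight=1 max_fails=2 fail_timeout=10s;
    server 192.168.20.3:80 weight=1 max_fails=2 fail_timeout=10s;
}
```

Replacing `least_conn;` with `ip_hash;` would give IP-based stickiness instead; round robin is what you get when no algorithm directive is present at all.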
The parameters that follow the web servers' IP addresses in the upstream pool of the configuration above are explained as follows:

  • weight: polling weight, which can also be used with ip_hash; the default value is 1;
  • max_fails: the number of failed requests allowed; the default is 1. When the maximum number is exceeded, the error defined by the proxy_next_upstream directive is returned.
  • fail_timeout: has two meanings: first, at most 2 failures are allowed within 10s; second, after 2 failures, no requests are assigned to that server for 10s.
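Putting these parameters in context, a hedged sketch (the spare server at 192.168.20.4 is hypothetical and not part of this article's environment) of how a standby machine could be added to the same pool:

```nginx
upstream backend {
    # each server tolerates 2 failures within 10s, then is skipped for 10s
    server 192.168.20.2:80 weight=1 max_fails=2 fail_timeout=10s;
    server 192.168.20.3:80 weight=1 max_fails=2 fail_timeout=10s;
    # hypothetical spare: receives traffic only when both servers above are down
    server 192.168.20.4:80 backup;
}
```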

The configuration of the servers in the web pool is as follows (for reference only; for this test, a simply built httpd service is enough):

[root@web01 ~]# yum -y install httpd            # install the httpd service
[root@web01 ~]# echo "192.168.20.2" > /var/www/html/index.html  # prepare a different web page file on each of the two web servers
[root@web01 ~]# systemctl start httpd      # start the web service

The second web server is set up with the same operations, but be sure to prepare a different web page file so that load balancing can be tested.

Now client access can be verified; note that the Nginx proxy server must be able to communicate with both web servers.

Test access from the Nginx proxy server itself (the responses can be seen polling across the servers in the web pool):
[screenshot: responses alternating between the two backend web servers]
If you test from a Windows client, the "sticky" item in the configuration means every refresh is forwarded to the same web server, so the load-balancing effect cannot be observed; simply comment out the "sticky" line to test load balancing.
Three, Nginx service optimization
Besides controlling the worker processes, optimization involves a few more important concepts, namely page caching and compression. Since quite a few configuration items are involved, the complete, commented http {} section of the configuration file is given below, and an uncommented http {} section is attached at the end of the post.

Before optimizing: when compiling and installing Nginx earlier, one module was deliberately left unloaded, precisely in order to demonstrate how to load an additional module when an already-installed nginx needs it.

The procedure is as follows:

[root@nginx conf]# cd /usr/src/nginx-1.14.0/     # switch to the Nginx source directory
[root@nginx nginx-1.14.0]# nginx -V    # run "nginx -V" to view the modules already loaded
nginx version: nginx/1.14.0
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC) 
built with OpenSSL 1.0.2k-fips  26 Jan 2017
TLS SNI support enabled
configure arguments: --prefix=/usr/local/nginx1.14 --user=www --group=www --with-http_stub_status_module --with-http_realip_module --with-http_ssl_module --with-http_gzip_static_module --http-client-body-temp-path=/var/tmp/nginx/client --http-proxy-temp-path=/var/tmp/nginx/proxy --http-fastcgi-temp-path=/var/tmp/nginx/fcgi --with-pcre --add-module=/usr/src/ngx_cache_purge-2.3 --with-http_flv_module --add-module=/usr/src/nginx-sticky-module
[root@nginx nginx-1.14.0]# ./configure --prefix=/usr/local/nginx1.14 --user=www --group=www --with-http_stub_status_module --with-http_realip_module --with-http_ssl_module --with-http_gzip_static_module --http-client-body-temp-path=/var/tmp/nginx/client --http-proxy-temp-path=/var/tmp/nginx/proxy --http-fastcgi-temp-path=/var/tmp/nginx/fcgi --with-pcre --add-module=/usr/src/ngx_cache_purge-2.3 --with-http_flv_module --add-module=/usr/src/nginx-sticky-module --add-module=/usr/src/ngx_brotli && make
# copy the configure arguments found above and recompile with them, appending the module to add
# here the third-party module "--add-module=/usr/src/ngx_brotli" has been appended
[root@nginx nginx-1.14.0]# mv /usr/local/nginx1.14/sbin/nginx /usr/local/nginx1.14/sbin/nginx.bak
# rename the original Nginx binary as a backup
[root@nginx nginx-1.14.0]# cp objs/nginx /usr/local/nginx1.14/sbin/    
# copy the newly built Nginx binary into the corresponding directory
[root@nginx nginx-1.14.0]# ln -sf /usr/local/nginx1.14/sbin/nginx /usr/local/sbin/  
# refresh the symlink so it points at the new nginx command
[root@nginx ~]# nginx -s reload                  # reload the nginx service

At this point, adding the new module is complete.
1, Using the Nginx proxy cache
Caching means storing static files such as js, css, and images in a cache directory specified by nginx. This not only reduces the load on the backend servers, but also speeds up access. Cleaning this cache in good time then becomes a problem of its own, which is where the ngx_cache_purge module comes in: it allows the cache to be cleared manually before the expiration time arrives.

The commonly used directives of the proxy module are proxy_pass and proxy_cache.

Nginx's web caching function is implemented mainly through the proxy_cache and fastcgi_cache directive sets together with their related directives. proxy_cache handles reverse-proxy caching of the backend servers' static content; fastcgi_cache is mainly used to cache dynamic FastCGI content (caching dynamic pages is not recommended in production environments).

The configuration is as follows:

http {
 include       mime.types;
    default_type  application/octet-stream;
    upstream backend {
        sticky;
        server 192.168.20.2:80 weight=1 max_fails=2 fail_timeout=10s;
        server 192.168.20.3:80 weight=1 max_fails=2 fail_timeout=10s;
    }
   log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"'     # note: delete the semicolon that originally ended this line.
                        '"$upstream_cache_status"';    # add this line to record the cache hit status in the log
    access_log  logs/access.log  main;

        # add the following lines of configuration
    proxy_buffering on;   # when proxying, buffer the backend server's responses
    proxy_temp_path /usr/local/nginx1.14/proxy_temp;
    proxy_cache_path /usr/local/nginx1.14/proxy_cache levels=1:2 keys_zone=my-cache:100m inactive=600m max_size=2g;
# the server section is configured as follows:
server {
        listen       80;
        server_name  localhost;
        #charset koi8-r;
        #access_log  logs/host.access.log  main;
        location ~/purge(/.*) {    # this purge location is used to clear the cache manually
        allow 127.0.0.1;
        allow 192.168.20.0/24;
        deny all;
        proxy_cache_purge my-cache $host$1$is_args$args;
        }
        location / {
            proxy_pass http://backend;
    # add the following cache-related settings inside this "/" location
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
            proxy_cache my-cache;
            add_header Nginx-Cache $upstream_cache_status;
            proxy_cache_valid 200 304 301 302 8h;
            proxy_cache_valid 404 1m;
            proxy_cache_valid any 1d;
            proxy_cache_key $host$uri$is_args$args;
            expires 30d;
        }
}
# after editing, save and exit
[root@nginx conf]# nginx -t        # check the configuration file
nginx: the configuration file /usr/local/nginx1.14/conf/nginx.conf syntax is ok
nginx: [emerg] mkdir() "/usr/local/nginx1.14/proxy_temp" failed (2: No such file or directory)
nginx: configuration file /usr/local/nginx1.14/conf/nginx.conf test failed
# the corresponding directory was not found
[root@nginx conf]# mkdir -p /usr/local/nginx1.14/proxy_temp    # so create the corresponding directory
[root@nginx conf]# nginx -t      # check again: OK now
nginx: the configuration file /usr/local/nginx1.14/conf/nginx.conf syntax  is ok
nginx: configuration file /usr/local/nginx1.14/conf/nginx.conf test is successful
[root@nginx conf]# nginx -s reload         # reload the Nginx service

Client access test (using the Google Chrome browser; press F12 before accessing):
[screenshot: first request showing the response header Nginx-Cache: MISS]
Press "F5" to refresh:
[screenshot: after refreshing, the response header shows Nginx-Cache: HIT]
MISS means a cache miss: the request was passed to the backend. HIT means a cache hit. On the first visit the page is not yet cached on the Nginx server, so the request is sent to the backend web server; on the second refresh Nginx has a local copy, so the header shows "HIT", a cache hit.

Cache-related records can also be seen in the Nginx access log:

[root@nginx conf]# tail ../logs/access.log      # view the access log

[screenshot: access log entries ending with the $upstream_cache_status value]
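Because the log_format above appends "$upstream_cache_status" as the last quoted field, a rough hit rate can be computed straight from the log. A small sketch (the sample lines below are made up, and /tmp/access.sample is just a stand-in for logs/access.log):

```shell
# Build a fake access log in the same shape as the log_format above:
# the last quoted field is the cache status.
cat > /tmp/access.sample <<'EOF'
192.168.20.100 - - [18/Feb/2020:10:00:00 +0800] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0" "-""MISS"
192.168.20.100 - - [18/Feb/2020:10:00:05 +0800] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0" "-""HIT"
192.168.20.100 - - [18/Feb/2020:10:00:09 +0800] "GET / HTTP/1.1" 200 13 "-" "curl/7.29.0" "-""HIT"
EOF
# Split on double quotes; the status is the next-to-last field.
awk -F'"' '{ c[$(NF-1)]++ } END { for (k in c) print k, c[k] }' /tmp/access.sample | sort
# → HIT 2
# → MISS 1
```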
The client then accesses the following address (the client must be in the network segment allowed by location ~ /purge(/.*)) to manually clear a page from the Nginx server's cache before it expires (if this does not succeed, first manually clear the client browser's own cache):
My screenshot here was cropped wrong, sorry. To clear the cache manually: if the URL accessed is "192.168.20.5/index.html", then the URL to request when clearing the cache is "192.168.20.5/purge/index.html"; if the URL accessed is "192.168.20.5", then the URL to request when clearing the cache is "192.168.20.5/purge/".
[screenshot: successful cache purge response]

The relevant parts of the configuration above are explained as follows:

  • proxy_buffering [on | off]: enables or disables buffering of the backend server's responses when proxying. With buffering on, Nginx receives the response from the proxied server as quickly as possible and stores it in buffers.
  • proxy_temp_path: temporary cache directory. The backend's response is not returned to the client directly; it is first written to a temporary file and then renamed into the cache under proxy_cache_path. Since version 0.8.9 the temp and cache directories may be on different file systems (partitions), but to reduce the performance cost it is recommended to keep them on the same file system.
  • proxy_cache_path: sets the cache directory; the file names in that directory are the MD5 values of the cache key.
  • levels=1:2 keys_zone=my-cache:100m means a two-level directory structure is used: set by levels=1:2, the first-level directory name is one character and the second-level (subdirectory) name is two characters. The web cache zone is named my-cache with 100MB of in-memory space; this zone can be referenced multiple times. On the file system, a cache file name looks like /usr/local/nginx1.14/proxy_cache/c/29/b7f54b2df7773722d382f4809d65029c.
  • inactive=600m max_size=2g means content not accessed for 600 minutes is removed automatically, and the maximum cache size on disk is 2GB; beyond that, the least recently used data is evicted.
  • proxy_cache: references the cache zone my-cache defined earlier.
  • proxy_cache_key: defines how the cache key is generated; nginx stores cache entries by the MD5 hash of this key.
  • proxy_cache_valid: sets different cache durations for different response status codes. Normal results such as 200 and 302 can be cached longer, while 404 and 500 get shorter times; when the time is up, the file expires regardless of whether it was just accessed.
  • add_header: sets a response header; syntax: add_header name value.
  • $upstream_cache_status: this variable shows the cache status, and an HTTP header can be added in the configuration to expose it. It takes the following values:
    * MISS: cache miss; the request was passed to the backend;
    * HIT: cache hit;
    * EXPIRED: the cached entry had expired and the request was passed to the backend;
    * UPDATING: the cache entry is being updated, so the old (stale) response is used;
    * STALE: a stale response was served from the cache;
  • expires: sets Expires: or Cache-Control: max-age in the response in advance, controlling when the copy in the client's browser cache expires.

2, Nginx compression optimization
Change the configuration file as follows (explanations of the related items are in the inline comments; the uncommented version is attached at the end of the post):

http {
    include       mime.types;
    default_type  application/octet-stream;
    brotli on;
    brotli_types text/plain text/css text/xml application/xml application/json;
    brotli_static off;       # whether to look for pre-compressed files ending in .br; valid values are on, off, always.
    brotli_comp_level 11;        # compression level; the range is 0~11, and a larger value means a higher compression ratio
    brotli_buffers 16 8k;      # number and size of the read buffers
    brotli_window 512k;       # sliding window size
    brotli_min_length 20;    # minimum number of bytes before data is compressed
    gzip  on;        # enable gzip compressed output to reduce network transfer.
    gzip_comp_level 6;     # gzip compression level: 1 gives the lowest ratio and fastest processing, 9 the highest ratio and slowest processing (faster transfer, but more CPU).
    gzip_http_version 1.1;    # identifies the HTTP protocol version. Early browsers did not support gzip compression and users would see garbled output, so this option exists to support those early versions. If Nginx is used as a reverse proxy and gzip should also be enabled there, set this to 1.1, because end-to-end communication uses HTTP/1.1.
    gzip_proxied any;     # used when Nginx acts as a reverse proxy: decides, based on certain requests and responses, whether to enable gzip compression for the response to a proxied request; compression depends on the "Via" field of the request headers. Several different parameters can be specified at once, with the following meanings:
# off – disable compression of all proxied response data
# expired – enable compression if the headers contain "Expires"
# no-cache – enable compression if the headers contain "Cache-Control: no-cache"
# no-store – enable compression if the headers contain "Cache-Control: no-store"
# private – enable compression if the headers contain "Cache-Control: private"
# no_last_modified – enable compression if the headers do not contain "Last-Modified"
# no_etag – enable compression if the headers do not contain "ETag"
# auth – enable compression if the headers contain "Authorization"
# any – enable compression unconditionally
    gzip_min_length 1k;
    gzip_buffers 16 8k;
    gzip_types text/plain text/css text/javascript application/json application/javascript application/x-javascript application/xml;
    gzip_vary on;      # related to the HTTP headers: adds a "Vary" header for the benefit of proxy servers. Some browsers support compression and some do not, so to avoid wasting compression on clients that cannot handle it, the client's HTTP headers are used to decide whether compression is needed
    client_max_body_size 10m;     # maximum size in bytes of a single file the client may request. If larger files are uploaded, set this limit accordingly
    client_body_buffer_size 128k;    # maximum number of bytes the proxy buffers for the client request body
        server_tokens off;     # hide the nginx version number
        # the following are http_proxy module settings:
    proxy_connect_timeout 75;      # timeout for nginx connecting to the backend server (proxy connection timeout)
    proxy_send_timeout 75;
    proxy_read_timeout 75;    # timeout for reading a response from the backend server. This is the maximum interval between two consecutive read operations, not the maximum time for the whole response; if the backend transmits nothing within this period, the connection is closed.
    proxy_buffer_size 4k;    # sets the buffer size to size. When nginx reads the response from the proxied server, it uses this buffer to hold the beginning of the response, which usually contains a small response header. By default this buffer equals the size of one buffer set by proxy_buffers, but it can be set smaller.
    proxy_buffers 4 32k;     # syntax: proxy_buffers number size; sets, per connection, the number of buffers to number and the size of each to size. These buffers hold the response read from the proxied server. Each buffer defaults to one memory page: 4K or 8K, depending on the platform.
# note: [root@nginx ~]# getconf PAGESIZE     # check the Linux memory page size
# 4096

    proxy_busy_buffers_size 64k;    # buffer size under high load (the default is twice the single-buffer size set by proxy_buffers)
    proxy_temp_file_write_size 64k;    # when the proxied server's response is cached to temporary files, this option limits the amount written to a temporary file per write.
    proxy_buffering on;
    proxy_temp_path /usr/local/nginx1.14/proxy_temp;
    proxy_cache_path /usr/local/nginx1.14/proxy_cache levels=1:2 keys_zone=my-cache:100m inactive=600m max_size=2g;
    upstream backend {
       sticky;
        server 192.168.20.2:80 weight=1 max_fails=2 fail_timeout=10s;
        server 192.168.20.3:80 weight=1 max_fails=2 fail_timeout=10s;
    }

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"'
                      '"$upstream_cache_status"';
    access_log  logs/access.log  main;
    sendfile        on;     # enable efficient file transfer mode.
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;        # keep-alive timeout in seconds. For keep-alive requests of many small files, this reduces the cost of re-establishing connections; but if it is set too long and there are many users, long-held connections consume a lot of resources.
    server {
        listen       80;
        server_name  localhost;
        #charset koi8-r;
        #access_log  logs/host.access.log  main;
        location ~/purge(/.*) {
        allow 127.0.0.1;
        allow 192.168.20.0/24;
        deny all;
        proxy_cache_purge my-cache $host$1$is_args$args;
        }
        location / {
            proxy_pass http://backend;    # forward requests to the server list defined by "backend", i.e. the reverse proxy, corresponding to the upstream load balancer. proxy_pass http://ip:port is also possible.
            proxy_redirect off;     # whether to rewrite the Location and Refresh header values in the response returned by the proxied server
# for example: this sets the replacement text of the backend server's "Location" and "Refresh"
# response headers. Suppose the backend returns the response header
# "Location: http://localhost:8000/two/some/uri/"; then the directive
# proxy_redirect http://localhost:8000/two/ http://frontend/one/; rewrites it to
# "Location: http://frontend/one/some/uri/".
            proxy_set_header Host $host;  # allows redefining or adding request headers sent to the backend server.
# Host indicates the host name of the request. As a reverse proxy, nginx sends requests to the real
# backend server, and the host field of the request header is rewritten to the server set by the
# proxy_pass directive. If the real backend server implements hotlink protection, or routes or makes
# decisions based on the host field of the HTTP request header, requests will fail unless the
# reverse-proxy nginx rewrites the host field in the request header.
            proxy_set_header X-Real-IP $remote_addr;        
# lets the backend web server obtain the user's real IP; in practice, the real IP can also be obtained via X-Forwarded-For below
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# the backend web servers can obtain the user's real IP through X-Forwarded-For. This field
# indicates who originated the HTTP request; if the reverse proxy does not rewrite this request
# header, the backend servers will think every request comes from the reverse proxy, and if the
# backend has a protection policy, the proxy machine would be blocked. Therefore, an nginx used
# as a reverse proxy generally adds two directives to rewrite the HTTP request headers:
          # the following two lines rewrite the HTTP request headers:
            proxy_set_header Host $host;
                        proxy_set_header X-Forward-For $remote_addr;
            proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
# adds failover: if the backend server returns an error such as 502 or 504, or times out,
# the request is automatically forwarded to another server in the upstream load-balancing pool.
            proxy_cache my-cache;
            add_header Nginx-Cache $upstream_cache_status;
            proxy_cache_valid 200 304 301 302 8h;
            proxy_cache_valid 404 1m;
            proxy_cache_valid any 1d;
            proxy_cache_key $host$uri$is_args$args;
            expires 30d;
                }
   location /nginx_status {        
                stub_status on;
                access_log off;
                allow 192.168.31.0/24;
                deny all;
            }
          ....................# some content omitted
}
# after the changes, save and exit
[root@nginx nginx1.14]# nginx -t     # check the configuration file
nginx: the configuration file /usr/local/nginx1.14/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx1.14/conf/nginx.conf test is successful
[root@nginx nginx1.14]# nginx -s reload        # reload the Nginx service
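The effect of gzip_comp_level can also be seen offline, without touching Nginx at all; a quick local comparison on a throwaway file (/tmp/sample.txt is purely illustrative):

```shell
# Generate 1000 identical lines (highly compressible test data).
yes "hello nginx compression" | head -n 1000 > /tmp/sample.txt
wc -c < /tmp/sample.txt               # original size: 24000 bytes
gzip -1 -c /tmp/sample.txt | wc -c    # level 1: fastest, lowest ratio
gzip -9 -c /tmp/sample.txt | wc -c    # level 9: slowest, highest ratio
```

On repetitive text like this, both levels shrink the file dramatically and level 9 comes out at least as small as level 1; gzip_comp_level 6 above is the usual middle ground between ratio and CPU cost.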

Verification:
1. Visiting the following address shows the Nginx server status statistics page:

[screenshot: the /nginx_status statistics page]
2. Confirm that the gzip function is enabled:

[screenshot: response headers showing gzip compression]
3. Test that br (brotli) compression is enabled (this needs to be checked from the command line):
[screenshot: command-line request showing br compression]
Additional: the http {} and server {} sections of the configuration file, without comments, are as follows:

http {
    include       mime.types;
    default_type  application/octet-stream;
    brotli on;
    brotli_types text/plain text/css text/xml application/xml application/json;
    brotli_static off;
    brotli_comp_level 11;
    brotli_buffers 16 8k;
    brotli_window 512k;
    brotli_min_length 20;
    gzip  on;
    gzip_comp_level 6;
    gzip_http_version 1.1;
    gzip_proxied any;
    gzip_min_length 1k;
    gzip_buffers 16 8k;
    gzip_types text/plain text/css text/javascript application/json application/javascript application/x-javascript application/xml;
    gzip_vary on;
    client_max_body_size 10m;
    client_body_buffer_size 128k;
    server_tokens off;
    proxy_connect_timeout 75;
    proxy_send_timeout 75;
    proxy_read_timeout 75;
    proxy_buffer_size 4k;
    proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
    proxy_temp_file_write_size 64k;
    proxy_buffering on; 
    proxy_temp_path /usr/local/nginx1.14/proxy_temp;
    proxy_cache_path /usr/local/nginx1.14/proxy_cache levels=1:2 keys_zone=my-cache:100m inactive=600m max_size=2g;
    upstream backend {
       sticky;
        server 192.168.20.2:80 weight=1 max_fails=2 fail_timeout=10s;
        server 192.168.20.3:80 weight=1 max_fails=2 fail_timeout=10s;
    }   

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"'
                      '"$upstream_cache_status"';
    access_log  logs/access.log  main;
    sendfile        on; 
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65; 

    #gzip  on;
   server {
        listen       80;
        server_name  localhost;
        #charset koi8-r;
        #access_log  logs/host.access.log  main;
        location ~/purge(/.*) {
        allow 127.0.0.1;
        allow 192.168.20.0/24;
        deny all;
        proxy_cache_purge my-cache $host$1$is_args$args;
        }
        location / {
            proxy_pass http://backend;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
            proxy_cache my-cache;
            add_header Nginx-Cache $upstream_cache_status;
            proxy_cache_valid 200 304 301 302 8h;
            proxy_cache_valid 404 1m;
            proxy_cache_valid any 1d;
            proxy_cache_key $host$uri$is_args$args;
            expires 30d;
        }
            location /nginx_status {
                stub_status on;
                access_log off;
                allow 192.168.20.0/24;
                deny all;
            }

        location = /50x.html {
            root   html;
        }
     }
}

Origin blog.51cto.com/14227204/2464333