Main application scenarios of Nginx

Foreword

This article covers only what Nginx can do without loading third-party modules; there are too many third-party modules to introduce them all. It is also not necessarily complete, since it only covers what I have personally used and learned about, so please bear with me, and feel free to leave a comment to discuss.

What can Nginx do

1. Reverse proxy

2. Load balancing

3. HTTP server (including dynamic and static separation)

4. Forward proxy

These are the capabilities of Nginx I know of that do not rely on third-party modules. The following sections explain how to configure each of them.

Reverse proxy

Reverse proxying is probably the thing Nginx is used for most. What is a reverse proxy? Here is how Baidu Encyclopedia puts it: with a reverse proxy, a proxy server accepts connection requests from the Internet, forwards them to a server on the internal network, and returns the result obtained from that server to the client that made the request; to the outside world, the proxy server itself appears to be the server. Simply put, the real server cannot be reached directly from the external network, so a proxy server is needed: one that the external network can reach and that sits in the same network environment as the real server (it may even be the same machine, just listening on a different port). Here is a simple configuration that implements a reverse proxy:

    server {
        listen       80;
        server_name  localhost;
        client_max_body_size 1024M;

        location / {
            proxy_pass http://localhost:8080;
            proxy_set_header Host $host:$server_port;
        }
    }

After saving the configuration file and starting Nginx, accessing localhost is equivalent to accessing localhost:8080.
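In practice, a reverse proxy usually also passes the client's real address through, since the backend otherwise only sees the proxy's IP. A minimal sketch; X-Real-IP and X-Forwarded-For are the conventional header names, so use whatever your backend actually expects:

        location / {
            proxy_pass http://localhost:8080;
            proxy_set_header Host $host:$server_port;
            # without these, the backend only sees Nginx's own address
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }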

Load balancing

Load balancing is another frequently used Nginx feature. Load balancing means spreading work across multiple processing units, such as web servers, FTP servers, or critical enterprise application servers, so that they complete the tasks together. Simply put, when there are two or more servers, incoming requests are distributed among them according to configured rules. Load balancing is usually set up together with a reverse proxy, which forwards the requests to the balanced group. Nginx supports three load balancing strategies out of the box, plus two commonly used third-party ones.

1. RR (default)

Each request is assigned to a different backend server in turn; if a backend server goes down, it is automatically taken out of rotation.

Simple configuration

    upstream test {
        server localhost:8080;
        server localhost:8081;
    }
    server {
        listen       81;                                                        
        server_name  localhost;                                              
        client_max_body_size 1024M;

        location / {
            proxy_pass http://test;
            proxy_set_header Host $host:$server_port;
        }
    }

The core of the load balancing configuration is:

    upstream test {
        server localhost:8080;
        server localhost:8081;
    }

Two servers are configured here. In reality both entries point at the same machine with different ports, and the server on port 8081 does not exist at all, i.e. it cannot be reached. Yet accessing http://localhost causes no problems: requests fall through to http://localhost:8080, because Nginx automatically tracks each server's status and does not forward requests to a server it cannot reach (one that is down). This also prevents a single dead server from taking the whole site down. Since RR is Nginx's default policy, no further settings are needed.
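How quickly Nginx gives up on a dead server can be tuned with the standard max_fails and fail_timeout parameters on each server line; a sketch:

    upstream test {
        # after 3 failed attempts, skip this server for 30 seconds
        server localhost:8080 max_fails=3 fail_timeout=30s;
        server localhost:8081 max_fails=3 fail_timeout=30s;
    }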

2. Weight

Specifies the polling probability: the weight is proportional to the share of requests a server receives, which is useful when the backend servers have uneven performance. For example:

    upstream test {
        server localhost:8080 weight=9;
        server localhost:8081 weight=1;
    }

Then out of every 10 requests, 9 will go to 8080 and only 1 to 8081.
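Weight can be combined with other standard server parameters. For instance, the backup parameter marks a hot standby that only receives requests when the non-backup servers are all unavailable; a sketch, where localhost:8082 is a hypothetical standby instance:

    upstream test {
        server localhost:8080 weight=9;
        server localhost:8081 weight=1;
        # only receives traffic when the servers above are down
        server localhost:8082 backup;
    }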

3. ip_hash

The two methods above share a problem: the next request from the same client may be sent to a different server. If the application is not stateless (for example, it keeps data in a session), this is a serious issue: if the login state lives in the session, the user has to log in again whenever they land on another server. So we often need a given client to always reach the same server, and that is what ip_hash is for: it assigns each request according to a hash of the client's IP address, so each visitor sticks to one backend server, which solves the session problem.

    upstream test {
        ip_hash;
        server localhost:8080;
        server localhost:8081;
    }
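One caveat from the Nginx documentation: if a server in an ip_hash group has to be taken offline temporarily, mark it with the down parameter instead of deleting the line, so the hash mapping of the remaining clients is preserved:

    upstream test {
        ip_hash;
        server localhost:8080;
        # temporarily out of service; keeping the line preserves the hashing
        server localhost:8081 down;
    }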

4. fair (third party)

Requests are distributed according to the response time of the backend servers; servers with shorter response times are preferred.

    upstream backend {
        fair;
        server localhost:8080;
        server localhost:8081;
    }

5. url_hash (third party)

Requests are distributed according to a hash of the requested URL, so each URL always reaches the same backend server. This is most effective when the backend servers cache content. Add the hash statement to the upstream block; parameters such as weight must not appear on the server lines. hash_method selects the hash algorithm:

    upstream backend {
        hash $request_uri;
        hash_method crc32;
        server localhost:8080;
        server localhost:8081;
    }
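As an aside, Nginx 1.7.2 and later ship a built-in hash directive that covers this use case without a third-party module; an equivalent sketch (the optional consistent parameter enables ketama consistent hashing, which minimizes remapping when servers are added or removed):

    upstream backend {
        # built into ngx_http_upstream_module since 1.7.2
        hash $request_uri consistent;
        server localhost:8080;
        server localhost:8081;
    }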

These five load balancing methods suit different situations, so choose the strategy that fits your actual scenario. Note that fair and url_hash require third-party modules to be installed first. Since this article focuses on what Nginx can do out of the box, installing third-party modules is not covered here.

HTTP server

Nginx itself is also a server for static resources; when a site contains only static resources, Nginx can serve it on its own. Separating static and dynamic resources, which is very popular nowadays, can likewise be done with Nginx. First, Nginx as a static resource server:

    server {
        listen       80;
        server_name  localhost;
        client_max_body_size 1024M;

        location / {
            root   e:/wwwroot;
            index  index.html;
        }
    }

With this, visiting http://localhost serves index.html from the wwwroot directory on drive E by default. A website consisting only of static pages can be deployed this way.
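Since the point of serving static files from Nginx is speed, compression and browser caching are usually enabled as well. A minimal sketch, assuming the same e:/wwwroot layout (tune the MIME types and expiry to your site):

    location / {
        root   e:/wwwroot;
        index  index.html;
        # compress text-based responses on the fly
        gzip        on;
        gzip_types  text/css application/javascript;
        # let browsers cache responses for a week
        expires     7d;
    }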

Dynamic and static separation

Dynamic and static separation means splitting a dynamic website's resources, according to certain rules, into those that rarely change and those that change frequently. Once split, the static resources can be cached according to their characteristics; this is the core idea behind static optimization of websites.

    upstream test {
        server localhost:8080;
        server localhost:8081;
    }

    server {
        listen       80;
        server_name  localhost;

        location / {
            root   e:/wwwroot;
            index  index.html;
        }

        # all static requests are handled by Nginx itself, served from e:/wwwroot
        location ~ \.(gif|jpg|jpeg|png|bmp|swf|css|js)$ {
            root    e:/wwwroot;
        }

        # all dynamic requests are forwarded to Tomcat
        location ~ \.(jsp|do)$ {
            proxy_pass  http://test;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   e:/wwwroot;
        }
    }

With this configuration, HTML, images, CSS, and JS go in the wwwroot directory, while Tomcat only handles JSP requests and the like. For example, when the requested file ends in .gif, Nginx serves the static image from wwwroot directly. Here the static files sit on the same machine as Nginx, but they could equally live on another server, reached through a reverse proxy and load balancing, as sketched below. Once you understand the basic flow, many configurations become simple. Note also that a location block matches on a regular expression, which makes it very flexible.
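For example, moving the static files to dedicated servers only requires turning the static location into a reverse proxy as well. A sketch, where static_servers is a hypothetical upstream of file servers added alongside the existing upstream:

    upstream static_servers {
        server localhost:8082;
        server localhost:8083;
    }

    location ~ \.(gif|jpg|jpeg|png|bmp|swf|css|js)$ {
        # fetch static assets from the file servers instead of the local disk
        proxy_pass http://static_servers;
    }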

Forward proxy

A forward proxy is a server that sits between the client and the origin server: to get content from the origin server, the client sends a request to the proxy and names the target (the origin server); the proxy then forwards the request to the origin server and returns the obtained content to the client. The client has to be configured to use the forward proxy. When you need your own server to act as a proxy, Nginx can implement a forward proxy, but plain Nginx has one limitation: it does not support proxying HTTPS. I have searched for HTTPS forward proxy configurations but never got one to work; of course, my configuration may simply have been wrong, so if you know the correct method, please leave a comment.

    resolver 114.114.114.114 8.8.8.8;

    server {
        resolver_timeout 5s;

        listen 81;

        access_log  e:/wwwroot/proxy.access.log;
        error_log   e:/wwwroot/proxy.error.log;

        location / {
            proxy_pass http://$host$request_uri;
        }
    }

resolver sets the DNS servers the forward proxy uses, and listen sets the port it listens on. Once configured, you can use the server's IP and port number as the proxy in IE or in any other proxy plugin.
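You can also verify the proxy from the command line with curl's -x option; assuming the proxy runs at 192.168.1.100 (a placeholder address):

curl -x http://192.168.1.100:81 http://www.example.com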

A couple of final words

Nginx supports hot reload: after modifying the configuration file, the new configuration can take effect without shutting Nginx down. I have no idea how many people know this; I certainly did not at first, which led me to repeatedly kill the Nginx process and start it again. The command that makes Nginx re-read its configuration is:

nginx -s reload

On Windows it is:

nginx.exe -s reload
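Before reloading, it is worth validating the new configuration first; nginx -t only parses and tests the configuration files and reports errors without touching the running process. Combined:

nginx -t && nginx -s reload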

Reprinted from: https://mp.weixin.qq.com/s/uAqBgdF3tsORJtQayuz8mw
