Nginx reverse proxy usage - Load Balancing (2)

Original Reference: https://blog.csdn.net/zy1471162851/article/details/91795712

 

 

Tip: nginx is a high-performance HTTP server / reverse proxy server and email (IMAP/POP3) proxy server.

nginx application scenarios:

As an HTTP server, it can serve static web content;

Virtual hosting (serving multiple sites from one machine);

Reverse proxy and load balancing. When a site's traffic reaches a certain level and a single server can no longer satisfy user requests, a cluster of multiple servers is needed, and nginx can act as the reverse proxy in front of it. The servers then share the load evenly, avoiding the situation where one server sits idle while another is driven down by a high load.


Commonly used commands on Windows

start nginx - start (run from the nginx directory)

nginx.exe -s stop - stop immediately

nginx.exe -s quit - stop gracefully (finish serving current requests first)

nginx.exe -s reload - reload the configuration without restarting


nginx advantages

Small memory footprint, supports a high number of concurrent connections, and responds quickly.

Can act as an HTTP server, virtual host, reverse proxy, and load balancer.

Simple configuration.

Can hide the real IP addresses of the backend servers.

 

One: Reverse Proxy

 

Reverse proxy mode means the proxy server accepts connection requests from the internet, forwards them to a server on the internal network, and returns the result obtained from that server to the client that made the request. To the outside, the proxy server appears to be the origin server itself; in this role it is a reverse proxy server.

 

Start one Tomcat on 127.0.0.1:8080 and another on 127.0.0.1:8081.

Use nginx as a reverse proxy so that 8080.sea.com goes directly to 127.0.0.1:8080 and 8081.shan.com goes to 127.0.0.1:8081.

Add to the hosts file:

127.0.0.1 8080.sea.com

127.0.0.1 8081.shan.com

 

 

nginx.conf configuration:

server {
        listen       80;
        server_name  8080.sea.com;
        location / {
            proxy_pass  http://127.0.0.1:8080;
            index  index.html index.htm;
        }
    }
     server {
        listen       80;
        server_name  8081.shan.com;
        location / {
            proxy_pass  http://127.0.0.1:8081;
            index  index.html index.htm;
        }
    }

 

 

 

Two: Load Balancing

  

1. Round robin (default)
Each request is assigned to a different backend server in turn, in time order; if a backend server goes down, it is removed automatically.
upstream backserver {
    server 192.168.0.14;
    server 192.168.0.15;
}
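The round-robin behavior can be modeled in a few lines of Python (an illustrative sketch only; nginx implements this internally, and the backend list here mirrors the upstream block above):

```python
from itertools import cycle

# Backends from the upstream block above.
backends = ["192.168.0.14", "192.168.0.15"]

# Round robin: hand requests to each backend in turn, wrapping around.
pool = cycle(backends)

def pick_backend():
    return next(pool)
```

Four consecutive calls alternate 14, 15, 14, 15; a fifth request starts the cycle again.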

2. Weight
Requests are distributed in proportion to each server's weight; the access rate is proportional to the weight, which is useful when the backend servers have uneven performance.
upstream backserver {
    server 192.168.0.14 weight=10;
    server 192.168.0.15 weight=10;
}
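nginx implements a "smooth" weighted round robin, so a heavier server is not hit many times in a row. A simplified Python model of that idea (server names and weights here are illustrative):

```python
def smooth_weighted_rr(weights, n):
    """Pick n backends using smooth weighted round robin.

    weights: dict mapping server name -> weight, as in the
    upstream block above.
    """
    current = {s: 0 for s in weights}
    total = sum(weights.values())
    picks = []
    for _ in range(n):
        # Every server earns credit equal to its weight...
        for s, w in weights.items():
            current[s] += w
        # ...the richest server wins this request and pays back the total.
        best = max(current, key=current.get)
        current[best] -= total
        picks.append(best)
    return picks
```

With weights 2 and 1, server "a" receives two of every three requests; with equal weights the result degenerates to plain round robin.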

3. IP binding (ip_hash)
Each request is assigned according to a hash of the client IP, so every visitor consistently reaches the same backend server; this can solve the session-sharing problem.
upstream backserver {
    ip_hash;
    server 192.168.0.14:88;
    server 192.168.0.15:80;
}
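The effect of ip_hash can be sketched in Python: hash the client address and map it onto the server list, so the same visitor always lands on the same backend. (Real nginx hashes only the first three octets of an IPv4 address; this sketch hashes the whole string.)

```python
import hashlib

# Backends from the ip_hash upstream block above.
backends = ["192.168.0.14:88", "192.168.0.15:80"]

def pick_by_ip(client_ip):
    # A stable hash of the client IP chooses a fixed backend,
    # which keeps a visitor's session on one server.
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]
```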

4. fair (third party)
Requests are assigned according to the backend server's response time; servers with shorter response times are served first.
upstream backserver {
    server server1;
    server server2;
    fair;
}
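The fair module is third-party and its internals are more involved, but the core idea reduces to "prefer the backend that is currently answering fastest", which a minimal Python sketch can show (the response-time measurements here are hypothetical; the module tracks them itself):

```python
def pick_fair(response_times):
    # response_times: backend name -> recent average response time
    # in seconds. The fastest-responding backend gets the request.
    return min(response_times, key=response_times.get)
```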

5. url_hash (third party)
Requests are assigned according to a hash of the accessed URL, so each URL is directed to the same backend server; this is effective when the backend servers are caches.
upstream backserver {
    server squid1:3128;
    server squid2:3128;
    hash $request_uri;
    hash_method crc32;
}
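Likewise, the url_hash distribution with hash_method crc32 can be modeled directly, since Python ships a CRC32 implementation:

```python
import binascii

# Backends from the url_hash upstream block above.
backends = ["squid1:3128", "squid2:3128"]

def pick_by_url(request_uri):
    # CRC32 of the URI, mapped onto the backend list: the same URL
    # always goes to the same cache server, so its cache stays warm.
    return backends[binascii.crc32(request_uri.encode()) % len(backends)]
```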

 

 

 

Configuration code:

upstream backserver {
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;
}

server {
    listen       80;
    server_name  www.zyhome.com;
    location / {
        proxy_pass  http://backserver;
        index  index.html index.htm;
    }
}

Failover configuration: rules for taking a downed server out of the rotation

upstream backserver 
  {
     server 127.0.0.1:8080;
     server 127.0.0.1:8081;
    }

 

server {
        listen       80;
        server_name  www.sea.com;
        location / {
                 proxy_pass  http://backserver;
                 index  index.html index.htm;
                 proxy_connect_timeout 1;
                 proxy_send_timeout 1;
                 proxy_read_timeout 1;
            }     
        }

 

 

Three: Configuring DDoS Protection


Limit the number of requests

With Nginx or Nginx Plus you can limit connection requests to a range that is reasonable for real users. For example, if you decide a normal user can request the login page once every two seconds, you can configure Nginx to accept one request every two seconds from each client IP (approximately 30 requests per minute).

limit_req_zone $binary_remote_addr zone=one:10m rate=30r/m;

server {

...

location /login.html {

limit_req zone=one;

...

}

}

 

The `limit_req_zone` directive configures a shared memory zone named `one` that stores the request state for a particular key, in this example the client IP (`$binary_remote_addr`). The `limit_req` directive inside the `location /login.html` block references the shared memory zone `one` to limit access to that page.
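limit_req implements a leaky-bucket limiter. A simplified Python model of the no-burst case (one request allowed per 1/rate seconds per client key; timestamps are passed in explicitly for clarity):

```python
class RateLimiter:
    """Simplified model of nginx limit_req without a burst queue."""

    def __init__(self, rate_per_sec):
        self.interval = 1.0 / rate_per_sec
        self.last_accepted = {}  # client key -> time of last accepted request

    def allow(self, key, now):
        last = self.last_accepted.get(key)
        if last is None or now - last >= self.interval:
            self.last_accepted[key] = now
            return True
        # nginx would answer such a request with HTTP 503 by default.
        return False
```

For example, at 0.5 requests per second (one every two seconds), a client that retries after one second is rejected but succeeds after two; a different client IP is tracked independently.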

