Configuring Nginx Proxying and Load Balancing

1. The proxy module

ngx_http_proxy_module

2. Proxy configuration

Proxying
Syntax: 	proxy_pass URL;				   # URL of the backend server being proxied
Default: 	—
Context: 	location, if in location, limit_except


Header information
Syntax: 	proxy_set_header field value;
Default: 	proxy_set_header Host $proxy_host;		# set the real client address
            proxy_set_header Connection close;
Context: 	http, server, location

Timeouts
Syntax: 	proxy_connect_timeout time;
Default: 	proxy_connect_timeout 60s;				# connection timeout
Context: 	http, server, location

Syntax: 	proxy_read_timeout time;
Default: 	proxy_read_timeout 60s;
Context: 	http, server, location

Syntax: 	proxy_send_timeout time; # timeout for the whole process of the nginx worker sending the request to the backend
Default: 	proxy_send_timeout 60s;
Context: 	http, server, location

3. Enabling the nginx proxy
nginx-1 serves the site content (as the web server)
IP: 192.168.62.157

server {
        listen 80;
        server_name localhost;
        location / {
               root /home/www/html;
               index index.html index.htm;
        }
}

nginx-2 runs the proxy
IP of nginx-2: 192.168.62.159

server {
    listen       80;
    server_name  localhost;

    location / {
    proxy_pass http://192.168.62.157:80;
#    proxy_redirect default;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    proxy_connect_timeout 30;
    proxy_send_timeout 60;
    proxy_read_timeout 60;
    }
}

Proxy directives in detail

proxy_pass: the address of the real server; it can be an IP, a domain name, or a URL
#proxy_redirect: if the real server uses real-IP:non-default-port, rewrite it to IP:default-port (optional)
proxy_set_header: redefine or add request headers sent to the backend server
proxy_set_header X-Real-IP: pass the client's real address (otherwise the logs show the proxy visiting the site)
proxy_set_header X-Forwarded-For: record the chain of proxy addresses

proxy_connect_timeout: timeout for connecting to the backend server, i.e. how long to wait for a response after initiating the three-way handshake
proxy_send_timeout: time for the backend server to return data; the backend must finish transmitting all its data within this window
proxy_read_timeout: timeout for nginx receiving data from the upstream (real) server; defaults to 60s. If not a single byte arrives within 60 consecutive seconds, the connection is closed (relevant for long-lived connections)

192.168.62.159 is the proxy server address
192.168.62.1 is the client address

The visit succeeds, and the log records both the client IP and the proxy server IP.

The role of load balancing

Suppose your nginx proxies two web servers and balances them with the default round-robin algorithm. If you shut down the web service on one machine, nginx will still distribute requests to the server that can no longer serve the site. If the connection response time is long, the client's page waits a long time for a reply and the user experience suffers. How do we avoid this?

For example, if web2 goes down in such a setup, nginx first sends a request to web1, but when poorly configured it will continue to distribute the next request to web2 and wait for a response until the timeout expires, and only then redistribute the request to web1. If that response timeout is long, users wait correspondingly longer.
2. upstream configuration

First, the upstream block: it defines a named group of backend (proxied) server addresses, on top of which a load-balancing algorithm is configured. The server addresses can be written in two ways.

upstream youngfitapp {
      server 192.168.62.157:8080;
      server 192.168.62.158:8080;
}
 server {
        listen 80;
        server_name localhost;
        location / {         
           proxy_pass  http://youngfitapp;
        }
}
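
The snippet above uses the IP:port form; the second way of writing a server address is by hostname. A sketch (the hostnames here are placeholders, not from the original):

upstream youngfitapp {
      server web1.example.com:8080;
      server web2.example.com:8080;
}

Nginx resolves these names at startup, so the hosts must be resolvable when the configuration is loaded.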
3. Load-balancing algorithms

upstream supports four load-balancing scheduling algorithms:

A. Round-robin (default): requests are assigned one by one, in chronological order, to the different backend servers;

B. ip_hash: each request is assigned according to a hash of the client IP, so a client with a fixed IP always reaches the same backend server. This guarantees that requests from the same IP land on the same machine, which can solve session problems.

C. url_hash: requests are assigned according to a hash of the requested URL, so each URL is directed to the same backend server. This improves efficiency when the backend servers cache content.

D. fair: smarter than the algorithms above. It balances load based on page size and load duration, i.e. it assigns requests according to backend response time, preferring servers that respond quickly. Nginx itself does not support fair; to use this scheduling algorithm you must download the upstream_fair module for nginx.
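
A note on url_hash: stock nginx (1.7.2 and later) ships a built-in hash directive that achieves the same effect when keyed on $request_uri, so a separate url_hash module is not required. A sketch reusing the addresses from the surrounding examples:

upstream myweb {
      hash $request_uri consistent;
      server 192.168.62.157:8080;
      server 192.168.62.158:8080;
}

The consistent parameter enables ketama consistent hashing, so adding or removing a server remaps only a fraction of the URLs.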

4. Configuration examples

1. Hot standby: with two servers, when one fails, the second is brought in to provide service. The order in which servers handle requests: AAAAAA, then A suddenly goes down, then BBBBBBBBBBBBBB...

upstream myweb { 
      server 192.168.62.157:8080; 
      server 192.168.62.158:8080 backup;  # hot standby
}

2. Round-robin: nginx's default; each server's weight defaults to 1. The order in which servers handle requests: ABABABABAB...

upstream myweb { 
      server 192.168.62.157:8080; 
      server 192.168.62.158:8080;      
}

3. Weighted round-robin: requests are distributed to the servers in different quantities according to their configured weights. If no weight is set, it defaults to 1. With server A at weight 1 and server B at weight 2, the request order is: ABBABBABBABBABB...
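
The post omits the config for this example; a sketch matching the ABB pattern (weights 1 and 2, reusing the addresses from the neighbouring examples) would be:

upstream myweb {
      server 192.168.62.157:8080 weight=1;
      server 192.168.62.158:8080 weight=2;
}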

4. ip_hash: nginx sends requests from the same client IP to the same server.

upstream myweb {
      ip_hash;
      server 192.168.62.157:8080;
      server 192.168.62.158:8080;
}

5. Server state parameters for load balancing

  • down: the server temporarily does not participate in load balancing.
  • backup: a reserved backup machine. It only receives requests when all the other non-backup machines fail or are busy, so it carries the lightest load.
  • max_fails: the number of allowed failed requests; defaults to 1. When the maximum is exceeded, an error is returned.
  • fail_timeout: the number of seconds the server is taken out of service after max_fails failures. max_fails is used together with fail_timeout.
 upstream myweb { 
      server 192.168.62.157:8080 weight=2 max_fails=2 fail_timeout=2;
      server 192.168.62.158:8080 weight=1 max_fails=2 fail_timeout=1;    
 }

Nginx 1.9.0 added the stream module, which implements forwarding, proxying and load balancing for layer-4 protocols (the network and transport layers). The stream module is used much like http: it lets us configure a listener for a TCP or UDP protocol, forward requests with proxy_pass, and add multiple backend services to an upstream for load balancing.

# layer-4 TCP load balancing
stream {
        upstream myweb {
                hash $remote_addr consistent;
                server 172.17.14.2:8080;
                server 172.17.14.3:8080;
        }
        server {
            listen 80;
            proxy_connect_timeout 10s;
            proxy_timeout 30s;
            proxy_pass myweb;
        }
}
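
The stream module handles UDP the same way; a hedged sketch of a layer-4 UDP load balancer (assuming a DNS service on port 53 behind the same backend IPs; shown as a full stream block for clarity, but the upstream and server would go inside the existing stream block):

stream {
        upstream dns {
                server 172.17.14.2:53;
                server 172.17.14.3:53;
        }
        server {
            listen 53 udp;
            proxy_timeout 5s;
            proxy_pass dns;
        }
}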
Nginx session persistence

Session persistence in nginx is mainly implemented in the following ways.

1、ip_hash

ip_hash uses a source-address hashing algorithm: requests from the same client are always sent to the same backend server, unless that server is unavailable.

ip_hash syntax:

upstream backend {
    ip_hash;
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com down;
}

ip_hash is simple to use, but it has problems:
when a backend server goes down, its sessions are lost;
requests from the same client always go to the same backend server, which can cause load imbalance.

2. sticky_cookie_insert

sticky_cookie_insert enables session affinity, which causes requests from the same client to be passed to the same server within a group of servers. The difference from ip_hash is that it identifies the client by a cookie rather than by IP, avoiding the load imbalance ip_hash suffers when many clients share one IP. (A third-party module is required.)

The sticky module (this can also be understood as routing based on the domain name being accessed)

Syntax:

upstream backend {
    server backend1.example.com;
    server backend2.example.com;
    sticky_cookie_insert srv_id expires=1h domain=3evip.cn path=/;
}  # requests to the domain 3evip.cn are forwarded to the two servers above

server {
    listen 80;
    server_name 3evip.cn;
    location / {
		proxy_pass http://backend;
    }
}

Description:
expires: how long the browser keeps the cookie
domain: the domain of the cookie
path: the path defined for the cookie

3. jvm_route

How jvm_route works

  1. A first request arrives with no session information; jvm_route sends it to one of the tomcat instances by round-robin.

  2. That tomcat adds session information and returns the response to the client.

  3. When the user sends another request, jvm_route sees the backend server's name in the session and passes the request to the corresponding server.
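
The post gives no snippet for jvm_route; a sketch assuming the third-party nginx-upstream-jvm-route module, reusing the addresses from the earlier examples (each srun_id must match the jvmRoute attribute configured in the corresponding Tomcat's server.xml):

upstream tomcats {
    jvm_route $cookie_JSESSIONID;
    server 192.168.62.157:8080 srun_id=tomcat1;
    server 192.168.62.158:8080 srun_id=tomcat2;
}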


Origin blog.csdn.net/wx912820/article/details/104856408