How does Nginx achieve load balancing, and what strategies does it offer?

What is load balancing

The more traffic a server handles per unit of time, the greater the pressure on it; once the load grows large enough to exceed its capacity, the server will crash. To avoid crashes and give users a better experience, we share that pressure across servers through load balancing.

We can build many servers into a server cluster. When a user visits the site, the request first reaches an intermediate server, which selects a lightly loaded server in the cluster and forwards the request to it. Because this happens for every access, the pressure on each server in the cluster stays roughly balanced, sharing the load and preventing any single server from crashing.

 

Load balancing is implemented on top of the reverse-proxy principle.

Common load-balancing strategies

1. Round robin (the default): each request is assigned to a different back-end server in turn, in chronological order; if a back-end server goes down, it is removed automatically.

upstream backserver {
    server 192.168.0.14;
    server 192.168.0.15;
}
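To put this upstream into use, requests reaching Nginx are forwarded to the group with proxy_pass; a minimal sketch (the listen port here is an assumption for illustration):

```nginx
server {
    listen 80;
    location / {
        # forward every request to the round-robin group above
        proxy_pass http://backserver;
    }
}
```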

 

2. weight 
Requests are distributed in proportion to the configured weights; this is for situations where the back-end servers have uneven performance.

upstream backserver {
    server 192.168.0.14 weight=3;
    server 192.168.0.15 weight=7;
}

 

The higher the weight, the greater the probability of being selected; in the example above, 30% and 70% respectively.

 

3. The strategies described above share a problem: in a load-balanced system, each request may be routed to a different server in the cluster. If a user logs in on one server and their next request lands on another server, the login information is missing there, which is clearly unacceptable.

We can use the ip_hash directive to solve this problem: once a client has visited a particular server, subsequent requests from that client are routed by a hashing algorithm to the same server automatically.

Each request is assigned according to the hash of the client IP, so each visitor always reaches the same fixed back-end server, which solves the session problem.

upstream backserver {
    ip_hash;
    server 192.168.0.14:88;
    server 192.168.0.15:80;
}
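One caveat with ip_hash: if a server needs to be taken out of rotation temporarily, it should be marked with the down parameter rather than deleted, so that the hash mapping for the remaining clients is preserved. A sketch:

```nginx
upstream backserver {
    ip_hash;
    server 192.168.0.14:88;
    # temporarily out of service; keeps the hash distribution stable
    server 192.168.0.15:80 down;
}
```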

 

4. fair (third party) 
Requests are assigned according to the response time of the back-end servers; servers with shorter response times are served first.

upstream backserver {
    server server1;
    server server2;
    fair;
}

 

5. url_hash (third party) 
Requests are distributed according to the hash of the requested URL, so each URL is directed to the same back-end server. This is effective when the back-end servers are caches.

upstream backserver {
    server squid1:3128;
    server squid2:3128;
    hash $request_uri;
    hash_method crc32;
}
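The hash_method directive above comes from the old third-party upstream_hash module. Since Nginx 1.7.2 the stock ngx_http_upstream_module provides a built-in hash directive, so the same effect can be had without a third-party module; a sketch:

```nginx
upstream backserver {
    # built-in consistent hashing on the request URI (nginx >= 1.7.2)
    hash $request_uri consistent;
    server squid1:3128;
    server squid2:3128;
}
```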

 

The status of each server can be set with these parameters:

1. down: the server does not participate in load balancing.
2. weight: defaults to 1; the larger the weight, the larger the share of the load.
3. max_fails: the number of failed requests allowed, 1 by default. When the maximum is exceeded, the error defined by the proxy_next_upstream module is returned.
4. fail_timeout: the pause time after max_fails failures.
5. backup: requests go to the backup machine only when all the other machines are down or busy, so this machine carries the lightest load.
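A hypothetical upstream combining these parameters might look like the following (the IP addresses continue the article's earlier examples and are assumptions):

```nginx
upstream backserver {
    server 192.168.0.14 weight=2 max_fails=3 fail_timeout=30s;
    server 192.168.0.15 weight=1;
    # receives requests only when the servers above are unavailable
    server 192.168.0.16 backup;
    # taken out of rotation entirely
    server 192.168.0.17 down;
}
```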

Example:

#user  nobody;
worker_processes  4;
events {
    # maximum number of concurrent connections
    worker_connections  1024;
}
http {
    # list of candidate servers
    upstream myproject {
        # the ip_hash directive routes the same user to the same server
        ip_hash;
        server 125.219.42.4 fail_timeout=60s;
        server 172.31.2.183;
    }

    server {
        # listening port
        listen 80;
        # site root
        location / {
            # which upstream server list to use
            proxy_pass http://myproject;
        }
    }
}


Origin www.cnblogs.com/kinwing/p/11130281.html