Nginx --- consistent-hash reverse proxy and six upstream scheduling algorithms


nginx fair (third party): allocates requests according to the response time of each back-end server, giving priority to the server with the shortest response time.
   upstream web_pool {
       server 172.23.136.148;
       server 172.23.136.149;
       fair;
   }
Description: with fair, the upstream forwards each request to whichever of the two servers is currently responding fastest. This routes traffic along the fastest path and logically approximates the effect of "nearest node" load balancing.


Nginx's reverse proxy:
A reverse proxy is a proxy server that accepts connection requests from the Internet, forwards them to a server on the internal network, and returns the result obtained from that server to the client that requested the connection. To the outside world the proxy itself appears to be the server; this working mode is similar to the LVS-NAT model. haproxy also offers such a proxy function, but it lacks a large-scale caching capability.
A reverse proxy can also be understood as web server acceleration: a high-speed web cache server is placed between a busy web server and the external network to reduce the load on the actual web server. The reverse proxy provides this acceleration because all requests from the external network must pass through it. The reverse proxy server receives each client request, obtains the content from the origin server, returns it to the user, and saves a copy locally; when the same content is requested again, it serves the cached copy directly, which reduces the pressure on the back-end web server and improves response speed. So Nginx also has a caching function.

The workflow of a reverse proxy:
1) The user sends an access request via a domain name, and the domain name resolves to the IP address of the reverse proxy server;
2) The reverse proxy server receives the user's request;
3) The reverse proxy server checks whether its local cache holds the content the user requested, and if so returns it to the user directly;
4) If the content is not in the local cache, the reverse proxy server requests it from the back-end server on the user's behalf and sends it to the user; if the content is cacheable, it is also stored in the proxy server's local cache.

The advantages of a reverse proxy:
1) The real website servers are not visible to the outside world, which improves their security;
2) It saves limited IP address resources, since the back-end servers can use private IP addresses to communicate with the proxy server;
3) It accelerates access to the website and reduces the load on the real web servers.
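The cache-first workflow in steps 3) and 4) above can be sketched in a few lines. This is a minimal illustration, not how nginx is implemented; the function and parameter names (serve, fetch_origin, cacheable) are invented for the example.

```python
def serve(uri, cache, fetch_origin, cacheable=lambda uri: True):
    """Return the response body for `uri`, preferring the local cache."""
    if uri in cache:                 # step 3: cache hit, answer directly
        return cache[uri]
    body = fetch_origin(uri)         # step 4: miss, ask the back-end server
    if cacheable(uri):
        cache[uri] = body            # keep a copy for future identical requests
    return body
```

With a stub origin, the second request for the same URI is answered from the cache and never reaches the back end, which is exactly the load reduction described above.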

(1) Scheduling algorithms
Nginx's upstream directive specifies the back-end servers used by proxy_pass and fastcgi_pass, i.e. nginx's reverse proxy function, so the two can be combined to achieve load balancing. Nginx supports several scheduling algorithms:
1. Round robin (default)
Requests are distributed to the back-end servers one by one in order of arrival. If a back-end server is down, it is skipped and the request is allocated to the next monitored server. No state about current connections needs to be recorded, so this is stateless scheduling.
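The skip-on-failure behavior can be sketched as a pure function of the request sequence number, underlining that no per-connection state is kept. The names and the (addr, is_up) representation are assumptions for the illustration.

```python
def round_robin(servers, counter):
    """Pick the next live server for request number `counter`.

    servers: list of (addr, is_up) pairs; counter: request sequence number.
    """
    n = len(servers)
    for i in range(n):                       # try each server at most once
        addr, is_up = servers[(counter + i) % n]
        if is_up:
            return addr
    raise RuntimeError("no backend available")
```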
2. weight
weight adds weighting on top of round robin. A server's weight is proportional to the share of requests it receives, i.e. it expresses the capacity of the back-end server: a more powerful server gets a larger weight so that most requests are routed to it, letting each machine do what it is capable of.
For example, suppose back-end server 172.23.136.148 (E5520*2 CPUs, 8 GB RAM) is the more powerful machine, and out of every 15 requests, 10 should go to 172.23.136.148 and the remaining 5 to 172.23.136.149. This can be configured as:
upstream web_poll {
    server 172.23.136.148 weight=10;
    server 172.23.136.149 weight=5;
}
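A simple sketch of the weight=10 / weight=5 split above: out of every weight-sum requests, each backend receives a share proportional to its weight. This is a non-interleaved illustration only; nginx's actual implementation uses a smooth weighted variant, and the function name is invented here.

```python
def weighted_round_robin(servers, counter):
    """servers: list of (addr, weight); counter: request sequence number."""
    total = sum(w for _, w in servers)
    slot = counter % total
    for addr, weight in servers:
        if slot < weight:            # this backend owns the current slot
            return addr
        slot -= weight
```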







3. ip_hash
Each request is assigned according to the hash of the client's IP address: when a request arrives, its client IP is hashed to a value, and any later request whose client IP hashes to the same value is sent to the same back-end server. This algorithm solves the session-affinity problem, but it can distribute requests unevenly, i.e. balanced load is not guaranteed.
For example:
upstream web_pool {
    ip_hash;
    server 172.23.136.148:80;
    server 172.23.136.149:80;
}
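The session-affinity property can be sketched as a stable hash of the client IP modulo the server count. The md5 choice and names are illustrative assumptions, not nginx's internal hash.

```python
import hashlib

def ip_hash(client_ip, servers):
    """Map a client IP to a fixed backend via a stable hash."""
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]
```

Because the hash is deterministic, the same client IP always lands on the same backend, which is what preserves sessions.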
4. fair (third party)
Requests are allocated according to the response time of the back-end servers; the server with the shortest response time is given priority.
upstream web_pool {
    server 172.23.136.148;
    server 172.23.136.149;
    fair;
}
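The selection rule of fair can be sketched as "pick the backend whose recent responses were fastest". The data shape (a map from server to observed response times) and the name pick_fastest are assumptions for the illustration; the real module tracks timings internally.

```python
def pick_fastest(times):
    """times: dict mapping server -> list of observed response times (seconds)."""
    return min(times, key=lambda s: sum(times[s]) / len(times[s]))
```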
5. url_hash (third party)
Requests are allocated by the hash of the requested URL, so that each URL is directed to the same back-end server; this is especially effective when the back-end servers are caches.
Example: add a hash statement to the upstream block; the server statements must not carry weight or other parameters, and hash_method selects the hash algorithm used:
upstream web_pool {
    server squid1:3128;
    server squid2:3128;
    hash $request_uri;
    hash_method crc32;
}
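Matching the hash_method crc32 setting above, the mapping can be sketched with a crc32 of the URI modulo the server count, so a given URI always lands on the same cache. The function name is invented for the example.

```python
import zlib

def url_hash(uri, servers):
    """Map a request URI to a fixed backend via crc32."""
    return servers[zlib.crc32(uri.encode()) % len(servers)]
```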
6. A sixth method, added by Tengine
::: the server scheduling algorithm newly added in Tengine :::


  • Consistent hash module
Description
  • This module provides consistent hashing as a load-balancing algorithm.

  • Using client information (variables such as $ip, $uri, $args) as the key, the module maps each client to a back-end machine with a consistent-hash algorithm;

  • If a back-end machine goes down, its requests are migrated to the other machines;

  • server id field: if the id field is configured, it is used as the server's identifier; otherwise the server's IP and port identify it.

    The id field lets you set a server's identifier manually, so that if a machine's IP or port changes, the id can still denote that machine.

    Using the id field reduces hash churn when servers are added or removed;

  • server weight field: the server's weight, corresponding to its number of virtual nodes;

  • The algorithm: each server is virtualized into n nodes spread evenly over the hash ring. For each request, a hash value is computed from the configured parameter, the virtual node nearest to that hash is located on the ring, and the corresponding server handles the request;

  • Depending on the configured parameter, the module maps requests evenly onto the back-end machines in different ways, for example:

    consistent_hash $remote_addr: map by client IP;

    consistent_hash $request_uri: map by the URI the client requests;

    consistent_hash $args: map by the query arguments the client sends;

Example:
worker_processes  1;

http {
    upstream test {
        consistent_hash $request_uri;

        server 127.0.0.1:9001 id=1001 weight=3;
        server 127.0.0.1:9002 id=1002 weight=10;
        server 127.0.0.1:9003 id=1003 weight=20;
    }
}
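The ring-with-virtual-nodes algorithm described above can be sketched as follows. This is a minimal illustration under stated assumptions: md5 as the hash, 160 virtual nodes per unit of weight, and id strings matching the config above; Tengine's internal hashing and node placement differ in detail.

```python
import bisect
import hashlib

def _h(key):
    """Hash a string to a point on the ring (md5 is an assumption)."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, servers, vnodes_per_weight=160):
        # servers: list of (server_id, weight); weight scales the
        # number of virtual nodes placed on the ring.
        self._points = []
        for sid, weight in servers:
            for i in range(weight * vnodes_per_weight):
                self._points.append((_h("%s#%d" % (sid, i)), sid))
        self._points.sort()
        self._hashes = [p for p, _ in self._points]

    def lookup(self, key):
        # Walk clockwise to the nearest virtual node; wrap at the end.
        i = bisect.bisect(self._hashes, _h(key)) % len(self._points)
        return self._points[i][1]
```

Because only the virtual nodes of a removed server change owners, removing one machine remaps only the keys that hashed to its nodes, which is the reduced churn the id field is meant to preserve.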
