Packet capture analysis of haproxy and nginx load balancing

To clear up some doubts about these load balancing tools, I ran a packet capture analysis of haproxy and nginx load balancing and am sharing the process here. Let's start with the conclusion from the haproxy captures: after one of haproxy's load-balanced backends goes down, any request that arrives before the next health check is still forwarded to the dead backend, and that request is lost.

The haproxy load balancing experiment went as follows:

1: First look at the haproxy configuration. inter 20000 tells haproxy to health-check each backend every 20000 ms (20 s); the deliberately long interval makes haproxy's detection mechanism easier to see in the packet captures.

listen test9090
        bind 127.0.0.1:9090
        mode tcp
        server localhost90 127.0.0.1:90 check inter 20000
        server localhost91 127.0.0.1:91 check inter 20000
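
Before capturing anything, it is worth sanity-checking the configuration and confirming that haproxy is actually listening. A minimal sketch, assuming a package install with the configuration at /etc/haproxy/haproxy.cfg (adjust paths and the service manager to your system):

# validate the configuration file without starting haproxy
haproxy -c -f /etc/haproxy/haproxy.cfg
# reload haproxy and confirm the 9090 front end is up
systemctl reload haproxy
ss -lnt | grep 9090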

2: I use nginx as the test backend; its configuration is below. Create an index.html in /var/www/html/ for testing.

server {
    listen 90;
    listen 91;
    location / {
        root /var/www/html;
    }
}
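
To have something to serve, create the test page and reload nginx. A quick sketch (the page content is arbitrary):

# create a test page for both listeners to serve
echo 'hello from backend' > /var/www/html/index.html
# verify the configuration and apply it
nginx -t && nginx -s reload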

First test with curl 127.0.0.1:9090, and open two windows on the machine to check whether the packet captures look normal and load balancing is working. The commands run in the two windows are:

tcpdump -i lo -nn 'port 90'
tcpdump -i lo -nn 'port 91'
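
A single curl exercises only one backend per round-robin turn, so it helps to generate a small burst of requests while both capture windows are running. A quick sketch:

# fire 10 requests at the haproxy front end; with round-robin
# load balancing, both capture windows should show traffic
for i in $(seq 1 10); do curl -s 127.0.0.1:9090 > /dev/null; done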

Running these captures shows traffic on both ports 90 and 91, proving that both nginx listeners receive requests. Packet captures give more detail than reading logs, so I will stick with packet captures for the analysis.

3: Capture packets to observe haproxy's health check mechanism

Because we configured inter 20000, haproxy checks each backend once every 20 s, and the packet capture indeed shows a check every 20 s. Note that these checks run even when no client requests are coming in; in other words, request handling and health checking are independent of each other.
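
To measure the interval precisely, add wall-clock timestamps to the capture and watch for haproxy's periodic check connections. For example:

# -tttt prints full timestamps, so the 20 s gap between
# haproxy's health-check connections is easy to read off
tcpdump -i lo -nn -tttt 'port 90 or port 91'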

4: Simulate an online failure: bring down nginx on port 91

Remove the listen 91 line from the nginx configuration and reload; you will find that any front-end request distributed to port 91 now fails. The packet capture shows that haproxy needs three consecutive failed checks before it removes the faulty backend. Since we check every 20 s, it can take up to 60 s to detect the failure and remove the server. If 10,000 requests arrive in those 60 s, roughly 5,000 of them are lost. A production deployment would certainly not check once every 20 s; in general a failed backend is cut off within about 3 s.
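
For reference, the kind of production tuning mentioned above might look like the following sketch. inter, fall, and rise are standard haproxy server options; the values here are illustrative, not a recommendation:

server localhost90 127.0.0.1:90 check inter 1000 fall 3 rise 2
server localhost91 127.0.0.1:91 check inter 1000 fall 3 rise 2

With a 1 s interval and fall 3, a dead backend is removed after roughly 3 s, matching the cutoff time described above.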

 

Now for the nginx load balancing experiment:

1: First look at the reverse proxy load balancing configuration of nginx, as follows:

upstream backend {
    server 127.0.0.1:90 weight=1 max_fails=3 fail_timeout=30;
    server 127.0.0.1:91 weight=5 max_fails=3 fail_timeout=30;
}

This upstream distributes requests to the backends on ports 90 and 91; we will then simulate a failure against it.

server {
    listen 9090;
    location / {
        proxy_pass http://backend;
    }
}

The front end still listens on 9090 and forwards requests to ports 90 and 91.
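
nginx's retry behavior when a backend fails can also be tuned explicitly. The sketch below uses the standard proxy_next_upstream and proxy_connect_timeout directives; the values are illustrative:

server {
    listen 9090;
    location / {
        proxy_pass http://backend;
        # retry on another upstream when the chosen one errors
        # out or times out (this matches the nginx default)
        proxy_next_upstream error timeout;
        # fail fast on a dead backend instead of waiting
        proxy_connect_timeout 2s;
    }
}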

2: I still use nginx as the test backend; the configuration is the same as before. Create an index.html in /var/www/html/ for testing.

server {
    listen 90;
    listen 91;
    location / {
        root /var/www/html;
    }
}

Capturing packets again shows traffic on both ports 90 and 91.

3: Capture packets to observe the health check behavior of nginx reverse proxy load balancing

Capture packets and you will find that nginx sends nothing to ports 90 and 91 while there are no client requests. In other words, when there is no traffic, nginx does not probe the backend proxy servers at all: its health checking is passive.
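
This is easy to verify: run a capture with no client traffic and confirm that nothing shows up. For example:

# with no client requests this capture should stay silent,
# confirming nginx performs no active health checks
timeout 60 tcpdump -i lo -nn 'port 90 or port 91'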

4: Simulate an online failure: bring down nginx on port 91

Remove the listen 91 line from the nginx configuration and reload; front-end access is not affected at all. The packet capture shows packets still being sent to port 91, but since port 91 returns no data, nginx retries the request against port 90. In other words, if the backend on port 91 dies, front-end requests are unaffected, as long as the remaining capacity can handle the concurrency.
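
To quantify "not affected at all", fire a batch of requests through the front end and tally the HTTP status codes. A quick sketch:

# print one status code per request; with nginx retrying
# failed backends, every request should come back 200
for i in $(seq 1 100); do
    curl -s -o /dev/null -w '%{http_code}\n' 127.0.0.1:9090
done | sort | uniq -c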

 

To sum up:

haproxy keeps health-checking the backend servers even when no requests are coming in, and once it detects a fault it removes the backend before further requests arrive. However, a request that arrives during the detection window will fail: haproxy forwards each request to exactly one backend server and does not retry it elsewhere.

Nginx does not continuously check the health of the backend servers. When a request comes in, it is distributed as usual, but if no data can be fetched from the chosen backend, nginx re-issues the request to a healthy machine until it succeeds. That is, if a forwarded request fails on one backend, nginx transfers it to another server. I also tested Squid and found that its behavior is very similar to nginx's reverse proxy load balancing.

 

So with haproxy as the front-end load balancer, taking a backend server down for maintenance will definitely affect users under high concurrency. With nginx as the front-end load balancer, as long as the remaining servers can handle the concurrency, taking a few backends offline does not affect users. As for the relative performance of the two, that remains to be studied.

 
