How Does Nginx Achieve Load Balancing Gracefully?

Preface

This article mainly introduces how Nginx implements load balancing.

Introduction to Load Balancing

Before introducing Nginx's implementation, let's briefly cover the classification of load balancing, which is mainly divided into hardware load balancing and software load balancing. Hardware load balancing is a combination of dedicated hardware and vendor software: the equipment manufacturer provides a complete, mature solution, such as F5, which is very reliable in terms of stability and security but considerably more expensive than a software solution. Software load balancing is implemented mainly by software such as Nginx, which distributes incoming requests across backend servers.

Simply put, load balancing splits a stream of requests and distributes them across different servers for processing. For example, suppose I have 3 servers, A, B, and C, fronted by Nginx with a round-robin strategy. If 9 requests arrive, they will be evenly distributed to servers A, B, and C, with each server handling 3 requests. In this way we can use a cluster of machines to reduce the pressure on any single server.

An example diagram of Nginx load balancing (image omitted).

Load balancing strategy

NGINX open source supports four load balancing methods, and NGINX Plus adds two more methods.

1. Round Robin: requests are distributed to the servers in turn; this is the default method.

nginx.conf configuration example:

upstream xuwujing {
   server www.panchengming.com;
   server www.panchengming2.com;
}

Note: The domain name above can also be replaced by IP.
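Round robin can also be biased with the `weight` parameter of the server directive. A sketch using the same placeholder domains; the values here are illustrative:

```nginx
upstream xuwujing {
   # weight defaults to 1; with weight=3 the first server receives
   # roughly 3 of every 4 requests under round robin
   server www.panchengming.com  weight=3;
   server www.panchengming2.com weight=1;
}
```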

2. Least Connections: the request is sent to the server with the fewest active connections, with server weights also taken into account.

nginx.conf configuration example:

upstream xuwujing {
    least_conn;
    server www.panchengming.com;
    server www.panchengming2.com;
}

3. IP Hash: the server to which a request is sent is determined from the client's IP address. In this case, the first three octets of an IPv4 address, or the entire IPv6 address, are used to calculate the hash value. This method guarantees that requests from the same address reach the same server, unless that server is unavailable.

upstream xuwujing {
     ip_hash;
     server www.panchengming.com;
     server www.panchengming2.com;
}
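If one of the servers needs to be taken out of rotation temporarily, the nginx documentation recommends marking it with the `down` parameter so that the hash distribution of the remaining clients is preserved. A sketch with the same placeholder domains:

```nginx
upstream xuwujing {
     ip_hash;
     server www.panchengming.com;
     # 'down' keeps this server in the hash ring, so clients mapped
     # to the other server are not reshuffled when it is removed
     server www.panchengming2.com down;
}
```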

4. Generic Hash: the server to which a request is sent is determined from a user-defined key, which can be a text string, a variable, or a combination of the two.

 upstream xuwujing {
     hash $request_uri consistent;
     server www.panchengming.com;
        server www.panchengming2.com;
 }
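The key can also combine several variables. As an illustrative sketch (not from the original configuration), hashing on the scheme plus the URI pins each URI to one server, and the `consistent` parameter enables ketama consistent hashing so that adding or removing a server remaps only a small share of keys:

```nginx
upstream xuwujing {
     # combined key, e.g. "http/some/path"
     hash $scheme$request_uri consistent;
     server www.panchengming.com;
     server www.panchengming2.com;
}
```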

5. Least Time (NGINX Plus only): for each request, NGINX Plus selects the server with the lowest average latency and the fewest active connections, where the lowest average latency is calculated from one of the following parameters of the least_time directive:

  • header: the time to receive the first byte from the server.
  • last_byte: the time to receive the full response from the server.
  • last_byte inflight: the time to receive the full response from the server, taking incomplete requests into account.

upstream xuwujing {
    least_time header;
    server www.panchengming.com;
    server www.panchengming2.com;
}

6. Random: each request is passed to a randomly selected server. If the two parameter is specified, NGINX first randomly selects two servers taking server weights into account, and then chooses one of them using the specified method:

  • least_conn: the minimum number of active connections
  • least_time=header (NGINX Plus): The shortest average time to receive the response header from the server ($upstream_header_time).
  • least_time=last_byte (NGINX Plus): The shortest average time to receive a complete response from the server ($upstream_response_time).
nginx.conf configuration example:

upstream xuwujing {
    random two least_time=last_byte;
    server www.panchengming.com;
    server www.panchengming2.com;
}

Nginx+SpringBoot achieves load balancing

Environmental preparation

  • JDK 1.8 or later;
  • A working Nginx installation;

The project here uses my previous springboot project, SpringBoot's project address: https://github.com/xuwujing/springBoot-study/tree/master/springboot-thymeleaf

First, we download the project and run mvn clean package to package it into a jar file. Then put application.properties and the jar in a folder, copy that folder (copying is just for clarity here; you could instead change the port and restart the same jar), and modify the port in the copy's application.properties, for example to 8086.

Nginx configuration

We find the Nginx configuration file nginx.conf, located at nginx/conf/nginx.conf, and add the following configuration:

upstream pancm {
   server 127.0.0.1:8085;
   server 127.0.0.1:8086;
}
  • upstream pancm: defines an upstream server group; the name is up to you;
  • server: an IP:port pair or a domain name;

If you don't want to use the Round Robin strategy, you can also switch to another one.
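The server directive also accepts passive health-check parameters such as max_fails and fail_timeout. A sketch of how the upstream above could be hardened; the values are illustrative:

```nginx
upstream pancm {
   # after 3 failed attempts within 30s, the server is considered
   # unavailable for the following 30s
   server 127.0.0.1:8085 max_fails=3 fail_timeout=30s;
   server 127.0.0.1:8086 max_fails=3 fail_timeout=30s;
}
```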

Then add/modify the following configuration on the server:

 server {
        listen       80;
        server_name  127.0.0.1;


        location / {
            root   html;
            proxy_pass http://pancm;
            proxy_connect_timeout 3s;
            proxy_read_timeout 5s;
            proxy_send_timeout 3s; 
            index  index.html index.htm;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }

Configuration instructions:

  • server: a virtual host; multiple server blocks can be configured in one http block;
  • listen: the port Nginx listens on (80 is the default);
  • server_name: the address of the Nginx service; domain names can be used, with multiple names separated by spaces;
  • proxy_pass: the proxy target; normally set to the upstream name to achieve load balancing, though an IP can be configured directly to simply forward requests;

nginx.conf complete configuration:

events {
    worker_connections  1024;
}

error_log nginx-error.log info;
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;

    upstream pancm {
        server 127.0.0.1:8085;
        server 127.0.0.1:8086;
    }
    
    server {
        listen       80;
        server_name  127.0.0.1;


        location / {
            root   html;
            proxy_pass http://pancm;
            proxy_connect_timeout 3s;
            proxy_read_timeout 5s;
            proxy_send_timeout 3s; 
            index  index.html index.htm;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

Load balancing test

After completing the Nginx configuration, we start Nginx. On Linux, run /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf; if Nginx is already running, use /usr/local/nginx/sbin/nginx -s reload to hot-reload the configuration file. On Windows, double-click nginx.exe in the Nginx directory, or run start nginx from cmd; again, if Nginx is already running, nginx -s reload hot-reloads the configuration.

After Nginx has started, we start the downloaded Spring Boot project and its port-changed copy in turn, each with: java -jar springboot-jsp-thymeleaf.jar.

After both instances start successfully, we can access the service by entering the server's IP in a browser.

Sample screenshot (image omitted).
Note: I am testing on Windows here; the process on Linux is the same.

Then we refresh the interface a few times and check the console logs of the two instances (screenshot omitted). In the example, we made 4 requests, and they were evenly distributed across the two services, which shows that load balancing is working.

One precaution when using Nginx in practice: when learning and testing on the default port, load balancing generally works without problems, but in a real project with a login page and Nginx listening on a port other than 80, the login page may fail to redirect; while debugging you may see errors such as net::ERR_NAME_NOT_RESOLVED. The reason is that Nginx defaults to port 80, so redirects default to that port as well. In this case, add a proxy_set_header Host $host:port configuration under location, where the port matches the listen port.
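For example, if Nginx listens on 8080 instead of 80, the location block could be adjusted as sketched below; $server_port is the standard nginx variable holding the port that accepted the connection:

```nginx
server {
    listen      8080;
    server_name 127.0.0.1;

    location / {
        proxy_pass http://pancm;
        # make redirects issued by the backend keep the non-80 port
        proxy_set_header Host $host:$server_port;
    }
}
```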


Origin blog.csdn.net/weixin_49527334/article/details/113472215