[Nginx] Series: Load Balancing

Table of contents

1. Overview of Nginx

1.1 Overview of load balancing

1.2 The role of load balancing

1.3 Layer 4/7 load balancing

1.3.1 Introduction to network models

1.3.2 Comparison of Layer 4 and Layer 7 Load Balancing

1.3.3 Nginx seven-layer load balancing implementation

1.4 Nginx load balancing configuration

1.5 Nginx load balancing status

1.6 Nginx load balancing strategy

2. Load balancing in practice

2.1 Test server

2.2 Basic round robin

2.2.1 Expected effect

2.2.2 Preparations

2.2.3 Implementation

2.3 weight (weighted round robin)

2.3.1 Expected effect

2.3.2 Preparations

2.3.3 Implementation

2.4 ip_hash

2.5 url_hash

2.6 fair

3. Alibaba Cloud Classic Load Balancer (CLB)

3.1 Overview

3.2 CLB Composition

3.3 Product Advantages

3.4 Configuring SLB in the Alibaba Cloud console


Nginx Series: Getting Started and Installation

Nginx Series: Reverse Proxy


1. Overview of Nginx


With the rapid development of society and the vigorous growth of informatization and digitalization, the number of users accessing services has increased sharply. A single hardware device is increasingly unable to handle the volume of network requests under high concurrency, and so load balancing (LB) applications were born.

1.1 Overview of load balancing


Load balancing means that Nginx distributes requests among upstream application servers. Even if one server goes down, request processing is unaffected; and when the existing application servers cannot keep up, capacity can be expanded at any time.

Application cluster: the same application deployed on multiple machines, forming a cluster that receives requests distributed by the load balancer, processes them, and returns response data.

Load balancer: distributes user access requests to a server in the cluster according to the configured load balancing algorithm.

1.2 The role of load balancing


1. Solve the high concurrency pressure of the server and improve the processing performance of the application.

2. Provide failover to achieve high service availability and reliability.

3. Enhance the scalability of the website by increasing or reducing the number of servers.

4. Filtering on the load balancer can improve the security of the system.

1.3 Layer 4/7 load balancing


1.3.1 Introduction to network models


OSI (Open Systems Interconnection) is a network architecture defined by the International Organization for Standardization (ISO) that is not tied to any specific hardware, operating system, or vendor. The model divides the work of network communication into seven layers.

Load balancing is mainly divided into Layer 4 and Layer 7 load balancing, corresponding to the fourth and seventh layers of the OSI model.

Layer 4 load balancing works at the fourth layer of the OSI model, the transport layer. The transport layer carries only the TCP/UDP protocols, and these protocols include a source port and destination port in addition to the source IP and destination IP.

After receiving a client request, a Layer 4 load balancer forwards the traffic to an application server by modifying the address information (IP + port) of the packets.
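Although this article focuses on Nginx's Layer 7 proxying, Nginx can also perform Layer 4 load balancing through its stream module (available since 1.9.0, and only if Nginx was built with it). A minimal sketch with hypothetical backend addresses and ports:

```nginx
# Layer 4 (TCP) load balancing with the stream module.
# The stream block sits at the same level as the http block in nginx.conf.
stream {
    upstream tcp_backend {
        # hypothetical backend addresses, for illustration only
        server 192.168.2.211:3306;
        server 192.168.2.212:3306;
    }

    server {
        listen 3307;             # port the balancer accepts TCP connections on
        proxy_pass tcp_backend;  # forward raw TCP traffic; no HTTP parsing happens here
    }
}
```

Because the stream module never parses HTTP, only address information is available for routing decisions, which is exactly the Layer 4 limitation described above.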

Layer 7 load balancing works at the application layer of the OSI model, where many protocols live; HTTP, RADIUS, and DNS are commonly used examples. A Layer 7 balancer can distribute traffic based on these protocols, which carry a great deal of meaningful content. For example, when balancing web servers, in addition to balancing by IP and port, decisions can also be made based on the Layer 7 URL, browser type, or language.
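As a sketch of what such Layer 7 decisions can look like in Nginx (the upstream groups and backend ports here are hypothetical, chosen only for illustration), requests can be routed to different server pools by URL prefix or by the User-Agent header:

```nginx
http {
    upstream static_pool { server 192.168.2.211:8082; }
    upstream api_pool    { server 192.168.2.211:8083; }
    upstream mobile_pool { server 192.168.2.211:8084; }

    server {
        listen 80;

        # route by URL path: /api/ requests go to the API pool
        location /api/ {
            proxy_pass http://api_pool;
        }

        # route by browser type: mobile User-Agents go to a separate pool
        location / {
            if ($http_user_agent ~* "(android|iphone)") {
                proxy_pass http://mobile_pool;
            }
            proxy_pass http://static_pool;
        }
    }
}
```

A Layer 4 balancer cannot make either of these decisions, because the URL and User-Agent only exist after the HTTP request has been parsed.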

1.3.2 Comparison of Layer 4 and Layer 7 Load Balancing


(1) Intelligence

Since Layer 7 load balancing has all the functions of the seven-layer OSI model, it can respond to user needs more flexibly. In theory, a Layer 7 balancer can modify all requests between users and servers, for example adding information to headers, or classifying and forwarding by file type. A Layer 4 balancer only forwards based on network- and transport-layer information and cannot modify the content of user requests.

(2) Security

Since Layer 7 load balancing has all the functions of the OSI model, it can more easily resist attacks from the network; in principle, a Layer 4 balancer forwards the user's request directly to the back-end node and cannot itself fend off network attacks.

(3) Complexity

A Layer 4 architecture is generally simpler and easier to manage and troubleshoot; a Layer 7 architecture is more complex, usually has to be combined with Layer 4 balancing, and makes problems harder to locate.

(4) Efficiency

A Layer 4 balancer operates at a lower level and is usually more efficient, but its range of application is limited; a Layer 7 balancer consumes more resources and is in theory more capable than a Layer 4 one, and current implementations are mostly based on HTTP.

A common combination in production environments: Layer 4 load balancing (LVS) + Layer 7 load balancing (Nginx).


1.3.3 Nginx seven-layer load balancing implementation


Nginx implements Layer 7 load balancing with the proxy module's proxy_pass directive. Nginx's load balancing is built on top of its reverse proxying, distributing user requests to a group of upstream servers (a virtual service pool) according to the specified algorithm.

1.4 Nginx load balancing configuration


nginx.conf

   upstream myserver{
          server 192.168.2.211:8082 max_fails=1 fail_timeout=10s weight=1;
          server 192.168.2.211:8083;
    }
    server {
        listen       9001;
        server_name  www.kangll.com;

        location /edu/ {
            proxy_pass http://myserver;
            root  html;
        }
    }

1.5 Nginx load balancing status


In load-balancing scheduling, a proxied server can be in the following states:

down: the server temporarily does not participate in load balancing.

backup: marks the server as a backup; requests are sent to it only when the primary servers are down.

max_fails: the number of failed requests allowed within the window set by fail_timeout; if every request to the server fails within that window, the server is considered down.

fail_timeout: how long the server is paused after max_fails failures.

max_conns: limits the maximum number of connections the server will accept. The default is 0, meaning no limit; set it according to the concurrency the backend server can handle, to keep it from being overwhelmed.
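The states above can be combined in a single upstream block. A minimal sketch, reusing the test addresses from this article (the 8084 backup server is hypothetical, added only to illustrate the parameter):

```nginx
upstream myserver {
    # fail 3 times within 15s -> considered down; at most 100 concurrent connections
    server 192.168.2.211:8082 max_fails=3 fail_timeout=15s max_conns=100;
    server 192.168.2.211:8083 down;    # temporarily removed from rotation
    server 192.168.2.211:8084 backup;  # used only when the other servers are unavailable
}
```

Marking a server down instead of deleting its line makes it easy to bring it back later without touching the rest of the pool.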

1.6 Nginx load balancing strategy


The available strategies are:

Round robin (default): each request is assigned to a different backend server in turn, in chronological order; if a backend server goes down, it is automatically removed from rotation.

weight: sets the polling weight; the weight is proportional to the share of requests a server receives, which is useful when backend servers differ in performance. weight=1 sets the weight to 1; without the parameter, the default is 1.

ip_hash: each request is assigned according to the hash of the client IP, so a given visitor always reaches the same backend server; this solves the session-sharing problem.

fair: requests are allocated according to the backend servers' response times; servers with shorter response times are preferred.

url_hash: requests are assigned according to the hash of the accessed URL, so each URL is directed to the same backend server.


2. Load balancing in practice


2.1 Test server


192.168.2.211: Tomcat, port 8080

192.168.2.154: Nginx, port 80

2.2 Basic round robin


2.2.1 Expected effect

Visit kangll.com:9001/edu/a.html in the browser and observe the load balancing in action: requests are evenly distributed between 192.168.2.211:8082 and 192.168.2.211:8083.

2.2.2 Preparations

In the webapps directory of each of the two Tomcats, create an edu folder containing an a.html file. For the 8083 Tomcat, fill the file with "hello, 8083-Tomcat!"; for the 8082 Tomcat, fill it with "hello, 8082-Tomcat!".

2.2.3 Implementation

nginx.conf configuration

   upstream myserver{
          server 192.168.2.211:8082;
          server 192.168.2.211:8083;
    }
    server {
        listen       80;
        server_name  www.kangll.com;

        location / {
            proxy_pass http://192.168.2.211:8080;
            index  index.html index.htm index.jsp;
        }
    }
    server {
        listen       9001;
        server_name  www.kangll.com;

        location /edu/ {
            proxy_pass http://myserver;
            root  html;
        }
    }

Access kangll.com:9001/edu/a.html through a browser; refreshing the page yields different results on successive requests.

2.3 weight (weighted round robin)


weight=number sets the server's weight; the default is 1. The greater the weight, the higher the probability of the server being assigned a request. In practice the weight is tuned to each backend server's hardware, so this strategy suits environments where server hardware configurations differ considerably.

2.3.1 Expected effect

Visit www.kangll.com:9888/edu/a.html in the browser and observe the load balancing: requests are distributed across 192.168.2.211:8081, 8082, and 8083 in proportion to their weights.

2.3.2 Preparations

In the webapps directory of each of the three Tomcats, create an edu folder containing an a.html file: "hello, 8083-Tomcat!" for the 8083 Tomcat, "hello, 8082-Tomcat!" for 8082, and "hello, 8081-Tomcat!" for 8081.

2.3.3 Implementation

Configuration file nginx.conf

http {
   ...
    upstream myserver{
          server 192.168.2.211:8081 weight=10;
          server 192.168.2.211:8082 weight=5;
          server 192.168.2.211:8083 weight=5;
    }
 
    server {
        listen      9888;
        server_name www.kangll.com;

        location ~ / {
           # Address of the proxied server: a hostname, an IP, or an address plus port
            proxy_pass http://myserver;
            index a.html;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

}

Access http://www.kangll.com:9888/edu/a.html via a browser and refresh the page repeatedly; the visits are distributed roughly in the configured weight ratio of 10:5:5, i.e. about 2:1:1.

2.4 ip_hash


When load balancing across multiple dynamic application servers, the ip_hash directive uses a hash algorithm to pin requests from a given client IP to the same backend server. When a user from some IP logs in on backend web server A, subsequent requests to other URLs of the site are guaranteed to reach server A as well, which solves the session-sharing problem.

Typical example: when a user accesses the system for the first time, login authentication is required. The request is routed to, say, Tomcat1, and the login session is stored there. If the next operation is polled to Tomcat2, which holds no session information, the user is treated as not logged in and has to authenticate again. With multiple servers, the user may have to log in on the first visit to each one, which clearly hurts the user experience.

With IP-based routing, Nginx sends every request from the same client IP to the same Tomcat server, so the session-sharing problem does not arise.

Configuration file nginx.conf

http {
   ...
    upstream myserver{
          ip_hash;
          server 192.168.2.211:8081;
          server 192.168.2.211:8082;
          server 192.168.2.211:8083;
    }
 
    server {
        listen      9888;
        server_name www.kangll.com;

        location ~ / {
           # Address of the proxied server: a hostname, an IP, or an address plus port
            proxy_pass http://myserver;
            index a.html;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

}

Note: the ip_hash directive cannot guarantee an even load across backend servers; some backends may receive noticeably more requests than others. In older Nginx versions (before 1.3.1/1.2.2), server weights were ignored entirely when ip_hash was used.
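A related detail of Nginx's ip_hash behavior: if a server needs to be temporarily removed from an ip_hash group, it should be marked down rather than deleted from the configuration, so the hash mapping of the remaining client IPs stays stable. A sketch, reusing this article's test addresses:

```nginx
upstream myserver {
    ip_hash;
    server 192.168.2.211:8081;
    server 192.168.2.211:8082 down;  # temporarily removed; existing IP-to-server
                                     # assignments of the other clients are preserved
    server 192.168.2.211:8083;
}
```

Deleting the line instead would rehash all clients and scatter their sessions across different backends.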

2.5 url_hash


Requests are allocated according to the hash of the accessed URL, so each URL is directed to the same backend server. This works best together with caching: without it, multiple requests for the same resource may land on different servers, causing unnecessary repeated downloads and a low cache hit rate. With url_hash, requests for the same URL (i.e. the same resource) reach the same server; once the resource is cached there, subsequent requests can be served from the cache.

nginx.conf

upstream myserver{
          hash $request_uri;
          server 192.168.2.211:8081;
          server 192.168.2.211:8082;
          server 192.168.2.211:8083;
    }
 
    server {
        listen      9888;
        server_name www.kangll.com;

        location ~ / {
           # Address of the proxied server: a hostname, an IP, or an address plus port
            proxy_pass http://myserver;
            index a.html;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

2.6 fair


fair does not use one of the built-in balancing algorithms; it balances intelligently according to page size and load time (i.e. backend response time). Note that fair is provided by a third-party module (nginx-upstream-fair), which must be compiled into Nginx before this strategy can be used.

nginx.conf

upstream myserver{
          fair;
          server 192.168.2.211:8081;
          server 192.168.2.211:8082;
          server 192.168.2.211:8083;
    }
 
    server {
        listen      9888;
        server_name www.kangll.com;

        location ~ / {
           # Address of the proxied server: a hostname, an IP, or an address plus port
            proxy_pass http://myserver;
            index a.html;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

3. Alibaba Cloud Classic Load Balancer (CLB)


Classic Load Balancer (CLB) is a traffic distribution control service that distributes inbound traffic across multiple back-end cloud servers according to forwarding policies. CLB expands an application's service capacity and improves its availability. (See "What is CLB" in the Alibaba Cloud Documentation Center.)

3.1 Overview


By configuring a virtual service address, CLB turns multiple cloud servers in the same region into a high-performance, highly available back-end service pool, and distributes client requests to the cloud servers in that pool according to forwarding rules.

By default, CLB checks the health of the cloud servers in the pool and automatically isolates any in an abnormal state, eliminating the single point of failure of an individual cloud server and improving the overall service capacity of the application. CLB can also resist DDoS attacks, strengthening the protection of application services.

3.2 CLB Composition


Note: in the CLB architecture diagram (not reproduced here), a cloud server shown in gray has failed its health check, and traffic is not forwarded to it.

CLB consists of the following three parts:

Instance: a CLB instance is a running load balancing service that receives traffic and distributes it to backend servers. To use the load balancing service, you must create a CLB instance and add at least one listener and two cloud servers.

Listener: a listener checks client requests and forwards them to backend servers; it also performs health checks on the backend servers.

Backend servers: the group of cloud servers that receive the forwarded requests. CLB currently supports adding Elastic Compute Service (ECS) instances, elastic container instances (ECI), and elastic network interfaces (ENI) as backend servers. Cloud servers can be added to the pool individually, or added and managed in batches through virtual server groups or primary/secondary server groups.

3.3 Product Advantages


  • High availability: a fully redundant design with no single point of failure, supporting same-city disaster recovery. Capacity scales elastically with traffic load, so service is uninterrupted during traffic fluctuations.
  • Scalability: the number of backend servers can be increased or decreased at any time according to business needs, expanding the application's service capacity.
  • Low cost: compared with the high investment of a traditional hardware load balancing system, costs can be reduced by 60%.
  • Security: combined with Cloud Shield, CLB provides up to 5 Gbps of anti-DDoS protection.
  • High concurrency: clusters support hundreds of millions of concurrent connections, and a single instance supports up to 1 million concurrent connections.

3.4 Configuring SLB in the Alibaba Cloud console


The service address of the created SLB instance is the public IP (console screenshot omitted).

The listener is configured with a backend server group (console screenshot omitted).

The first and third servers are both given a weight of 100 (console screenshot omitted).



Origin blog.csdn.net/qq_35995514/article/details/131882766