Interpreting the Nginx configuration file and 4 common ways to achieve load balancing


Table of contents

Forward proxy

Reverse proxy

Summary

Nginx configuration file

Components of the nginx configuration file

Part 1: The global block

Part 2: The events block

Part 3: The http block

The http global block

The server block

4 common ways Nginx achieves load balancing

Round robin

IP hash

Weighted round robin

Least connections

        Prerequisite: first understand the theory behind forward and reverse proxies; with that in place, the nginx configuration file and the four load-balancing methods can be interpreted directly.

Nginx is a powerful open-source web server and reverse proxy server that supports both forward-proxy and reverse-proxy functionality. Here is a brief explanation of each:

Forward proxy

        A forward proxy is a proxy server that acts as an intermediary between the client and the destination server. When the client requests a resource from the target server, the request first goes to the forward proxy, which forwards it to the target server and then returns the target server's response to the client. A forward proxy hides the client's real IP address, so the target server cannot directly identify or track the client.

The main uses of forward proxies include:

  • Accessing restricted resources: when certain resources are subject to network or access restrictions, a forward proxy can be used to bypass those restrictions and obtain the resources.
  • Improving access speed: the proxy server can cache frequently requested resources, speeding up the client's access to them.
  • Bypassing firewalls: a forward proxy can help get around corporate or national firewall restrictions to reach blocked websites or resources.
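
As a rough illustration of the idea, nginx can be set up as a plain-HTTP forward proxy. This is a minimal sketch under stated assumptions: it handles only unencrypted HTTP (stock nginx cannot tunnel HTTPS CONNECT requests without third-party modules), and the listening port and resolver address are arbitrary examples:

server {
    listen 3128;

    # A DNS resolver is required because the upstream host comes from
    # each client request rather than from a fixed configuration.
    resolver 8.8.8.8;

    location / {
        # Forward the request to whatever host the client asked for.
        proxy_pass http://$http_host$request_uri;
    }
}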

Reverse proxy

        A reverse proxy is a proxy server that acts as an intermediary between the server side and the client. When a client sends a request to the reverse proxy, the proxy forwards the request to one of the backend servers according to configured rules and then returns that backend server's response to the client. A reverse proxy hides the real servers: clients do not know which backend server they are actually reaching.

The main uses of a reverse proxy include:

  • Load balancing: a reverse proxy can distribute requests across multiple backend servers according to a chosen algorithm, balancing the load and improving the system's concurrency and stability.
  • Caching static resources: a reverse proxy can cache frequently requested static resources, reducing the load on backend servers and speeding up the website.
  • Security and reliability: a reverse proxy can act as a firewall and security layer, providing authentication, access control, DDoS protection, and similar functions.
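
Before moving on, here is a minimal reverse-proxy sketch with a single backend; the backend address and header choices are typical placeholders, not from the original article:

server {
    listen 80;
    server_name example.com;    # hypothetical domain

    location / {
        proxy_pass http://127.0.0.1:8080;    # forward every request to one backend

        # Pass the original host and client address through to the backend,
        # which would otherwise only see the proxy's own IP.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}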

Summary

  • Both forward and reverse proxies use a proxy server as an intermediary to forward requests and responses.
  • With a forward proxy, the client sends its request through the proxy, which delivers it to the target server on the Internet on the client's behalf. It hides the client's real IP address (the target server does not know which real client is visiting).
  • With a reverse proxy, the client sends its request to the proxy, which forwards it to one of several backend servers. It hides the real servers (the client does not know which backend server produced the response).
  • The difference lies in the direction of the traffic and which side the proxy belongs to: in a forward proxy, the proxy sits on the client's side; in a reverse proxy, the proxy sits on the server's side.

Nginx configuration file

Components of the nginx configuration file

        The configuration file contains many lines beginning with #; these are comments. With every #-prefixed section removed, the simplified content is as follows:

worker_processes  1;    # Part 1: the global block

events {                # Part 2: the events block
    worker_connections  1024;
}

http {                  # Part 3: the http block
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;

    server {
        listen       8080;
        server_name  localhost;

        location / {
            root   html;
            index  index.html index.htm;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

The nginx configuration file consists of three parts:


Part 1: The global block

For example, the configuration in the first line above:

  worker_processes  1;

        This is the key directive governing how much concurrency the Nginx server can handle. The larger the value of worker_processes, the more concurrent processing Nginx can support, but it is constrained by the hardware and software resources of the machine.
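
Beyond worker_processes, a few other directives commonly appear in the global block. A minimal sketch, assuming default-style log and pid paths (these values are illustrative, not from the original file):

worker_processes  auto;             # "auto" sizes the worker count to the CPU cores
error_log  logs/error.log  warn;    # error log location and minimum severity level
pid        logs/nginx.pid;          # file that stores the master process ID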

Part 2: The events block

For example, the above configuration:

events {
    worker_connections  1024;
}

        The directives in the events block mainly affect the network connections between the Nginx server and its users. Commonly used settings include whether to serialize the acceptance of new connections across multiple worker processes, whether a worker process may accept several new connections at once, which event-driven model to use for processing connection requests, and the maximum number of simultaneous connections each worker process can support.
The example above sets the maximum number of connections per worker process to 1024. This part of the configuration has a significant impact on Nginx's performance and should be tuned flexibly in practice.
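
As a sketch, an events block that touches each of those settings might look like this (values are illustrative; epoll is Linux-specific, so this assumes a Linux host):

events {
    use epoll;                # event-driven model used to process connection requests
    accept_mutex on;          # serialize acceptance of new connections across workers
    multi_accept on;          # let a worker accept multiple new connections at once
    worker_connections 1024;  # maximum simultaneous connections per worker process
}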

Part 3: The http block

For example, the above configuration:
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;

    server {
        listen       8080;
        server_name  localhost;

        location / {
            root   html;
            index  index.html index.htm;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

Note that the http block itself can be divided into an http global block and one or more server blocks.

The http global block

        The directives configured in the http global block include file imports, MIME-type definitions, log customization, connection timeouts, the maximum number of requests per connection, and so on.
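
A sketch showing one directive from each of those families (the log format and values are common conventions, not taken from the original file):

http {
    include       mime.types;                  # file import
    default_type  application/octet-stream;    # default MIME type

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer"';
    access_log  logs/access.log  main;         # log customization

    keepalive_timeout   65;     # connection timeout, in seconds
    keepalive_requests  100;    # maximum requests per keep-alive connection

    # ... server blocks go here ...
}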

The server block

         This part is closely related to virtual hosts. From the user's perspective, a virtual host behaves exactly like an independent physical host; the technology was created to save on the cost of Internet server hardware.
Each http block can include multiple server blocks, and each server block is equivalent to one virtual host.
Each server block in turn consists of a global server section and can contain multiple location blocks at the same time, as in the sketch below.
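
Here are two virtual hosts inside one http block; the domain names, paths, and backend address are hypothetical placeholders:

http {
    server {
        listen       80;
        server_name  site-a.example.com;   # server global section: port and host name

        location / {                       # first location block: serve static files
            root   /var/www/site-a;
            index  index.html;
        }

        location /images/ {                # second location block: a separate static root
            root   /var/www/static;
        }
    }

    server {                               # a second virtual host on the same port
        listen       80;
        server_name  site-b.example.com;

        location / {
            proxy_pass http://127.0.0.1:8081;   # hand requests to an application server
        }
    }
}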

4 common ways Nginx achieves load balancing

Nginx provides several ways to achieve load balancing; the following are among the most commonly used:

Round robin

        This is the default load-balancing algorithm: Nginx distributes requests to the backend servers in sequence, cycling through the list in order. It spreads the load simply and evenly across the backend servers and is suitable for scenarios where the backends have the same configuration and comparable processing capacity.

http {
    upstream backend {
        # No balancing directive here, so nginx uses the default round robin.
        server 192.168.1.101:8080;
        server 192.168.1.102:8080;
        server 192.168.1.103:8080;
    }

    server {
        listen 80;
        
        location / {
            proxy_pass http://backend;
        }
    }
}
IP hash

        Nginx hashes the client's IP address and, based on the result, assigns the request to a fixed backend server. This guarantees that requests from the same client IP always reach the same server, which suits applications that need to maintain session state.

http {
    upstream backend {
        ip_hash;    # hash the client IP so the same client always hits the same backend
        server 192.168.1.101:8080;
        server 192.168.1.102:8080;
        server 192.168.1.103:8080;
    }

    server {
        listen 80;
        
        location / {
            proxy_pass http://backend;
        }
    }
}
Weighted round robin

        Nginx distributes requests to servers based on each backend server's configured weight; a server with a higher weight handles proportionally more requests. This method suits situations where the backend servers have different configurations and processing capacities.

http {
    upstream backend {
        server 192.168.1.101:8080 weight=3;   # receives 3 of every 6 requests
        server 192.168.1.102:8080 weight=2;   # receives 2 of every 6 requests
        server 192.168.1.103:8080 weight=1;   # receives 1 of every 6 requests
    }

    server {
        listen 80;
        
        location / {
            proxy_pass http://backend;
        }
    }
}
Least connections

        Nginx tracks the current number of active connections on each backend server and sends each request to the server with the fewest, balancing the load dynamically. This algorithm suits scenarios where backend configurations and processing capacities differ and connection durations are uneven.

http {
    upstream backend {
        least_conn;    # pick the backend with the fewest active connections
        server 192.168.1.101:8080;
        server 192.168.1.102:8080;
        server 192.168.1.103:8080;
    }

    server {
        listen 80;
        
        location / {
            proxy_pass http://backend;
        }
    }
}
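
One design note: least_conn also takes server weights into account when weight= parameters are set, choosing the server with the fewest connections relative to its weight, so the least-connections and weighted approaches can be combined in a single upstream block.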
