How to configure an HTTP server and load balancing (reverse proxy) in Nginx

This article introduces how to configure an HTTP server and load balancing (reverse proxy) in Nginx.

1. About Nginx

Nginx is an open-source, high-performance, stable, simple, feature-rich HTTP and reverse proxy server that can also be used as an IMAP/POP3/SMTP proxy server. It uses an asynchronous, event-driven architecture and can handle a large number of concurrent connections.
The main functions of Nginx include:

  • HTTP server: Nginx can be used as a web server to provide HTTP services. It supports static file serving, SSL/TLS, virtual hosting, and other features.
  • Reverse proxy and load balancing: Nginx can act as a reverse proxy server, proxying HTTP as well as other protocols. It also provides load balancing, distributing requests across multiple backend servers.
  • Mail proxy server: Nginx can also serve as an IMAP/POP3/SMTP proxy server.
  • TCP/UDP proxy server: Nginx can proxy TCP and UDP services (see the sketch below).

The design goal of Nginx is to provide high-performance, high-concurrency, low-memory-usage network services. It is widely used on the server side of websites and is currently one of the most popular web servers.
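
For TCP/UDP proxying, the following is a minimal sketch of a top-level stream block. It assumes Nginx was built with the stream module, and db.example.com is a hypothetical backend used only for illustration:

# TCP proxying sketch (assumes the stream module; hypothetical backend name)
stream {
    server {
        listen 3306;                      # accept TCP connections on port 3306
        proxy_pass db.example.com:3306;   # forward them to the backend service
    }
}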

Software similar to Nginx mainly includes the following types:

  • Apache HTTP Server: This is one of the most popular web servers, with powerful functionality and numerous modules that can be configured to meet various needs.
  • Microsoft IIS: This is the web server software provided by Microsoft. It is tightly integrated with Windows and supports Microsoft technologies such as .NET.
  • Lighttpd: This is a lightweight web server that takes up few resources and performs well, making it suitable for environments with limited resources.
  • Caddy: This is a newer web server with simple configuration and automatic HTTPS support.
  • Tomcat: This is an open source Apache project, mainly used to run Java web applications, and is often used as a web server and Java application server.
  • Node.js: Although primarily a JavaScript runtime, it is often used to write web servers thanks to its event-driven, non-blocking I/O model.

All of the above software can be used as web servers, but each has different characteristics and advantages. You need to choose the appropriate software based on actual needs.

2. Configure an HTTP server

In Nginx, the HTTP server is configured mainly by editing the Nginx configuration file, usually nginx.conf.

The following is a basic HTTP server configuration example:

http {
    server {
        listen 80;  # Listen on port 80
        server_name example.com;  # Set the server name

        location / {
            root /var/www/html;  # Set the website root directory
            index index.html index.htm;  # Set the default index pages
        }

        # Handle error pages
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /var/www/html;
        }
    }
}

The above configuration indicates that Nginx will listen on port 80 and handle requests for example.com.

When the requested URL path is / (that is, the root path of the website), Nginx will look for index.html or index.htm in the /var/www/html directory and return it. If a 500, 502, 503, or 504 error occurs, Nginx returns the contents of /var/www/html/50x.html.
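
The SSL/TLS support mentioned earlier can be configured by adding another server block inside the same http block. The following is a minimal sketch; the certificate paths are assumptions used only for illustration:

    # A minimal HTTPS sketch (assumed certificate paths)
    server {
        listen 443 ssl;
        server_name example.com;

        ssl_certificate     /etc/nginx/ssl/example.com.crt;  # assumed path
        ssl_certificate_key /etc/nginx/ssl/example.com.key;  # assumed path

        location / {
            root /var/www/html;
            index index.html index.htm;
        }
    }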

The above is just a basic configuration example. In practice, Nginx configuration can be much more complex, and advanced features such as reverse proxying, load balancing, and URL rewriting can be configured (a small rewrite sketch follows). The specific configuration depends on actual needs.
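
For example, URL rewriting is done with the rewrite directive. The following is a minimal sketch, placed inside a server block; the /old-articles/ and /articles/ paths are hypothetical and used only for illustration:

        # URL rewriting sketch (hypothetical paths)
        location /old-articles/ {
            # Permanently (301) redirect /old-articles/<name> to /articles/<name>
            rewrite ^/old-articles/(.*)$ /articles/$1 permanent;
        }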

Next, let’s look at how to configure multiple http servers.

In Nginx, multiple HTTP servers can be configured by defining multiple server blocks in the configuration file.

Each server block represents a virtual host and can listen to different ports or handle different domain names.

Here is an example of configuring multiple HTTP servers:

http {
    # First HTTP server
    server {
        listen 80;
        server_name example1.com;

        location / {
            root /var/www/example1;
            index index.html index.htm;
        }
    }

    # Second HTTP server
    server {
        listen 8080;
        server_name example2.com;

        location / {
            root /var/www/example2;
            index index.html index.htm;
        }
    }
}

The above configuration indicates that Nginx will listen on ports 80 and 8080 and handle requests for example1.com and example2.com.

When the requested URL path is /, a request for example1.com (on port 80) is served from the /var/www/example1 directory, where Nginx looks for index.html or index.htm and returns it;

a request for example2.com (on port 8080) is served from the /var/www/example2 directory, where Nginx looks for index.html or index.htm and returns it.
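
If a request arrives with a Host header that matches neither server_name, Nginx uses the default server for that port (by default, the first server block listening on it). The following is a hedged sketch of an explicit catch-all server, as an assumed addition to the example above:

    # Catch-all for unmatched host names on port 80 (assumed addition)
    server {
        listen 80 default_server;
        server_name _;   # placeholder name that never matches a real host
        return 444;      # close the connection without sending a response
    }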

3. Configure load balancing

Nginx supports the following load balancing methods:

  • Round Robin: This is the default load balancing method. Requests are assigned to the backend servers one by one in turn. If a backend server goes down, it can be removed from rotation automatically.
  • Weight: Backend servers may differ in hardware and current load, so Nginx allows you to specify each server's capacity. The higher the weight, the more requests the server receives.
  • IP Hash: Requests are distributed according to a hash of the client IP, so each visitor is consistently routed to the same backend server, which helps with session affinity.
  • Least Connections: Requests are sent to the server with the fewest active connections, which is suitable when request processing times vary greatly.
  • URL Hash: Requests are distributed according to a hash of the requested URL, so each URL is always directed to the same backend server, which improves efficiency when the backend servers cache responses.

Among the above load balancing methods, round robin, weight, and IP hash are built into Nginx. Least connections and URL hash are provided by the ngx_http_upstream_least_conn_module and ngx_http_upstream_hash_module modules, which ship with modern Nginx releases (least_conn since 1.3.1, hash since 1.7.2); older versions required third-party modules.

In Nginx, configuring load balancing is mainly achieved through the upstream module and proxy_pass directive. The following is an example of a basic load balancing configuration:

http {
    upstream backend {
        server backend1.example.com;
        server backend2.example.com;
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://backend;  # Forward requests to the backend servers defined in the upstream block
        }
    }
}

The above configuration indicates that Nginx will listen to port 80 and process requests from example.com.

When the requested URL path is / (that is, the root path of the website), Nginx will forward the request to backend1.example.com and backend2.example.com in a round-robin manner.
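
In practice, the proxied location usually also passes information about the original request to the backend servers. The following is a hedged sketch of a common optional addition to the location block above:

        location / {
            proxy_pass http://backend;
            # Optional headers that tell the backend about the original request
            # (widely used conventions; not required by Nginx)
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }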

In addition to basic round-robin load balancing, Nginx also supports several other load balancing methods, such as weight, IP Hash, etc.
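
The IP hash, least connections, and URL hash methods described earlier are each enabled with a single directive inside the upstream block. The following is a minimal sketch, assuming a reasonably recent Nginx; only one balancing method can be active per upstream, so the alternatives are shown commented out:

    upstream backend {
        # ip_hash;                       # IP hash: pin each client IP to one backend
        # least_conn;                    # least connections
        hash $request_uri consistent;    # URL hash, here with consistent hashing

        server backend1.example.com;
        server backend2.example.com;
    }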

Weight-based load balancing is configured by adding the weight parameter to the server directives in the upstream block.
The following is an example weight-based load balancing configuration:

http {
    upstream backend {
        server backend1.example.com weight=3;
        server backend2.example.com;
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://backend;  # Forward requests to the backend servers defined in the upstream block
        }
    }
}

The above configuration indicates that Nginx will listen on port 80 and handle requests for example.com.

When the requested URL path is / (that is, the root path of the website), Nginx will forward the request to the backend servers defined in the backend upstream block. Requests are distributed according to weight: backend1.example.com has a weight of 3 and backend2.example.com has a weight of 1 (the default), so backend1.example.com receives more requests.
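
Weight can also be combined with other per-server parameters that control failover. The following is a hedged sketch; the parameter values and the backend3.example.com name are illustrative assumptions:

    upstream backend {
        # Consider backend1 unavailable after 2 failures within 30s, then retry after 30s
        server backend1.example.com weight=3 max_fails=2 fail_timeout=30s;
        server backend2.example.com;
        # Hypothetical spare server, used only when the others are unavailable
        server backend3.example.com backup;
    }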

This article has introduced the basic methods of configuring an HTTP server and load balancing (reverse proxy) in Nginx.

Origin: blog.csdn.net/lanyang123456/article/details/133500653