Nginx load balancing configuration

 

Load balancing is essential for a high-traffic website. Below I walk through how to configure load balancing on an Nginx server; I hope it helps anyone who needs it.

 

Load balancing

 

First, let's briefly look at what load balancing is. Taken literally, it means that N servers share the load evenly, so that no single server goes down under heavy load while the others sit idle. The prerequisite for load balancing is therefore having multiple servers, i.e. at least two.

 

Test environment

 

Since no real servers are available, this test resolves the domain name directly on the local host and uses three CentOS virtual machines installed in VMware.

 

Test domain name: a.com

 

A server IP: 192.168.5.149 (main)

 

B server IP: 192.168.5.27

 

C server IP: 192.168.5.126

 

Deployment ideas

 

Server A acts as the main server: the domain name resolves directly to server A (192.168.5.149), and server A load-balances requests to server B (192.168.5.27) and server C (192.168.5.126).

 

 

 

DNS

 

Since this is not a real environment, the domain a.com is used only for testing, so its resolution has to be set in the hosts file.

 

Open: C:\Windows\System32\drivers\etc\hosts

 

Add the following line at the end:

 

192.168.5.149    a.com

 

Save and exit, then open a command prompt and ping a.com to check that the setting works.

 

 

 

The screenshot shows that a.com now resolves to 192.168.5.149.

 

A server nginx.conf settings

 

Open nginx.conf; the file is in the conf directory of the nginx installation directory.

 

Add the following code to the http section:

 

upstream a.com { 
      server  192.168.5.126:80; 
      server  192.168.5.27:80; 
} 

server{ 
    listen 80; 
    server_name a.com; 
    location / { 
        proxy_pass         http://a.com; 
        proxy_set_header   Host             $host; 
        proxy_set_header   X-Real-IP        $remote_addr; 
        proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for; 
    } 
}

 

Save and restart nginx

 

Servers B and C nginx.conf settings

 

Open nginx.conf and add the following code to the http section:

 

server{ 
    listen 80; 
    server_name a.com; 
    index index.html; 
    root /data0/htdocs/www; 
}

 

Save and restart nginx

 

Test

 

To tell which server handles a given request to a.com, I placed an index.html file with different content on server B and server C.

 

Open a browser and visit a.com. Refresh repeatedly and you will see the main server (192.168.5.149) distributing requests to server B (192.168.5.27) and server C (192.168.5.126), achieving the load balancing effect.
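Nginx's default upstream strategy is round-robin, which is why refreshing alternates between B and C. The behavior can be modeled with a short sketch (a simplified illustration only, not nginx's actual implementation; the server list mirrors the upstream block above):

```python
from itertools import cycle

# Simplified model of default round-robin upstream selection.
backends = ["192.168.5.126:80", "192.168.5.27:80"]
picker = cycle(backends)

def pick_backend():
    """Return the next backend in rotation, as default round-robin would."""
    return next(picker)

# Six consecutive "requests" alternate between the two backends.
assignments = [pick_backend() for _ in range(6)]
print(assignments)
```

Real nginx also skips servers it considers failed, so the rotation you observe may differ when a backend is down.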

 

Page served by server B:

 

 

 

Page served by server C:

 

 

 

What if one of the servers goes down?

 

When a server goes down, will it affect access?

 

Let's look at an example. Building on the setup above, suppose server C (192.168.5.126) goes down (since real downtime cannot be simulated, I simply shut server C down) and then visit the site again.

 

Visit result:

 

 

 

Although server C (192.168.5.126) was down, website access was not affected. With load balancing, there is no need to worry that one downed machine will drag down the entire site.
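By default, nginx marks a backend as temporarily unavailable after a failed connection and retries the next server in the upstream, which is what keeps the site up here. The failure detection can be tuned with the `max_fails` and `fail_timeout` parameters of the `server` directive (the values below are illustrative, not from the original setup):

```nginx
upstream a.com {
    # After 2 failed attempts within 30s, skip this server for 30s.
    server 192.168.5.126:80 max_fails=2 fail_timeout=30s;
    server 192.168.5.27:80  max_fails=2 fail_timeout=30s;
}
```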

 

What if b.com also sets up load balancing?

 

Very simple: the same as the a.com setup. As follows:

 

Assume the main server for b.com is 192.168.5.149, load-balancing to the machines 192.168.5.150 and 192.168.5.151.

 

Now resolve the domain b.com to 192.168.5.149.

 

Add the following code to the nginx.conf of the main server (192.168.5.149):

 

upstream b.com { 
      server  192.168.5.150:80; 
      server  192.168.5.151:80; 
} 

server{ 
    listen 80; 
    server_name b.com; 
    location / { 
        proxy_pass         http://b.com; 
        proxy_set_header   Host             $host; 
        proxy_set_header   X-Real-IP        $remote_addr; 
        proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for; 
    } 
}

 

Save and restart nginx

 

Set up nginx on the 192.168.5.150 and 192.168.5.151 machines: open nginx.conf and add the following code at the end:

 

server{ 
    listen 80; 
    server_name b.com; 
    index index.html; 
    root /data0/htdocs/www; 
}

 

Save and restart nginx

 

After completing the steps above, the load balancing configuration for b.com is done.

 

Can the main server provide the service too?

 

In the examples above, the main server only load-balances to other servers. Can the main server itself be added to the server list, so that it is not wasted as a pure forwarder and also serves requests?

 

As in the above case with three servers:

 

A server IP: 192.168.5.149 (main)

 

B server IP: 192.168.5.27

 

C server IP: 192.168.5.126

 

We resolve the domain name to server A, and server A forwards requests to servers B and C, so server A only performs forwarding. Now let's make server A provide the site service as well.

 

Let's analyze first. If the main server is added to the upstream, two situations can occur:

 

1. The main server forwards the request to another IP, and that server handles it normally;

 

2. The main server forwards the request to its own IP, the request comes back to the main server to be distributed again, and if it keeps being assigned to the local machine, an infinite loop results.
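The loop in case 2 can be made concrete with a toy sketch: if the balancer's own listening address appears in its upstream list, a request assigned to that entry simply re-enters the balancer (an illustration only; the addresses mirror the setup above and the depth cap is artificial):

```python
# Toy model: forwarding to the balancer's own LB port re-enters the balancer.
LB_ADDR = "192.168.5.149:80"             # where the balancer itself listens
upstream = ["192.168.5.27:80", LB_ADDR]  # (mis)configured to include itself

def forward(addr, depth=0, max_depth=5):
    """Follow a request; hitting the balancer's own address forwards again."""
    if depth >= max_depth:
        return "loop detected"
    if addr == LB_ADDR:
        # The balancer picks an upstream entry; worst case it picks itself.
        return forward(upstream[1], depth + 1)
    return f"handled by {addr}"

print(forward(LB_ADDR))  # without the depth cap this would never terminate
```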

 

How do we solve this? Since port 80 is already used to listen for the load balancing traffic, this server can no longer handle a.com requests on port 80; a new port is needed. So we add the following code to the main server's nginx.conf:

 

server{ 
    listen 8080; 
    server_name a.com; 
    index index.html; 
    root /data0/htdocs/www; 
}

 

 

 

Restart nginx and enter a.com:8080 in the browser to see whether it is accessible. The result: it loads normally.

 

 

 

Since it is accessible, we can add the main server to the upstream, but with the port changed, as follows:

 

upstream a.com { 
      server  192.168.5.126:80; 
      server  192.168.5.27:80; 
      server  127.0.0.1:8080; 
}

 

Either the main server's IP 192.168.5.149 or 127.0.0.1 can be used here; both refer to the machine itself.
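Since the main server also has to do the forwarding work, you may not want it to receive an equal share of requests. The proportion each server receives can be adjusted with the `weight` parameter of the `server` directive (the weights below are illustrative, not from the original setup):

```nginx
upstream a.com {
    server 192.168.5.126:80 weight=2;  # server C takes a larger share
    server 192.168.5.27:80  weight=2;  # server B takes a larger share
    server 127.0.0.1:8080   weight=1;  # the main server takes a smaller share
}
```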

 

Restart Nginx, then visit a.com again to see whether requests are also assigned to the main server.

 

 

 

 

 

The main server can also join the service normally.

 

Finally

 

1. Load balancing is not unique to nginx; the well-known Apache has it too, though its performance may not be as good as nginx's.

 

2. Multiple servers provide the service, but the domain name resolves only to the main server, so the real back-end server IPs are never exposed and cannot even be pinged from outside, which adds a degree of security.

 

 

 

3. The IPs in the upstream do not have to be internal; external IPs work as well. The classic setup, however, is to expose only one LAN IP to the external network, resolve the domain name directly to that IP, and have the main server forward to the intranet servers' IPs.

 

4. When one server goes down, the website keeps running normally, because Nginx will not forward requests to an IP that is already down.

 

This content comes from the Xiaohongti technology blog, http://www.xiaohongti.com/ . Please keep the address and respect the copyright.
