Nginx quick start: Nginx reverse proxy and load balancing

Concepts

What is a reverse proxy, and how does it differ from a forward proxy?

A forward proxy places a proxy server between the client and the target server. The client accesses the proxy server directly, and the proxy server accesses the target server and returns the response to the client. In this setup the client must know the proxy server's address and be configured to use it. As shown:

(Figure: forward proxy, where the client knowingly routes requests through a proxy to reach the target server)

A reverse proxy means the client accesses the target service, but a unified access gateway in front of the service forwards each request to the backend server that actually processes it and returns the result. The client does not need to know the proxy server's address; the proxy is transparent to the client. As shown:

(Figure: reverse proxy, where a gateway in front of the backend servers forwards requests transparently)

The difference between forward proxy and reverse proxy:

(Figure: comparison table of forward proxy vs. reverse proxy)

Nginx reverse proxy

An Nginx forward proxy needs only the proxy_pass directive inside a location block, pointing at the address being proxied, like the Baidu proxy configured in the previous article:

(Figure: location block with proxy_pass pointing at Baidu)

This is an example of a classic forward proxy.
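A hedged sketch of such a configuration (the exact config from the earlier article is not reproduced here; the hostname is illustrative):

```nginx
server {
    listen 80;

    location / {
        # Hand every request to Baidu and relay the response back.
        proxy_pass http://www.baidu.com;
    }
}
```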

Here is an example of configuring a reverse proxy. First, start two applications on ports 8010 and 8020, each exposing a /hello endpoint:

(Figures: the two applications on ports 8010 and 8020, each responding at /hello)
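The two demo applications are not shown here; a minimal stand-in, assuming any HTTP service answering /hello is sufficient, could look like this in Python:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


class HelloHandler(BaseHTTPRequestHandler):
    """Answers GET /hello with a message identifying the serving port."""

    def do_GET(self):
        if self.path == "/hello":
            body = f"hello from port {self.server.server_port}".encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # keep the demo quiet


def start_server(port):
    """Start a hello server on the given port in a background thread."""
    srv = HTTPServer(("127.0.0.1", port), HelloHandler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv
```

Calling start_server(8010) and start_server(8020) gives two backends nginx can proxy to.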

Then, in nginx, set up a reverse proxy for an address:

(Figure: nginx server block proxying /hello to port 8010)
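A minimal sketch of the server block being described (addresses and domain assumed for illustration):

```nginx
server {
    listen 80;
    server_name www.demo.com;  # hypothetical test domain

    location /hello {
        # Forward /hello to the local application on port 8010.
        proxy_pass http://127.0.0.1:8010;
    }
}
```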

As you can see, nginx listens on port 80 and proxies the /hello path to the service on port 8010. Then configure the local hosts file:

(Figure: hosts file entry mapping the test domain to the local machine)
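The hosts entry maps a test domain (the name here is a hypothetical stand-in) to the local machine:

```
# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows)
127.0.0.1    www.demo.com
```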

Then you can see that the reverse proxy is successful:

(Figure: browser showing the /hello response served through nginx on port 80)

As you can see, there is little difference in configuration between a forward proxy and a reverse proxy. The main difference is where the proxied service lives: one is remote (Baidu), the other inside the local network (local port 8010).

Look at the proxy-related parameters:

proxy_pass # address of the proxied service
proxy_redirect off; # whether to rewrite Location headers in backend redirects
proxy_set_header Host $host; # pass the original Host header to the backend service
proxy_set_header X-Forwarded-For $remote_addr; # pass the client IP to the backend in a request header
proxy_connect_timeout 90; # timeout for establishing a connection to the backend
proxy_send_timeout 90; # maximum time for sending a request to the backend
proxy_read_timeout 90; # maximum time for reading a response from the backend
proxy_buffer_size 4k; # buffer size for the first part of the response (headers)
proxy_buffers 4 32k; # number and size of buffers for the response body
proxy_busy_buffers_size 64k; # buffer limit while the response is still being sent to the client
proxy_temp_file_write_size 64k; # amount of data written to a temp file at a time

Another use of reverse proxying is serving multiple sites on different second-level (sub)domains. For example, Baidu's search engine is www.baidu.com, Baidu Images is image.baidu.com, and Baidu Translate is fanyi.baidu.com: the same first-level domain, but different subdomains for different systems. This can also be achieved with an nginx reverse proxy:

(Figures: two server blocks with different server_name values, each proxying to its own port)
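A sketch of such a configuration, using hypothetical domain names and ports:

```nginx
# Two subdomains served by one nginx instance, each proxied to its own backend.
server {
    listen 80;
    server_name image.demo.com;

    location / {
        proxy_pass http://127.0.0.1:8010;
    }
}

server {
    listen 80;
    server_name fanyi.demo.com;

    location / {
        proxy_pass http://127.0.0.1:8020;
    }
}
```

nginx selects the server block whose server_name matches the request's Host header, so both subdomains can share port 80.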

As you can see, the two different subdomains reach two different port addresses. The access results are as follows:

(Figures: each subdomain returning the response of its own backend service)

In this way, multiple projects can be deployed on one server, each under a meaningful subdomain.

Load balancing

Through proxy_pass, requests can be proxied to a backend service, but for higher load capacity and performance we usually run multiple backend services. This is where the upstream module provides load balancing. Let's configure an upstream:

(Figure: upstream block defining the two backend addresses)
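A sketch of the upstream block, assuming the name wwwdemocom mentioned below:

```nginx
upstream wwwdemocom {
    server 127.0.0.1:8010;
    server 127.0.0.1:8020;
}
```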

You can see that the two service addresses are defined in an upstream named wwwdemocom, and the reverse proxy is set up below it:

(Figure: location block with proxy_pass pointing at the upstream name)
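The reverse proxy then targets the upstream by name, along the lines of:

```nginx
server {
    listen 80;

    location / {
        # The upstream name replaces a single backend address.
        proxy_pass http://wwwdemocom;
    }
}
```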

In the reverse proxy configuration, you only need to reference the upstream by name. Refresh the page at the same address, and the requests alternate between the two backends:

(Figures: successive refreshes alternating between the 8010 and 8020 backends)

In this way, if we deploy multiple identical systems, the total traffic is divided among them, which improves the concurrency the deployment can handle.

At this point the two systems are accessed in turn; the default policy is round-robin, which cycles through all the servers in order regardless of how many there are. If you want requests distributed 1:2 between the two machines, set weights:

(Figure: upstream block with weight parameters)
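A sketch of the weighted upstream matching the 1:2 split described:

```nginx
upstream wwwdemocom {
    server 127.0.0.1:8010 weight=1;
    server 127.0.0.1:8020 weight=2;  # receives twice as many requests
}
```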

Refresh the page and you can see one visit to 8010 followed by two visits to 8020. This weight-based distribution replaces the default round-robin strategy and suits machines with different performance.

Let's take a look at the upstream-related parameters:

server: the backend service address plus port

weight: the server's weight in the distribution

max_fails: how many failed attempts before the host is considered down and removed from rotation

fail_timeout: how long to wait before probing a removed host again

backup: marks a backup server, used only when the primary servers are unavailable

max_conns: the maximum number of connections allowed

Let's look at an example of using the backup parameter for a standby service. Start another service on port 8030:

(Figure: the application on port 8030 responding at /hello)

This service acts as a backup: when the primary services stop unexpectedly, it takes over. Look at the nginx configuration:

(Figure: upstream block with 8030 marked as backup)
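A sketch of that configuration:

```nginx
upstream wwwdemocom {
    server 127.0.0.1:8010;
    server 127.0.0.1:8020;
    # Used only when all primary servers are unavailable.
    server 127.0.0.1:8030 backup;
}
```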

Restart nginx and refresh the page: the configured 8010 and 8020 are still load balancing as before. Now stop both 8010 and 8020, and 8030 takes over:

(Figure: requests now served by the backup service on 8030)

Note: stopping just one service does not trigger the backup. Only when both 8010 and 8020 stop unexpectedly does 8030 come up.

If you restart 8010, the backup steps aside again and requests go back to 8010:

(Figure: requests served by 8010 again after it restarts)

Now for the max_fails and fail_timeout parameters. After the original 8010 server is restarted, nginx quickly detects that it has recovered and puts it back into use. This works through passive health checking: nginx counts failed requests to each backend. max_fails sets how many failures mark a server as down, at which point it is removed from rotation; fail_timeout sets how long the server stays removed before nginx tries it again, and therefore also the interval at which a recovered service is detected. If it were set to one minute, a restarted service could wait up to a minute before being detected and put back into use. Checking on every request in real time would cost performance, so these two parameters are very useful. Below we set the maximum number of failed retries to 10 and the recovery detection time to 30 seconds:

(Figure: upstream servers configured with max_fails=10 and fail_timeout=30s)
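A sketch of the corresponding upstream:

```nginx
upstream wwwdemocom {
    server 127.0.0.1:8010 max_fails=10 fail_timeout=30s;
    server 127.0.0.1:8020 max_fails=10 fail_timeout=30s;
    server 127.0.0.1:8030 backup;
}
```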

Reload, then stop 8010 and 8020 and visit the page: after loading for a while (the failed retries), the 8030 result appears. If the retry count were set to 100, the delay would be even more obvious. When 8010 is started again, the first 30 seconds still go to 8030, and then requests return to 8010:

(Figure: 8030 serving during the fail_timeout window, then 8010 again)

These two parameters are important in real production environments: set the retry count and wait time sensibly.

max_conns is the maximum number of connections allowed by the project service, which is easy to understand.

Load balancing algorithm

The above introduced simple load balancing, covering plain round-robin and weighted round-robin. The following describes the upstream load balancing algorithms in more detail:

round-robin + weight (default): polling, optionally weighted. The disadvantage is that if one backend's connections are saturated, nginx still keeps sending it requests, so its connections pile up and it gets slower and slower; not automatic or intelligent enough.

ip_hash: hashes the client IP so each client sticks to one backend, maintaining session consistency and avoiding distributed session synchronization. The disadvantage is that many communities or schools share one egress line, so most users carry the same IP; one backend can accumulate too many connections, the load becomes unbalanced, and if that machine goes down the sessions are lost.

url_hash (third party): hashes the request URL; the calculation is like ip_hash but the use case differs. It routes each static resource to a fixed backend, so caches are not duplicated across servers, saving storage and speeding up access.

least_conn: sends each request to the server with the fewest active connections with nginx, making the most of resources. (Built into modern nginx releases.)

least_time: computes each node's average response time and prefers the fastest, effectively giving better-performing machines higher weight. (Available in the commercial NGINX Plus.)

The demonstrations above all used weighted round-robin. Now let's look at the ip_hash algorithm:

(Figure: upstream block with the ip_hash directive)
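A sketch of the ip_hash upstream (note that backup servers cannot be combined with ip_hash):

```nginx
upstream wwwdemocom {
    ip_hash;  # pin each client IP to one backend
    server 127.0.0.1:8010;
    server 127.0.0.1:8020;
    # server 127.0.0.1:8030 backup;  # not allowed together with ip_hash
}
```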

Note that besides removing the weights, the backup server is commented out: the two cannot be used at the same time. Now reload to see the effect:

(Figure: repeated requests from one client always hitting the same backend)

The effect: from any one client, only one backend ever responds, and the others are never visited. That is the ip_hash algorithm at work. Accessing from other machines shows the same behavior, each client pinned to a single backend:

(Figure: a different client pinned to the other backend)

Other algorithms are no longer demonstrated.




Origin: blog.csdn.net/lingshengxueyuan/article/details/111409877