Nginx [Reverse Proxy, Load Balancing, Dynamic and Static Separation] – Part 2

Nginx working mechanism & parameter setting

master-worker mechanism

Schematic diagram

(image omitted)

  1. One master manages multiple workers

A word about the master-worker mechanism

● Schematic diagram of the connection "scramble" (accept contention) mechanism

(image omitted)

  1. One master process manages multiple worker processes; that is, Nginx uses a multi-process architecture rather than a multi-threaded one.
  2. When a client sends a request (a task), the master process notifies the worker processes it manages.
  3. The worker processes compete ("scramble") for the task; the worker that wins accepts the connection and completes the task.
  4. Each worker is an independent process, and each process has only one main thread.
  5. Nginx uses an I/O multiplexing mechanism (epoll on Linux); this is the key to how Nginx achieves high concurrency with only a small number of worker processes.
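The process model described above maps onto a minimal top-level nginx.conf sketch. The directive names are real nginx directives; the values here are purely illustrative:

```nginx
# Global (main) context: the master process reads this and forks the workers.
worker_processes  4;          # number of worker processes the master manages

events {
    use epoll;                # I/O multiplexing mechanism on Linux
    worker_connections 1024;  # maximum connections per worker process
}
```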

More on the master-worker mechanism

schematic diagram

(image omitted)

Explanation for the above figure

● Master-Worker mode

1. After Nginx starts, there will be a master process and multiple independent worker processes.

2. The master process receives signals from the outside and sends signals to each worker process; any worker process may end up handling a given connection.

3. The master process monitors the running state of the worker processes: when a worker process exits abnormally, the master automatically starts a new one.

● accept_mutex and the "thundering herd" problem (theory)

1. All child processes inherit the listening socket (sockfd) of the parent process. When a connection arrives, every child process is notified and "competes" to accept it. This is the "thundering herd" phenomenon.

2. Many processes are woken up and then suspended again, while only one of them can actually accept() the connection; this wastes system resources.

3. Nginx provides accept_mutex, a shared lock around accept(): each worker process must acquire the lock before calling accept(), and if it cannot, it skips the accept(). With this lock, only one process calls accept() at any given time, so the thundering herd problem does not occur.

4. After a worker process accepts a connection, it reads the request, parses it, processes it, generates a response, returns it to the client, and finally closes the connection, completing one full request.

5. A request is handled entirely by a single worker process; it can only be processed in one worker process.
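As a sketch, the lock described above corresponds to the accept_mutex directive in the events block. Note that in recent nginx versions accept_mutex defaults to off, since modern kernels mitigate the thundering herd themselves; the values below are illustrative:

```nginx
events {
    accept_mutex on;           # serialize accept() across worker processes
    accept_mutex_delay 500ms;  # how long a worker waits before retrying the lock
}
```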

● Benefits of a multi-process structure over a multi-threaded one (theory)

1. It saves the overhead of locking: each worker is an independent process that shares no resources with the others, so no locks are needed. This also makes programming and debugging much easier.

2. Independent processes reduce risk: the processes do not affect one another, so if one process exits, the others keep working and the service is not interrupted, while the master process quickly starts a new worker process.

● The secret of high concurrency: I/O multiplexing

1. In Nginx, a process has only one main thread, so how does it achieve high concurrency?

2. It uses I/O multiplexing with an asynchronous, non-blocking event-handling mechanism (the epoll model) to achieve lightweight, high concurrency.

3. How does nginx implement this? For example, each incoming request is handled by a worker process, but not from start to finish in one go. The worker processes the request up to the point where it might block, such as forwarding the request to an upstream (backend) server and waiting for the response. Instead of waiting idly, the worker registers an event after sending the request: "notify me when the upstream responds, and I will continue." Then it moves on; if another request arrives in the meantime, it can quickly handle it the same way. Once the upstream server responds, the event fires, the worker picks the request back up, and processing continues. Because of the nature of a web server's workload, most of each request's lifetime is spent in network transmission; the time slice actually spent on the server machine is small. This is the secret of how a few processes can handle high concurrency.

summary

Advantages of Nginx's master-worker working mechanism

1. It supports hot deployment via nginx -s reload, a feature we have already used

2. Each worker is an independent process that needs no locking, which saves the overhead locks would bring and also makes programming and troubleshooting much easier

3. Each worker is an independent process with only one main thread, which handles requests asynchronously and non-blockingly via I/O multiplexing, so even highly concurrent requests can be handled

4. The processes are independent and do not affect one another: if one worker process exits, the other workers keep working and the service is not interrupted, while the master process quickly starts a new worker process

5. Each worker is assigned one CPU, so the worker's thread can get the maximum performance out of that CPU

parameter settings

worker_processes

● How many workers should be configured?

  • Each worker process can make full use of one CPU, so the most appropriate number of workers equals the number of CPUs on the server. Setting fewer wastes CPU; setting more incurs the cost of frequent CPU context switching.
  • Setting the number of workers: Nginx does not bind workers to specific CPU cores by default; you can make full use of a multi-core CPU by adding the worker_cpu_affinity configuration parameter
# 2-core CPU, 2 worker processes
worker_processes 2;
worker_cpu_affinity 01 10;

# 2-core CPU, 4 worker processes
worker_processes 4;
worker_cpu_affinity 01 10 01 10;

# 4-core CPU, 2 worker processes; 0101 enables the first and third cores, 1010 enables the second and fourth cores
worker_processes 2;
worker_cpu_affinity 0101 1010;

# 4-core CPU, 4 worker processes
worker_processes 4;
worker_cpu_affinity 0001 0010 0100 1000;

# 8-core CPU, 8 worker processes
worker_processes  8;
worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;

Understanding worker_cpu_affinity

(image omitted)

Each value is a binary CPU mask, one mask per worker process, read from the rightmost bit: a 1 in bit N allows that worker to run on CPU N.

Configuration example

(image omitted)

  1. Reload nginx: /usr/local/nginx/sbin/nginx -s reload
  2. View nginx's worker processes: ps -ef | grep nginx

(image omitted)

worker_connections

  1. worker_connections is the maximum number of connections each worker process can establish, so the maximum number of connections an nginx server can establish is worker_connections * worker_processes

(1) Default: worker_connections 1024

(2) Increased: worker_connections 60000 (adjusted to 60,000 connections)

(3) At the same time, it should be adjusted according to the system's maximum number of open files:

The system's maximum number of open files must be >= worker_connections * worker_processes. Adjust according to this system limit: the worker_connections per-process connection count must be less than or equal to the system's maximum number of open files, and the real total connection count is worker_connections * worker_processes

Check the system's maximum number of open files:
ulimit -a | grep "open files"
open files (-n) 		65535
  1. Calculate the maximum concurrency from the maximum number of connections: a browser that supports HTTP/1.1 occupies two connections per visit, so for ordinary static access the maximum concurrency is worker_connections * worker_processes / 2. When nginx is used as an HTTP reverse proxy, the maximum concurrency is worker_connections * worker_processes / 4, because as a reverse proxy each concurrent request establishes one connection with the client and one with the backend service, occupying two connections.
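The arithmetic above can be checked with a quick shell sketch; the worker and connection counts are hypothetical example values:

```shell
#!/bin/sh
# Hypothetical configuration values for illustration
worker_processes=4
worker_connections=1024

max_connections=$((worker_processes * worker_connections))
static_concurrency=$((max_connections / 2))   # HTTP/1.1 browser: 2 connections per visit
proxy_concurrency=$((max_connections / 4))    # reverse proxy: client + backend connections

echo "max=$max_connections static=$static_concurrency proxy=$proxy_concurrency"
# prints: max=4096 static=2048 proxy=1024
```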

Diagram

(image omitted)

Configure the maximum number of open files in Linux

  1. Use ulimit -a to view all current limit values of the system, and ulimit -n to view the current maximum number of open files.
  2. A freshly installed Linux defaults to only 1024. When used as a heavily loaded server, it is easy to hit "error: too many open files", so the limit needs to be raised.
  3. Use ulimit -n 65535 to change it immediately, but the change is lost after a reboot. (Note: ulimit -SHn 65535 is equivalent to ulimit -n 65535; -S means soft, -H means hard)
  4. There are three ways to make the change persistent:
    1. Add the line ulimit -SHn 65535 to /etc/rc.local
    2. Add the line ulimit -SHn 65535 to /etc/profile
    3. Add the following two lines at the end of /etc/security/limits.conf:
      • * soft nofile 65535
      • * hard nofile 65535

On CentOS the first method has no effect while the third does; on Debian the second method works
5. Reference: https://blog.csdn.net/weixin_43055250/article/details/124980838

Build a high-availability cluster

Keepalived+Nginx high availability cluster (master-slave mode)

Cluster Architecture Diagram

(image omitted)

Explanation

1. Prepare two nginx servers, one as the main server and one as the backup server

2. The IP addresses of the two nginx servers can be configured as you like; they need not match mine (check with the ifconfig command)

3. Install keepalived to maintain communication between the master and the backup

4. Provide a single unified access IP (virtual IP, or VIP) to the outside world

schematic diagram

(image omitted)

Specific construction steps

Build a high-availability cluster basic environment

Prepare two Linux servers 192.168.198.130 and 192.168.198.131

  1. This can be done by cloning
  2. or by directly copying an existing VM

(image omitted)

On two Linux servers, install and configure Nginx

  1. The steps for installing and configuring Nginx were covered earlier; if you cloned the Linux VM, Nginx is already installed and can be used directly.
  2. Verify the installation by accessing Nginx from Windows via the server's IP.
  3. Because the second Linux VM is a copy, its IP has changed, so the IP address in the cloned VM's nginx.conf must be updated accordingly.

(images omitted)

On two Linux servers, install keepalived

  1. Download the keepalived-2.0.20.tar.gz source package from https://keepalived.org/download.html

(image omitted)

  2. Upload it to the /root directory of both Linux servers

(image omitted)

  3. mkdir /root/keepalived
  4. Extract the file into the specified directory: tar -zxvf keepalived-2.0.20.tar.gz -C ./keepalived
  5. cd /root/keepalived/keepalived-2.0.20
  6. ./configure --sysconf=/etc --prefix=/usr/local

Note: this puts the configuration file under the /etc directory and installs keepalived under /usr/local

  7. make && make install (compile and install)
  8. If successful, keepalived is installed (you can verify it)

Note: keepalived's configuration file is /etc/keepalived/keepalived.conf

The keepalived startup command is /usr/local/sbin/keepalived

Tip: keepalived must be installed on both Linux servers

Complete the high availability cluster configuration

1. Designate one Linux server (e.g. 192.168.198.130) as the Master: vi /etc/keepalived/keepalived.conf

(images omitted)

2. Designate the other Linux server (e.g. 192.168.198.131) as the Backup (backup server): vi /etc/keepalived/keepalived.conf

(images omitted)

3. Start keepalived on both Linux servers: /usr/local/sbin/keepalived

4. Check whether ens33 on the two Linux servers has been bound to the virtual IP 192.168.198.18

(images omitted)

Notes and Details

1. After keepalived starts, the VIP cannot be pinged and you get "ping: sendmsg: Operation not permitted": https://blog.csdn.net/xjuniao/article/details/101793935

2. nginx + keepalived configuration notes and pitfalls to avoid: https://blog.csdn.net/qq_42921396/article/details/123074780

test

1. First, make sure Windows can reach the virtual IP 192.168.198.18

(image omitted)

2. Access nginx as shown in the figure

(image omitted)

Explanation: as you can see, because 192.168.198.130 is the Master with the higher priority, the Nginx on 192.168.198.130 is the one accessed, and it still supports load balancing.

3. Stop the keepalived service on 192.168.198.130 (or simply shut down the 192.168.198.130 host), then visit http://192.168.198.18/search/cal.jsp again. The virtual IP binding drifts and is bound to the 192.168.198.131 Backup server; the access result is shown in the figure. Here we simply stop keepalived on the 192.168.198.130 Master to test.

(images omitted)

Automatically detect Nginx anomalies and terminate keepalived

Implementation steps

1. Write a shell script: vi /etc/keepalived/ch_nginx.sh. Brief description: the script below counts the output lines of ps -C nginx --no-header; if the count is 0, nginx has terminated abnormally, so it executes killall keepalived

#!/bin/bash
# If no nginx processes remain, stop keepalived so the VIP fails over
num=$(ps -C nginx --no-header | wc -l)
if [ "$num" -eq 0 ]; then
    killall keepalived
fi

2. Make ch_nginx.sh executable

chmod 755 ch_nginx.sh

3. Modify the configuration file on the 192.168.198.130 Master

Command: vi /etc/keepalived/keepalived.conf

(image omitted)
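The omitted screenshot's exact content is not recoverable, but the usual way to hook such a health-check script into keepalived is a vrrp_script block tracked by the vrrp_instance. The directive names vrrp_script and track_script are standard keepalived configuration; the block name chk_nginx and the interval are illustrative:

```
vrrp_script chk_nginx {
    script "/etc/keepalived/ch_nginx.sh"  # the health-check script written above
    interval 2                            # run the check every 2 seconds
}

vrrp_instance VI_1 {
    # ... existing MASTER settings ...
    track_script {
        chk_nginx                         # evaluate the script for this instance
    }
}
```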

4. Restart keepalived on the 192.168.198.130 Master. Because the Master has the higher priority, it will win the contention and bind the VIP first.

(images omitted)

5. Manually stop Nginx on the 192.168.198.130 Master

(image omitted)

Note that keepalived is also terminated

(image omitted)

6. Visit nginx again and find that the virtual IP 192.168.198.18 is bound to the backup server 192.168.198.131.

(image omitted)

Precautions

The keepalived vrrp_script script is not executed: troubleshooting

- Watch the logs

tail -f /var/log/messages

- restart keepalived

systemctl restart keepalived.service

– Note: sometimes the script file cannot be found; renaming the executed script (e.g. removing the underscore from its name) has fixed this.

  1. If you configure a script to periodically check for Nginx failures, start nginx first and then keepalived; if they are started together, keepalived will be killed right away
  2. Reminder: you will run into all kinds of problems while configuring; just troubleshoot them one by one

Detailed configuration file keepalived.conf

# Only the places that need modification are annotated here
global_defs {
    notification_email {
        [email protected]  # address that receives notifications
    }
    notification_email_from [email protected]  # mailbox that sends the notifications

    smtp_server 192.168.200.1  # SMTP server address
    smtp_connect_timeout 30

    router_id Node132  # Node132 identifies this host
    vrrp_skip_check_adv_addr

    #vrrp_strict  # keep this commented out, otherwise the virtual IP cannot be pinged

    vrrp_garp_interval 0
    vrrp_gna_interval 0
}
vrrp_instance VI_1 {

    state MASTER  # MASTER on the main node, BACKUP on the backup node

    interface ens33  # network interface name

    virtual_router_id 51  # VRRP group ID; must be identical on both nodes (same VRRP group)

    priority 100  # priority of the main node (1-254); the backup node must have a lower priority

    advert_int 1  # multicast advertisement interval; must be identical on both nodes

    authentication {  # authentication info; must be identical on both nodes
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {  # the virtual IP; must be identical on both nodes

        192.168.200.16
    }
}

Origin blog.csdn.net/apple_67445472/article/details/131142526