Nginx process model and event handling mechanism

1 Nginx process model

Nginx has two kinds of processes: a master process and worker processes. The master process manages the worker processes; this management includes:

  • Receiving signals from the outside world and forwarding them to each worker process;
  • Monitoring the running status of the worker processes: when a worker process exits abnormally, the master automatically forks a new worker process. The worker processes themselves handle user requests.

The number of worker processes can be configured in the core configuration file:

## ./conf/nginx.conf
worker_processes  6;

Check the nginx processes:
# ps -ef|grep nginx

The administrator sends instructions (stop/reload, etc.) to the master process, and the master distributes the instructions to each worker process for execution.
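For example (a minimal sketch; the pid file path is an assumption and depends on how nginx was built or configured), instructions can be sent either through the nginx binary or by signalling the master process directly:

# Ask the master process to reload the configuration
nginx -s reload
# Ask the master process to shut down gracefully
nginx -s quit

# Equivalent: send the corresponding signal straight to the master PID
# (the pid file path shown here is only an example)
kill -HUP  $(cat /usr/local/nginx/logs/nginx.pid)   # reload configuration
kill -QUIT $(cat /usr/local/nginx/logs/nginx.pid)   # graceful shutdown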

  • Worker processes compete for requests from clients. When a worker receives a shutdown instruction, it closes only after the requests it is currently processing are finished and their connections are released.
  • Worker processes are independent of each other, which makes them safer than threads: a failure in one worker does not bring down the others.
  • A request is processed entirely within one worker process, and a worker process cannot take over requests belonging to other workers. Keep the number of worker processes consistent with the number of CPU cores on the machine to reduce process context switching, CPU contention, and cache invalidation.

2 Worker preemption mechanism

When a client request comes in, the worker processes compete for the accept_mutex lock on the connection.

The meaning of accept_mutex: when a new connection arrives, if accept_mutex is enabled, the workers handle it serially: only one worker is woken up while the others stay asleep. If accept_mutex is disabled, all workers are woken up, but only one of them can accept the new connection and the rest go back to sleep. This is the "thundering herd" problem.

For Nginx, worker_processes is generally set to the number of CPUs, so there are at most a few dozen workers. Even if a thundering herd does occur, its impact is relatively small.

Nginx historically enabled accept_mutex by default (since version 1.11.3 the default is off), which is a conservative choice. Turning it off may cause a certain degree of thundering herd, visible as increased context switching (sar -w) or higher load, but if your site handles a large volume of traffic, it is still recommended to turn it off for the sake of system throughput.
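A minimal sketch of how accept_mutex is toggled in the events block (the delay value shown is simply the documented default, not a tuning recommendation):

events {
    # Serialize accept(): only one worker is woken up per new connection
    accept_mutex on;
    # How long an idle worker waits before trying to take the lock again
    accept_mutex_delay 500ms;
}

Setting accept_mutex off hands the wake-up decision back to the kernel, which is what the paragraph above recommends for high-traffic sites.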

3 Nginx event handling mechanism

Traditional servers (such as Apache) handle events with a synchronous, blocking I/O mechanism. When a client request comes in, worker1 processes it; during that time worker1 is blocked and cannot handle other requests. For high-concurrency scenarios, a large number of workers therefore have to be spawned.

Nginx uses the Linux epoll model, which is asynchronous and non-blocking. A single worker can handle tens of thousands of requests at the same time, limited mainly by CPU and memory. When multiple clients send requests to worker1 and, say, client1's request blocks on I/O, worker1 can still process the other clients' requests thanks to the asynchronous, non-blocking mechanism.


events {
    # On Linux, use the epoll event model
    use epoll;
    # Maximum number of client connections allowed per worker process
    worker_connections  1024;
}
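As a rough, back-of-the-envelope sketch (the numbers are illustrative, and proxying can consume two descriptors per client connection): the upper bound on concurrent client connections is roughly worker_processes × worker_connections, and each worker must also be allowed to open that many file descriptors.

# Rough capacity estimate with the sample values used in this article:
#   6 workers x 1024 connections ≈ 6144 concurrent client connections at most
worker_processes  6;
# Raise the per-worker open-file limit so worker_connections can actually be reached
worker_rlimit_nofile  10240;

events {
    worker_connections  1024;
}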

Why try to keep the number of worker processes consistent with the number of CPU cores?

The number of requests a worker process can handle concurrently is limited mainly by the amount of memory. In terms of architecture, there is almost no synchronization-lock contention between worker processes when handling concurrent requests, and a worker process normally does not enter a sleep state. Therefore, when the number of worker processes equals the number of CPU cores (ideally with each worker process bound to a specific core), process context switching is kept to a minimum, and problems such as worker processes preempting each other's CPU time and cache invalidation are avoided.

When worker_processes is set to auto, nginx automatically detects the number of CPU cores on the host and starts a matching number of worker processes.

To bind each worker process to a specific CPU core, use the worker_cpu_affinity directive, as sketched below.
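A minimal sketch (the bitmasks assume a 4-core machine; worker_cpu_affinity auto, available since nginx 1.9.10, lets nginx pick the binding itself):

# Start one worker per core and pin each worker to its own CPU core
worker_processes  4;
worker_cpu_affinity 0001 0010 0100 1000;

# Or let nginx determine both values automatically
# worker_processes  auto;
# worker_cpu_affinity auto;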

the end


Origin blog.csdn.net/LIZHONGPING00/article/details/112404967