Why can Nginx easily handle tens of thousands of concurrent connections?

Nginx is known for its high performance, stability, rich feature set, simple configuration, and low resource consumption. This article looks at why Nginx is so fast.

Nginx process model


While an Nginx server is running normally:


  • Multi-process: one Master process, multiple Worker processes.

  • Master process: manages the Worker processes. External interface: receives external operations (signals). Internal forwarding: manages the Workers via signals, according to the external operation received. Monitoring: watches the running state of the Worker processes and automatically restarts any Worker process that terminates abnormally (a minimal supervision sketch follows this list).

  • Worker process: all Worker processes are equal. Actual processing: network requests are handled by the Worker processes. Number of Workers: configured in nginx.conf, generally set to the number of CPU cores so that CPU resources are fully used while avoiding an excessive number of processes, which would cause competition for the CPU and extra context-switching overhead.
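Below is a minimal Python sketch of this master/worker supervision idea, assuming a Unix-like system; it illustrates the pattern only and is not Nginx's actual implementation (the worker body is a placeholder):

```python
import os
import signal
import sys

WORKER_COUNT = 4  # hypothetical value; typically the number of CPU cores


def worker_loop():
    # Stand-in for the real work (accepting and handling connections).
    signal.pause()  # block until a signal arrives


def spawn_worker():
    pid = os.fork()
    if pid == 0:      # child: run as a worker
        worker_loop()
        sys.exit(0)
    return pid        # parent (master): remember the worker's pid


if __name__ == "__main__":
    workers = {spawn_worker() for _ in range(WORKER_COUNT)}
    while True:
        # The master waits for any child to exit; a dead worker is replaced,
        # mirroring Nginx's automatic restart of abnormally terminated workers.
        pid, _status = os.wait()
        if pid in workers:
            workers.discard(pid)
            workers.add(spawn_worker())
```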


HTTP connection establishment and request processing

The flow of HTTP connection establishment and request processing is as follows:

  • When Nginx starts, the Master process loads the configuration file.

  • The Master process initializes the listening socket.

  • The Master process forks multiple Worker processes.

  • The Worker processes compete for each new connection; the winning Worker establishes the socket connection through the three-way handshake and processes the request (a minimal sketch follows this list).
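To make the flow concrete, here is a hypothetical Python sketch (not Nginx source) of the key point: the listening socket is created by the master before fork, so every worker inherits it and the workers compete on accept(). The address, port, and response are illustrative only:

```python
import os
import socket

# Master: load configuration (omitted) and initialize the listening socket.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 8080))
listener.listen(128)

for _ in range(4):                # master forks the Worker processes
    if os.fork() == 0:            # child: Worker process
        while True:
            # Workers compete to accept new connections on the shared socket;
            # the kernel completes the three-way handshake before accept() returns.
            conn, _addr = listener.accept()
            conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
            conn.close()

# Master: wait on the workers (monitoring omitted for brevity).
for _ in range(4):
    os.wait()
```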

Nginx high performance, high concurrency

Why does Nginx achieve high performance and support high concurrency?

  • Nginx uses a multi-process + asynchronous non-blocking model (IO multiplexing with epoll).

  • The complete life cycle of a request: establish the connection → read the request → parse the request → process the request → respond to the request.

  • At the bottom layer, this entire request flow corresponds to read and write events on sockets (see the event-loop sketch below).
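The heart of this model is an event loop in which one thread waits on many sockets at once. Here is a rough Python sketch using the standard selectors module (which wraps epoll on Linux and kqueue on BSD); it illustrates the idea and is not Nginx's code, and the port and responses are assumptions:

```python
import selectors
import socket

sel = selectors.DefaultSelector()   # epoll on Linux, kqueue on BSD/macOS

listener = socket.socket()
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 8080))    # hypothetical port
listener.listen()
listener.setblocking(False)
sel.register(listener, selectors.EVENT_READ)

while True:
    # One thread blocks here until any registered socket is ready,
    # then handles only the sockets that actually have events.
    for key, _events in sel.select():
        sock = key.fileobj
        if sock is listener:
            conn, _ = listener.accept()      # a new connection is ready
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)
        else:
            data = sock.recv(4096)           # a read event on a client socket
            if data:
                # Simplified: assumes the reply fits into the send buffer.
                sock.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")
            else:
                sel.unregister(sock)
                sock.close()
```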

Nginx event processing model

Request: here, an HTTP request as handled by Nginx.


The basic working mode of an HTTP web server:

  • Receive the request: read the request line and request headers line by line; if the headers indicate a request body, read the request body as well.

  • Process the request.

  • Return the response: generate the corresponding HTTP response (status line, response headers, response body) from the processing result.


Nginx follows this same routine; the overall flow is the same.
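Here is a minimal, hypothetical Python sketch of that read–process–respond cycle, assuming rfile and wfile are buffered binary file objects wrapping the client socket (for example created with socket.makefile()); it is an illustration of the routine, not Nginx's parser:

```python
def handle(rfile, wfile):
    # 1) Receive the request: read the request line and headers line by line.
    request_line = rfile.readline().decode().rstrip("\r\n")
    method, path, _version = request_line.split(" ", 2)

    headers = {}
    while (line := rfile.readline().decode().rstrip("\r\n")):
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()

    # If the headers indicate a request body, read it as well.
    body = rfile.read(int(headers.get("content-length", 0)))

    # 2) Process the request (placeholder logic).
    payload = f"{method} {path}: {len(body)} body bytes".encode()

    # 3) Return the response: status line, response headers, response body.
    wfile.write(b"HTTP/1.1 200 OK\r\n")
    wfile.write(b"Content-Length: " + str(len(payload)).encode() + b"\r\n\r\n")
    wfile.write(payload)
    wfile.flush()
```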

Modular architecture


According to their function, Nginx modules can be divided into the following types:


① event module: builds an operating-system-independent framework for event processing and provides handlers for specific events. It includes ngx_events_module, ngx_event_core_module, ngx_epoll_module, and others.

Which event processing module Nginx uses depends on the specific operating system and compilation options.

② Phase handler: this type of module is also called the handler module. It is mainly responsible for processing client requests and generating the content of the response. For example, the ngx_http_static_module module handles clients' static page requests and prepares the corresponding disk file to be output as the response content.

③ output filter: also known as the filter module, it is mainly responsible for processing the output and can modify it.

For example, a filter can add a predefined footer to every output HTML page, or rewrite the URLs of output images.

④ upstream: the upstream module implements reverse proxying; it forwards the request to a back-end server, reads the response from that back-end server, and sends it back to the client.

The upstream module is a special kind of handler, except that the response content is not generated by itself but read from a back-end server.

⑤ load-balancer: the load balancing module implements a specific algorithm to choose, among the many back-end servers, the one that will receive a given request (a toy example follows).
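As an illustration only (Nginx ships several algorithms, round-robin being the default), a toy round-robin selector could look like this; the back-end addresses are made up:

```python
import itertools


class RoundRobinBalancer:
    """Toy round-robin selection over a fixed list of back-end servers."""

    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def pick(self):
        # Each call returns the next back-end in turn.
        return next(self._cycle)


balancer = RoundRobinBalancer(["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"])
print([balancer.pick() for _ in range(5)])
# ['10.0.0.1:8080', '10.0.0.2:8080', '10.0.0.3:8080', '10.0.0.1:8080', '10.0.0.2:8080']
```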

Analysis of common problems

Nginx vs Apache

Nginx:

  • IO multiplexing with epoll (kqueue on FreeBSD)

  • High performance

  • High concurrency

  • Uses few system resources


Apache:

  • Blocking + multi-process/multi-thread

  • More stable, fewer bugs

  • More modules

Scenarios:

When handling multiple requests, you can use either IO multiplexing or blocking IO + multithreading:

1. IO multiplexing: a single thread tracks the state of multiple sockets and reads/writes whichever one becomes ready;

2. Blocking IO + multithreading: create a new service thread for each request.

Question: which scenarios are IO multiplexing and multithreading each suited to?

IO multiplexing: has no advantage in the processing speed of a single connection; it is event-driven and suited to IO-intensive scenarios:

  • High concurrency: a single thread handles a large number of concurrent requests, which reduces context-switching overhead; there are no inter-thread concurrency issues to handle, so relatively more requests can be processed;

  • Consumes fewer system resources (no thread-scheduling overhead);

  • Suited to long-lived connections (with long-lived connections, the multi-threaded model easily ends up with too many threads and frequent scheduling).

Blocking IO + multithreading: simple to implement and does not depend on multiplexing system calls such as epoll; suited to CPU-intensive scenarios (a minimal sketch follows this list):

  • Each thread needs time and space;

  • As the number of threads increases, the thread-scheduling overhead increases sharply.
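For contrast with the event-loop sketch earlier, here is a minimal blocking IO + thread-per-connection server in Python; the port and response are illustrative assumptions:

```python
import socket
import threading


def serve_client(conn):
    # Blocking read/write: this thread is dedicated to one connection
    # for that connection's entire lifetime.
    data = conn.recv(4096)
    if data:
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")
    conn.close()


listener = socket.socket()
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 8081))   # hypothetical port
listener.listen()

while True:
    conn, _addr = listener.accept()
    # One new service thread per connection: simple, but thread count and
    # scheduling overhead grow with the number of concurrent connections.
    threading.Thread(target=serve_client, args=(conn,), daemon=True).start()
```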


Nginx maximum number of connections

Basic background:


  • Nginx is a multi-process model, and the Worker process is used to process requests.

  • The number of connections (file descriptors, fd) a single process can hold has an upper limit (nofile), which can be checked with ulimit -n.

  • The maximum number of connections for a single Worker process is configured in Nginx with worker_connections; its upper limit is nofile.

  • The number of Worker processes is configured in Nginx with worker_processes.

Therefore, the maximum number of connections for Nginx:

  • Maximum number of connections for Nginx: number of Worker processes × maximum number of connections per Worker process.

  • The above is the maximum number of connections when Nginx is used as a general server.

  • When Nginx is used as a reverse proxy server, the maximum number of connections it can serve: (number of Worker processes × maximum number of connections per Worker process) / 2.

  • When Nginx acts as a reverse proxy, each proxied request holds both a connection to the client and a connection to the back-end web server, occupying 2 connections.
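As a purely illustrative worked example (the values are assumptions, not recommendations): with worker_processes 4 and worker_connections 10240, Nginx as an ordinary server could hold up to 4 × 10240 = 40960 connections, while the same configuration used as a reverse proxy could serve at most 40960 / 2 = 20480 concurrent requests, since each proxied request occupies both a client-side and an upstream connection.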

Concurrent processing capabilities of Nginx

Regarding Nginx's concurrent processing capacity: after typical optimization, the peak number of concurrent connections can be maintained at roughly 10,000 to 30,000. (With different amounts of memory and numbers of CPU cores, there is room for further optimization.)



Origin: blog.csdn.net/bjmsb/article/details/108503103