Architect Explained: Nginx Architecture

Introduction: Nginx is a high-performance web server and reverse proxy. Despite fierce competition in the web server market, Nginx has maintained strong momentum and rose quickly as a latecomer, a success that owes much to its architectural design.

1. Nginx modular design

A highly modular design is the architectural foundation of Nginx. The server is decomposed into multiple functional modules, each responsible only for its own function, strictly following the principle of "high cohesion, low coupling".

(Figure: Nginx's module structure: core module, standard and optional HTTP modules, mail module, third-party modules)

  • Core module

The core module is indispensable to the normal operation of the Nginx server, providing core functions such as error logging, configuration file parsing, the event-driven mechanism, and process management.

  • Standard HTTP module

The standard HTTP module provides functions related to HTTP protocol handling, such as port configuration, web page encoding settings, and HTTP response header settings.

  • Optional HTTP module

The optional HTTP modules mainly extend the standard HTTP functionality, allowing Nginx to handle special services such as Flash multimedia streaming, GeoIP request resolution, and SSL support.

  • Mail service module

The mail service module mainly supports Nginx's mail service, including support for the POP3, IMAP, and SMTP protocols.

  • Third-party modules

Third-party modules extend the Nginx server with developer-defined functionality, such as JSON support and Lua scripting support.

2. Nginx request processing method

Nginx is a high-performance web server capable of handling a large number of concurrent requests. It does this by combining a multi-process model with an asynchronous, non-blocking I/O mechanism. The following sections introduce both.

  • Multi-process

Nginx runs one master process that forks a set of worker processes; a worker process handles each client connection and interacts with the client until the connection closes. The advantage of using processes is that they are independent of one another and need no locking between them, which avoids the performance cost of locks, reduces programming complexity, and lowers development cost. Independent processes also isolate failures: if one worker exits abnormally, the others keep working, and the master quickly starts a new worker, so the service is not interrupted and risk is kept to a minimum. The disadvantage is that creating a child process requires the operating system to copy memory and perform other setup, which costs resources and time, and under a very large number of requests this overhead can degrade system performance.

  • Asynchronous non-blocking

Each worker process uses asynchronous, non-blocking I/O and can handle multiple client requests. When a worker receives a request, it issues an I/O call; if the result is not immediately available, the worker moves on to other requests instead of waiting (non-blocking). The client likewise does not block waiting for the response and can do other work in the meantime (asynchronous). When the I/O call completes, the worker is notified; it then temporarily suspends whatever it is processing to respond to that client request.
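A worker's non-blocking event loop can be sketched in a few lines of Python. This is illustrative only: nginx itself is written in C, and the `serve_ready_events` helper and the echo behaviour are invented for this sketch.

```python
import selectors
import socket

def serve_ready_events(sel):
    """Process whatever I/O is ready right now, then return (never block)."""
    for key, _ in sel.select(timeout=0.5):
        conn = key.fileobj
        data = conn.recv(1024)          # socket is ready, so recv won't block
        if data:
            conn.sendall(b"echo:" + data)

def demo():
    sel = selectors.DefaultSelector()
    # socketpair stands in for an accepted client connection
    server_side, client_side = socket.socketpair()
    server_side.setblocking(False)
    sel.register(server_side, selectors.EVENT_READ)

    client_side.sendall(b"hello")       # client writes a request
    serve_ready_events(sel)             # worker handles it once readable
    reply = client_side.recv(1024)

    sel.close()
    server_side.close()
    client_side.close()
    return reply

if __name__ == "__main__":
    print(demo())
```

A real worker registers many connections in the selector and loops forever; between `select()` calls it is free to work on other requests, which is exactly the non-blocking behaviour described above.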

3. Nginx event-driven model

In Nginx's asynchronous non-blocking mechanism, a worker process moves on to other requests after issuing an I/O call and is notified when the call completes. Such notification is implemented by the event-driven model of the Nginx server.

(Figure: Nginx's event-driven model: event collector, event sender, event handler)

As shown in the figure above, Nginx's event-driven model consists of three basic units: the event collector, the event sender, and the event handler. The event collector gathers the worker process's various I/O requests, the event sender dispatches I/O events to the event handler, and the event handler responds to each event.

The event sender places each request into a list of pending events and calls the event handler to process it using non-blocking I/O. This approach is known as I/O multiplexing, and there are three common models: select, poll, and epoll.
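A minimal look at the epoll model (Linux-only), the one Nginx prefers when available; select and poll expose the same readiness idea through older APIs. The pipe here is just a convenient file descriptor to watch:

```python
import os
import select

# Register a file descriptor with epoll, then ask which fds are ready.
r, w = os.pipe()
ep = select.epoll()
ep.register(r, select.EPOLLIN)

assert ep.poll(timeout=0) == []      # nothing readable yet
os.write(w, b"event")                # now there is data to read
events = ep.poll(timeout=1)          # epoll reports fd r as ready
ready_fds = [fd for fd, _ in events]
print(ready_fds == [r])

ep.unregister(r)
ep.close()
os.close(r)
os.close(w)
```

The key property for a server: one `poll()` call reports readiness for thousands of registered descriptors, and its cost does not grow with the number of idle connections, which is why epoll scales better than select or poll.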


4. Nginx design architecture

The Nginx server uses a master/worker multi-process model. The startup and execution flow is as follows: after the main program starts, the master process receives and processes external signals in a for loop; it creates child processes with the fork() function, and each child process runs its own for loop to receive and handle events for the Nginx server.
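The startup flow above can be sketched with fork(). This is a Python toy, not nginx's C code: the worker body is reduced to an immediate exit, where a real worker would run its event loop.

```python
import os

def start_workers(n):
    """Master forks n workers; each child would run its own event loop."""
    pids = []
    for worker_id in range(n):
        pid = os.fork()
        if pid == 0:
            # Child process: a real worker loops on events here.
            os._exit(worker_id)      # exit code stands in for the work done
        pids.append(pid)
    return pids

def master(n=3):
    pids = start_workers(n)
    # Master waits for its workers; a real master also watches signals
    # and re-forks any worker that dies unexpectedly.
    statuses = [os.waitpid(pid, 0)[1] for pid in pids]
    return [os.waitstatus_to_exitcode(s) for s in statuses]

if __name__ == "__main__":
    print(master())
```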

It is generally recommended to make the number of worker processes equal to the number of CPU cores, so that there is no large amount of child-process creation and management work, and competition between processes for CPU resources and the overhead of process switching are avoided. Moreover, to make better use of multi-core CPUs, Nginx provides a CPU-affinity binding option: a worker process can be bound to a specific core, so that its CPU caches are not invalidated by process switching.
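In nginx.conf this advice corresponds to two real directives; `auto` (supported for affinity since nginx 1.9.10) lets nginx match the worker count to the cores and bind each worker to its own core:

```nginx
worker_processes     auto;   # one worker per CPU core
worker_cpu_affinity  auto;   # pin each worker to its own core
```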

Each request is handled by exactly one worker process. All worker processes are forked from the master: the master first creates the listening socket (listenfd) and then forks the workers, so every worker inherits listenfd. When a new connection arrives, listenfd becomes readable in all workers. To ensure that only one process handles the connection, every worker tries to grab the accept_mutex before registering a read event on listenfd; only the worker that acquires the mutex registers the event, and in its read-event handler it calls accept to accept the connection. That worker then reads, parses, and processes the request, generates the response, returns it to the client, and finally closes the connection. A request is therefore handled completely, and only, within a single worker process.
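The accept_mutex "try-lock before registering the read event" idea can be sketched with threads standing in for worker processes. This is illustrative only: nginx uses a shared-memory lock across real processes, and the names here are invented.

```python
import socket
import threading

# All "workers" see the same listening socket, but only the one that
# wins the non-blocking mutex grab actually calls accept().
listen_sock = socket.socket()
listen_sock.bind(("127.0.0.1", 0))
listen_sock.listen()

accept_mutex = threading.Lock()
winners = []

def worker(worker_id):
    if accept_mutex.acquire(blocking=False):   # trylock, like accept_mutex
        conn, _ = listen_sock.accept()         # only the winner accepts
        winners.append(worker_id)
        conn.close()
        accept_mutex.release()
    # Losers simply return to their event loops without accepting.

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()

# A client connects; exactly one worker handles the connection.
client = socket.socket()
client.connect(listen_sock.getsockname())
for t in threads:
    t.join()
print(len(winners))

client.close()
listen_sock.close()
```

The point of the mutex is to avoid the "thundering herd": without it, every worker would wake up on the new connection and all but one would have called accept in vain.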

(Figure: Nginx's master/worker process model and inter-process channels)

During the operation of the Nginx server, the master process and the worker processes need to interact. This interaction relies on channels implemented with sockets.

  • Master-Worker interaction

This channel is different from an ordinary pipe: it is a one-way channel from the master process to a worker process, carrying the instructions the master sends to the worker, the worker process ID, and so on. At the same time, the master communicates with the outside world through signals, and each child process can receive signals and handle the corresponding events.
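A socket-based master-to-worker channel can be sketched with socketpair() and fork(). The `CMD:reopen-logs` payload is made up for illustration; real nginx channel messages carry commands and worker metadata in a C struct.

```python
import os
import socket

# A bidirectional socket pair, used here one-way: master -> worker.
master_end, worker_end = socket.socketpair()

pid = os.fork()
if pid == 0:
    # Worker: read one instruction from the channel and act on it.
    msg = worker_end.recv(64)
    os._exit(0 if msg == b"CMD:reopen-logs" else 1)

# Master: send an instruction down the channel, then wait for the worker.
master_end.sendall(b"CMD:reopen-logs")
_, status = os.waitpid(pid, 0)
print(os.waitstatus_to_exitcode(status))
```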

  • Worker-worker interaction

This interaction works much like master-worker interaction, but it goes through the master process. Worker processes are isolated from one another, so when worker process W1 needs to send an instruction to worker process W2, it first finds W2's process ID, then writes the instruction into the channel pointing to W2; W2 receives the message and takes the appropriate action.

5. Summary

Through this article, we have gained an overall understanding of the Nginx server's architecture, including its modular design, its multi-process and asynchronous non-blocking request handling, and its event-driven model. This theoretical background will be a great help when studying the Nginx source code later; reading the source is also recommended as the best way to understand Nginx's design ideas.
