Nginx architecture and workflow

     NGINX is a free, open-source, high-performance HTTP server and reverse proxy, as well as an IMAP/POP3 proxy server. It is famous for its high performance, stability, rich feature set, simple configuration, and low resource consumption, and it was originally written to solve the C10K problem (ten thousand concurrent connections on a single server). This article describes Nginx's architecture and workflow.

First, the Nginx architecture

 

 

1. When nginx starts, it launches one master process and multiple worker processes (the number of worker processes is configurable and is usually set equal to the number of CPU cores on the machine). The master process is responsible for managing the worker processes: it receives external signals, forwards signals to each worker process, and monitors the workers' running state.

2. The worker processes handle the basic network events. The workers are independent peers of one another, and they compete with each other for client requests.

3. nginx therefore combines a multi-process model with an event-driven, asynchronous, non-blocking IO model.
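The master/worker pattern from point 1 can be illustrated with a minimal Python sketch (nginx itself is written in C; `master` and `n_workers` here are hypothetical names for illustration). The parent forks the workers up front and then supervises them; in real nginx the master would also restart a worker that dies.

```python
import os

def master(n_workers):
    """Minimal sketch of the master/worker pattern: the master forks
    n_workers children, then supervises (here: just reaps) them."""
    pids = []
    for _ in range(n_workers):
        pid = os.fork()
        if pid == 0:
            # Child: a real worker would enter its event loop here.
            os._exit(0)
        pids.append(pid)
    # Master: wait for each worker; nginx's master would fork a
    # replacement if a worker exited unexpectedly.
    reaped = [os.waitpid(p, 0)[0] for p in pids]
    return reaped
```

Creating all workers from one parent is what lets them share the listen socket, which matters for the accept logic described later in this article.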

Second, the advantages of nginx's multi-process + asynchronous non-blocking IO model

1. The processes are independent of each other: if one process fails, the other processes are unaffected and continue serving requests, which ensures the stability of the service.

2. Resources are isolated between independent processes, which avoids a lot of unnecessary locking and improves processing efficiency.

3. It avoids the context-switching overhead common to multi-threaded models. Although a multi-process model on its own would support fewer concurrent connections, asynchronous non-blocking IO solves that problem.
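The asynchronous non-blocking IO mentioned in point 3 means each worker multiplexes many connections in a single event loop (epoll on Linux). A small sketch using Python's `selectors` module, demonstrated on an in-memory socket pair rather than a real server:

```python
import selectors
import socket

def run_once(sel):
    """One pass of an event loop: wait for ready descriptors and
    dispatch the callback registered for each one."""
    for key, _mask in sel.select(timeout=1.0):
        key.data(key.fileobj)  # key.data holds the callback we registered

# Demo: a non-blocking socket pair stands in for client connections.
a, b = socket.socketpair()
a.setblocking(False)
b.setblocking(False)

received = []
sel = selectors.DefaultSelector()  # uses epoll on Linux
sel.register(b, selectors.EVENT_READ, lambda s: received.append(s.recv(1024)))

a.send(b"hello")
run_once(sel)   # b is now readable, so the callback fires
sel.close()
```

One such loop per worker is how a handful of processes can serve tens of thousands of connections: no thread per connection, and no process ever blocks waiting on a single socket.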

 

Third, how the multiple processes collaborate

1. Multi-process operation can produce the 'thundering herd' problem

          The multi-process mode of operation works as follows.

          When a connection arrives, every worker process could potentially handle it. Why? Each worker is forked from the master process, and in the master process nginx first creates the listen socket and only then forks the workers, so every worker inherits that listen socket (note: these are not separate sockets; each process holds a descriptor for the same IP address and port). When a request arrives, every worker blocked in accept on that socket is woken up, but only one worker succeeds in accepting the request; all the others fail. This is the 'thundering herd' problem.
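The create-then-fork pattern described above can be reproduced in a short Python sketch (an illustration, not nginx's code): the listen socket exists before the fork, so the child accepts connections on the very descriptor the parent created.

```python
import os
import socket

# The listen socket is created BEFORE forking, so the child inherits
# it -- the same pattern nginx uses for its workers.
lsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
lsock.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
lsock.listen(8)
port = lsock.getsockname()[1]

pid = os.fork()
if pid == 0:
    # "Worker": accept one connection on the inherited socket, echo, exit.
    conn, _addr = lsock.accept()
    conn.sendall(conn.recv(64))
    conn.close()
    os._exit(0)

# The parent plays the client here, connecting to the shared port.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"ping")
reply = client.recv(64)
client.close()
os.waitpid(pid, 0)
lsock.close()
```

With several forked workers all blocked in accept on this one socket, a single incoming connection can wake all of them while only one wins it, which is exactly the herd effect nginx has to tame.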

          In addition, this mode leads to another problem: load imbalance between workers. When multiple workers compete for incoming requests, a worker that is more diligent, or simply luckier, may win most of them, leaving the other processes with very few opportunities to work while the busy ones are overloaded, which reduces the server's concurrent performance. So how does Nginx solve these two problems?

2. How Nginx makes the workers cooperate to solve these problems

 

         Inside each worker, the events returned by epoll are sorted into two queues: accept events (new connections) go into the ngx_posted_accept_events queue, and ordinary events (such as reads) go into the ngx_posted_events queue. Each worker takes new connections from the accept queue and processes its own events from the posted queue.

         a. Before a worker may take connection events from the accept queue, it must first acquire a lock (the accept_mutex lock); only the process holding the lock can take connection events from the accept queue. This avoids the 'thundering herd' problem.
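The key point is that the lock attempt is non-blocking: a worker that fails to get accept_mutex simply skips accepting this cycle instead of waiting. A hedged sketch (`try_accept` is a hypothetical name; nginx's lock lives in shared memory and is taken with a trylock):

```python
from multiprocessing import Lock

accept_mutex = Lock()  # stands in for nginx's shared-memory accept_mutex

def try_accept(worker_id, winners):
    """Non-blocking lock attempt: only the winner may accept new
    connections this cycle; losers skip accept and handle their own
    existing events instead of being woken up for nothing."""
    if accept_mutex.acquire(block=False):
        try:
            winners.append(worker_id)  # stands in for accept() on the listen fd
        finally:
            accept_mutex.release()
        return True
    return False
```

Since only the lock holder has the listen socket in its epoll set, a new connection wakes exactly one worker rather than all of them.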

         b. Whether a worker even competes for that lock is determined by a load threshold: while the number of connections a worker is handling is below 7/8 of its total connection budget (worker_connections), it competes for the lock and accepts new connections; once it reaches that threshold, it stops competing and only processes its existing events until its load drops. This solves the load-imbalance problem between workers.
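The 7/8 threshold corresponds to nginx's ngx_accept_disabled counter, computed as connection_n / 8 - free_connection_n; a worker sits out the lock contest while it is positive. A small sketch of that check (`should_compete_for_accept` is a hypothetical name for illustration):

```python
def should_compete_for_accept(worker_connections, used_connections):
    """Sketch of nginx's load check: a worker competes for accept_mutex
    only while it is using less than 7/8 of its connection slots
    (nginx: ngx_accept_disabled = connection_n/8 - free_connection_n,
    and the worker skips the contest while that value is positive)."""
    free_connections = worker_connections - used_connections
    accept_disabled = worker_connections // 8 - free_connections
    return accept_disabled <= 0
```

A lightly loaded worker (e.g. 512 of 1024 slots used) keeps competing, while a nearly saturated one (e.g. 1000 of 1024) backs off, steering new connections toward the idler workers.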

 


Origin www.cnblogs.com/green-technology/p/nginx_first.html