Nginx request processing flow

Because Nginx runs at the outermost edge of the enterprise network, the traffic it handles is several times, and often several orders of magnitude, larger than what the application servers behind it handle. As we know, the same problem at a different order of magnitude calls for a completely different solution, so in Nginx's scenarios every problem is magnified. That is why we need to understand why Nginx uses a master-worker architecture, why the number of worker processes should match the number of CPU cores, and why, when worker processes need to share data, scenarios such as TLS session caching or rate limiting each share it in a somewhat different way. All of this requires a clear understanding of Nginx's overall framework.
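To make these points concrete, here is a minimal configuration sketch (zone names and sizes are hypothetical) showing one worker per CPU core and the shared-memory zones that let workers share rate-limit counters and TLS session state:

```nginx
worker_processes auto;   # spawn one worker process per CPU core

http {
    # rate limiting: counters live in a shared-memory zone
    # visible to every worker process
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    # TLS session cache shared across all worker processes
    ssl_session_cache shared:SSL:10m;
}
```

`worker_processes auto`, `limit_req_zone`, and `ssl_session_cache shared:` are the standard directives for these three cases; each shared zone is a named region of shared memory sized explicitly (here 10 MB).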

Let us first look at the Nginx request processing flow.

Why look at the Nginx request processing flow at all? Because we already know that Nginx records access logs and error logs, serves static resources, and can act as a reverse proxy. So how exactly does Nginx handle these requests internally, and what components does that involve?

(Diagram: the Nginx request processing flow, with WEB, EMAIL and TCP traffic entering on the left.)

At the far left of this picture are WEB, EMAIL and TCP: the three basic kinds of traffic that enter Nginx. Correspondingly, Nginx has three large state machines: a transport-layer state machine handling TCP/UDP, an application-layer HTTP state machine, and a MAIL state machine that processes mail.

So why do we call them state machines? Because the core of Nginx, the big green box in the diagram, is driven by a non-blocking event processing engine; as we know, on Linux this is epoll. Once we use such an asynchronous engine, we usually need state machines to identify and handle each request correctly.
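The event engine mentioned above is selected in the `events` block; a minimal sketch (the connection limit is an arbitrary example value):

```nginx
events {
    use epoll;                 # non-blocking event engine on Linux
    worker_connections 1024;   # connections each worker can drive concurrently
}
```

Nginx normally picks the best available engine automatically, so `use epoll` is usually implicit on Linux; it is shown here only to make the mechanism visible.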

Based on this event-driven handler, when a parsed request needs to access a static resource, look at the arrow at the bottom left: Nginx finds the static resource on disk. When doing reverse proxying, the proxied content can also be cached to disk, which is the same lower-left line. But when serving static resources, there is a problem: when memory is not sufficient to cache all the files, calls such as sendfile can degrade into blocking disk calls (or AIO is used instead), so a thread pool is needed to handle them. And for every request processed, Nginx writes an entry to the access log or error log.
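These pieces of the flow, static file serving, the disk cache for proxied content, the thread pool for blocking disk I/O, and the logs, each map to a directive. A sketch with hypothetical paths (note that `aio threads` requires an Nginx build with `--with-threads`):

```nginx
http {
    sendfile on;    # efficient in-kernel static file serving
    aio threads;    # offload blocking disk reads to a thread pool

    # disk cache for reverse-proxied content
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=proxied:10m;

    access_log /var/log/nginx/access.log;
    error_log  /var/log/nginx/error.log;
}
```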

The logs go to disk, although we can also ship them to another machine via the syslog protocol. More often, since Nginx is used as a load balancer or reverse proxy, we can forward a request to upstream servers over the same protocol (HTTP, Mail, or stream/TCP), or use an application-layer proxy protocol (FastCGI, uWSGI, SCGI, memcached) to reach the corresponding application server. This is the Nginx request processing flow.
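A minimal sketch of these last two steps, remote logging over syslog and forwarding to upstreams; the addresses are hypothetical, and the `location` blocks would sit inside a `server` block:

```nginx
# ship access logs to a remote collector over the syslog protocol
access_log syslog:server=192.168.1.10:514;

location /app/ {
    # same-protocol (HTTP) reverse proxy to an upstream server
    proxy_pass http://127.0.0.1:8080;
}

location ~ \.php$ {
    # application-layer proxy to a FastCGI application server
    fastcgi_pass 127.0.0.1:9000;
    include fastcgi_params;
}
```

`proxy_pass`, `fastcgi_pass`, `uwsgi_pass`, `scgi_pass`, and `memcached_pass` cover the protocols listed above; the stream (TCP/UDP) and mail cases use `proxy_pass` inside `stream {}` and `mail {}` blocks respectively.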


Origin: www.cnblogs.com/momenglin/p/11923452.html