On the principles of nginx event processing

   nginx is a free, open-source, high-performance HTTP server and reverse proxy server. As a relative latecomer among HTTP servers, it improves in many ways on Apache, the long-standing leader in web server software, chiefly in performance: nginx consumes fewer system resources and supports more concurrent connections (especially when serving small static files), achieving higher throughput. Functionally, nginx is not only an excellent web server; it can also act as a reverse proxy and load balancer, comparable to LVS or HAProxy, and as a caching server, comparable to dedicated caching software such as Squid.
   nginx's load-balancing module offers two methods:
   Round-robin, which deals requests out from first to last, like cards in a card game;
   IP hash, which, under a large volume of requests, ensures that requests from the same IP are dispatched to the same backend server.
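These two methods correspond to the upstream module's default round-robin scheduling and its ip_hash directive. A minimal sketch of each (the backend host names are placeholders):

```nginx
# Default scheduling: requests are dealt out round-robin across the servers.
upstream backend {
    server backend1.example.com;
    server backend2.example.com;
}

# With ip_hash, requests from the same client IP always map to the same server.
upstream backend_sticky {
    ip_hash;
    server backend1.example.com;
    server backend2.example.com;
}
```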

   This article focuses on how nginx handles events when providing web service.
   At startup, nginx runs in the background as a daemon and handles connection requests with a multi-process model combined with asynchronous, non-blocking I/O. The model consists of one master process and several worker processes; each worker has a single main thread (avoiding thread-switching overhead). The number of workers is generally set according to the number of CPU cores on the server, for reasons inseparable from nginx's process model and event-handling model; it can also be specified in the configuration file. The master process manages nginx itself and the other worker processes, as shown in the figure:
         ![](https://s1.51cto.com/images/blog/201907/03/64d747a08374bf1b015c2a00e1148244.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
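The worker count mentioned above is set with the worker_processes directive; a minimal configuration sketch (in recent nginx versions the value auto asks nginx to detect the core count itself):

```nginx
# One worker per CPU core; "auto" lets nginx detect the core count.
worker_processes auto;

events {
    # Upper bound on simultaneous connections per worker.
    worker_connections 1024;
}
```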
         So how do a handful of worker processes handle tens of thousands of event requests efficiently?
   This is determined by nginx's event-handling mechanism: the combination of a multi-process model with asynchronous, non-blocking I/O is the key to its efficiency. A purely multi-process model would reduce the number of concurrent connections, a problem the asynchronous non-blocking I/O model solves well; it also avoids the performance loss caused by multithreaded context switching.
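The asynchronous, non-blocking model rests on an OS-level event-notification mechanism (epoll on Linux, kqueue on BSD). nginx normally selects the most efficient method for the platform by itself, but it can be forced in the events block:

```nginx
events {
    # Use epoll (Linux); if omitted, nginx picks the best available method.
    use epoll;
}
```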
   Master process
         Its main job is to manage the worker processes: receiving signals from the outside world, forwarding signals to the workers, monitoring the workers' running state, and automatically starting a new worker when one exits (abnormally).

The master process acts as the interface between the whole process group and the user, and it supervises the worker processes. It does not handle network events and is not responsible for executing business logic; it only manages the worker processes to provide features such as service restart, smooth upgrade, log-file rotation, and configuration reload with immediate effect.
To control nginx, we only need to send signals to the master process with kill. For example, kill -HUP pid tells nginx to restart gracefully. We generally use this signal to restart nginx or to reload its configuration; because the restart is graceful, service is not interrupted. So what does the master process do after receiving a HUP signal?
First, after receiving the signal, the master reloads the configuration file, then starts new worker processes and sends a signal to all the old workers telling them they can retire honorably.
Once the new workers have started, they begin accepting new requests, while the old workers, on receiving the master's signal, stop accepting new requests, finish handling whatever requests are still pending in the current process, and then exit.
Of course, sending signals directly to the master process is the older way of operating. Since version 0.8, nginx has introduced a series of command-line options to make management easier, for example ./nginx -s reload to restart nginx and ./nginx -s stop to stop it. How does this work? Taking reload as an example: when the command is executed, a new nginx process is started, and once it has parsed the reload argument it knows our intent is to have nginx reload its configuration file, so it sends the appropriate signal to the master process. From then on, everything proceeds exactly as if we had sent the signal to the master ourselves.
Worker processes:
Basic network events are handled in the worker processes. The workers are peers: they compete on equal terms for requests from clients, and the processes are independent of one another. A request can only be handled in one worker process, and one worker cannot handle another's request. The number of worker processes is configurable and is usually set to match the machine's CPU core count; the reason, again, is inseparable from nginx's process model and event-handling model.
The workers being equal, each process has the same opportunity to handle a request. When we provide HTTP service on port 80 and a connection request comes in, any of the processes could potentially handle that connection. How does this work?
First, the master process creates the listening socket for the port specified in the configuration file, and then forks the worker processes one after another, so that every worker can accept connections arriving on that socket (strictly speaking, each worker has its own socket descriptor, but they all listen on the same address). In general, when a connection arrives, every worker may be notified, but only one process can accept the connection while the others fail; this is the so-called thundering-herd phenomenon. To deal with it, nginx provides accept_mutex, which, as the name suggests, is a shared lock around accept: at any given moment, only the process holding the lock will accept the connection, so there is no thundering-herd problem. accept_mutex is a configurable option; it can be turned off explicitly, and it is enabled by default.
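The accept_mutex option described above is set in the events block; in nginx releases contemporary with this article it defaulted to on (in later releases, from 1.11.3, the default changed to off):

```nginx
events {
    # Serialize accept() across workers: only the worker holding the
    # lock accepts new connections, avoiding the thundering herd.
    accept_mutex on;
}
```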
After a worker process accepts a connection, it reads the request, parses it, processes it, generates the response data, returns it to the client, and finally closes the connection; that is what a complete request looks like.
As we can see, a request is handled entirely by a single worker process, and only within that one worker.

Origin blog.51cto.com/14101466/2416689