Three, Nginx Principle Analysis


One, Reverse Proxy

Work process

  1. A user sends a request to the web server through a domain name, and the DNS server resolves that domain name to the IP address of the reverse proxy server.
  2. The reverse proxy server accepts the user's request.
  3. The reverse proxy server looks for the requested content in its local cache and, if it finds it, sends the content straight back to the user.
  4. If the requested content is not in the local cache, the reverse proxy server requests it from the origin server on the user's behalf, returns it to the user, and, if the content is cacheable, saves a copy in its cache.
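As a minimal sketch of this workflow, the following Nginx configuration (inside the http {} context; the domain example.com and the origin address 10.0.0.10:8080 are placeholders) forwards every request to the origin server:

```nginx
server {
    listen 80;
    server_name example.com;               # the domain the user's request resolves to

    location / {
        proxy_pass http://10.0.0.10:8080;  # fetch the content from the origin server
        proxy_set_header Host $host;              # preserve the requested host name
        proxy_set_header X-Real-IP $remote_addr;  # pass the real client address along
    }
}
```

(Caching, steps 3 and 4 of the workflow, is shown in a later sketch.)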

Advantages

  1. Protects the real web server and keeps the web server's resources secure

An ordinary (forward) proxy server is typically used to proxy connection requests from hosts on an internal network to the external Internet: the client must be configured to use the proxy, and HTTP requests that would otherwise go straight to the web server are sent to the proxy instead. A forward proxy does not support connection requests from the external network into the internal network, because the internal network is invisible from outside. When a proxy server instead proxies requests from the external network to hosts on the internal network, the service is called a reverse proxy. In this case the proxy server appears to the outside world as a web server, and external clients can treat it as a standard web server without any special configuration. The difference is that this server does not store any real page data itself; all the static pages and CGI programs live on the internal web servers. An attack on the reverse proxy server therefore does not destroy any page content, which improves the security of the web servers.

  2. Saves scarce public IP addresses

All of a company's websites can be registered under a single public IP address on the Internet; the internal servers are assigned private addresses and provide their services as virtual hosts behind the proxy.
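A sketch of this arrangement (host names and private addresses are made up): two sites share one public IP and are distinguished by the Host header, each proxied to an internal server with a private address:

```nginx
# Both server blocks listen on the same public IP; the Host header picks one.
server {
    listen 80;
    server_name www.example.com;
    location / {
        proxy_pass http://192.168.1.10:8080;   # internal server on a private address
    }
}

server {
    listen 80;
    server_name blog.example.com;
    location / {
        proxy_pass http://192.168.1.11:8080;   # another internal server
    }
}
```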

  3. Reduces the load on the web server and improves response speed

A reverse proxy is also commonly called a web server accelerator: it is a technique that puts a high-speed buffer between a busy web server and the external network to reduce the load on the actual web server. Reverse-proxy acceleration improves performance by using the proxy as a cache for the web server; it does not cache on behalf of browser users but on behalf of one or more specific web servers, and it proxies requests from the external network to the internal network.

A reverse proxy server forces all external access to pass through it: it receives the client's request, fetches the content from the origin server, returns that content to the user, and saves a copy locally, so that when it later receives a request for the same content it can serve it directly from the local cache. This reduces the load on the back-end web servers and improves response speed, which is why Nginx also offers a caching feature.
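A hedged sketch of that caching behavior (cache path, zone name, sizes, and validity times are illustrative choices, not recommendations):

```nginx
# In the http {} context: a cache with 10 MB of keys in shared memory
# and at most 1 GB of content on disk.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m max_size=1g;

server {
    listen 80;
    location / {
        proxy_cache my_cache;              # answer repeated requests from the cache
        proxy_cache_valid 200 302 10m;     # keep successful responses for 10 minutes
        proxy_pass http://10.0.0.10:8080;  # origin server (placeholder address)
    }
}
```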

Two, How Nginx Works

Nginx consists of a core and modules.

Nginx itself does very little of the actual work. When it receives an HTTP request, it merely maps the request to a location block by looking up the configuration file, and each directive configured in that location starts a different module to do the work, so the modules can be seen as Nginx's real laborers. The directives in a location usually involve one handler module and multiple filter modules (and of course several locations can share the same modules). The handler module is responsible for processing the request and producing the response content, while the filter modules post-process that content.
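For instance, in the illustrative location below, a handler module generates the response while filter modules post-process it (paths are placeholders):

```nginx
location /static/ {
    root /var/www;                   # static-file handler module produces the content
    gzip on;                         # gzip filter module compresses the response body
    add_header X-Served-By nginx;    # headers filter module edits the response headers
}
```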

Modules developed by users for their own needs are third-party modules. It is precisely because it supports so many modules that Nginx is so powerful.

Structurally, Nginx modules are divided into core modules, base modules, and third-party modules:

  • Core modules: the HTTP module, the EVENT module, and the MAIL module.
  • Base modules: the HTTP Access module, the HTTP FastCGI module, the HTTP Proxy module, and the HTTP Rewrite module.
  • Third-party modules: the HTTP Upstream Request Hash module, the Notice module, and the HTTP Access Key module.

Functionally, Nginx modules fall into the following three categories:

  • Handlers (processor modules). Modules of this kind process a request directly, performing operations such as outputting content and modifying headers. There is generally only one handler module per request.
  • Filters (filter modules). Modules of this kind modify the content produced by other modules, and their output is what Nginx finally sends out.
  • Proxies (proxy modules). Modules of this kind, such as Nginx's HTTP Upstream module, mainly interact with back-end services such as FastCGI to implement service proxying and load balancing (see the sketch after this list).
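As a sketch of the proxy category (back-end addresses are placeholders), the upstream module spreads requests over several back ends:

```nginx
upstream backend {
    server 10.0.0.11:9000;    # e.g. an HTTP or FastCGI back end
    server 10.0.0.12:9000;    # requests are load-balanced across these servers
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;   # handled by the upstream (proxy) machinery
    }
}
```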

Three, Nginx Process Model

By default Nginx works in multi-process mode: after it starts, it runs one master process and multiple worker processes. The master acts as the interface between the user and the whole process group, and it also supervises the workers, managing them to implement service restarts, smooth upgrades, log file replacement, and configuration changes that take effect immediately. The workers handle the basic network events; the workers are equal peers and compete with one another for requests from clients.
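The process model is set up in nginx.conf; a minimal sketch:

```nginx
# The master process reads the configuration and supervises the workers;
# "auto" starts one worker process per CPU core.
worker_processes auto;

# A smooth reload (new configuration takes effect without dropping connections)
# is then triggered from the shell with: nginx -s reload
```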

The Nginx process model is shown below:

[Figure: Nginx master-worker process model]

When the master process is created, it first sets up the sockets that need to be listened on (listenfd) and then fork()s multiple worker processes, so every worker can listen on the sockets for user requests. In general, when a connection arrives, all of the workers are woken up, but only one of them can accept the connection and the rest fail; this is the so-called thundering herd phenomenon. Nginx provides accept_mutex, a mutual-exclusion lock: with the lock in place, only one process performs accept at any given moment, so the thundering herd problem does not arise.
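The lock is controlled by a configuration directive; a small sketch (the delay value is just an example, and note that since Nginx 1.11.3 accept_mutex defaults to off):

```nginx
events {
    accept_mutex on;            # serialize accept() so only one worker is woken
    accept_mutex_delay 500ms;   # how long a worker waits before re-trying the lock
}
```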

With the accept_mutex option enabled, only the process that has obtained accept_mutex registers the accept event. Nginx uses a variable named ngx_accept_disabled to decide whether to compete for the accept_mutex lock: ngx_accept_disabled = one eighth of a single process's total connection count minus the number of remaining free connections. When ngx_accept_disabled is greater than zero, the process does not try to acquire the lock, and the larger ngx_accept_disabled is, the more turns it yields, giving other processes a better chance of taking the lock. For example, with 1024 connections per worker, a worker stops competing once it has fewer than 128 free connections left. Because such a worker stops accepting, its connection count is held down and the connection pools of the other processes get used; in this way Nginx keeps connections balanced across its processes.

Each worker process has an independent connection pool, whose size is worker_connections. What the pool holds are not actual established connections but ngx_connection_t structures: an array of worker_connections of them. Nginx keeps all free connections in a linked list, free_connections; each time a connection is needed, one ngx_connection_t is taken from the free list, and when the connection is finished it is put back on the list. So the maximum number of connections an Nginx instance can establish is worker_connections * worker_processes. Note that this is the maximum number of connections: for HTTP requests to local resources the maximum supported concurrency is worker_connections * worker_processes, while for Nginx acting as a reverse HTTP proxy the maximum concurrency is worker_connections * worker_processes / 2, because as a reverse proxy server each concurrent request establishes both a connection with the client and a connection with the back-end service, occupying two connections.
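A worked example under assumed values:

```nginx
worker_processes 4;

events {
    worker_connections 1024;   # size of each worker's ngx_connection_t pool
}

# Maximum connections: 4 * 1024 = 4096.
# Serving local resources: up to 4096 concurrent requests.
# As a reverse proxy: up to 4 * 1024 / 2 = 2048 concurrent requests, because
# each proxied request holds one client-side and one upstream connection.
```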


Source: www.cnblogs.com/lee0527/p/12202852.html