Nginx architecture and basic principles

Nginx application scenarios

Nginx has three main application scenarios:

  • Static Resource Service
  • Reverse Proxy Service
  • API Service

Static Resource Service

Nginx can serve static resources directly from the local file system, such as plain static HTML pages.
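
As an illustration only (the domain name and directory below are hypothetical, not taken from the article), a minimal static-file configuration might look like this:

    server {
        listen      80;
        server_name static.example.com;   # hypothetical domain

        location / {
            root  /data/www;              # hypothetical directory containing the static HTML files
            index index.html;
        }
    }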

Reverse Proxy Service

Many application services run inefficiently: their QPS, TPS, and concurrency are limited. To provide a highly available service to users, a large number of application instances have to be grouped into a cluster, which calls for Nginx's reverse proxy capability. Because the application services are scaled in and out dynamically, load balancing is also needed, and Nginx additionally acts as a caching layer in front of them. A reverse proxy service therefore provides three main functions (a minimal configuration sketch follows the list):

  • Reverse Proxy
  • Load Balancing
  • Cache
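
A minimal sketch that combines all three functions, assuming a hypothetical cluster of two application servers and a local cache directory:

    # Load balancing: a hypothetical upstream of two application servers
    upstream app_cluster {
        server 192.168.0.11:8080;
        server 192.168.0.12:8080;
    }

    # Cache: store proxied responses on local disk
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m max_size=1g;

    server {
        listen 80;

        location / {
            # Reverse proxy: forward requests to the cluster
            proxy_pass http://app_cluster;
            proxy_set_header Host $host;

            # Serve repeated requests from the cache layer
            proxy_cache       app_cache;
            proxy_cache_valid 200 10m;
        }
    }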

API Service

Sometimes the application service itself has serious performance problems, while the database behind it performs much better, the business scenario is relatively simple, and the database's concurrency and TPS are far higher than the application's. In such cases Nginx can access the database or Redis directly, and with its strong concurrency Nginx can also implement API services and an application firewall.
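
Accessing Redis or a database directly from Nginx typically requires extra modules (for example the OpenResty ecosystem), so the sketch below only illustrates the application-firewall aspect using the core limit_req module; the zone name, rate, and backend address are hypothetical:

    # Limit each client IP to roughly 10 requests per second
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

    server {
        listen 80;

        location /api/ {
            # Reject clients that exceed the limit (a small burst is allowed)
            limit_req  zone=api_limit burst=20 nodelay;
            proxy_pass http://127.0.0.1:8080;   # hypothetical API backend
        }
    }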

Nginx Architecture Foundation

Nginx state machine

When Nginx serves external clients, three kinds of traffic reach it: WEB, EMAIL, and TCP traffic. After these three flows arrive, they are handled by the transport-layer state machine, the application-layer state machine, and the MAIL state machine respectively. When memory is not large enough to cache all static resources, reads degrade to blocking disk calls, and a thread pool is needed to handle them. For every request processed, access and error logs are recorded, and the logs are written to disk.
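
The thread pool mentioned here is configurable. A minimal sketch, assuming nginx was built with thread support (available since version 1.7.11); the pool size and paths are just examples:

    # Define a named thread pool for blocking disk reads
    thread_pool default_pool threads=32 max_queue=65536;

    http {
        server {
            listen 80;

            location /download/ {
                root /data;                   # hypothetical directory of large static files
                # Offload blocking file I/O to the thread pool
                aio threads=default_pool;
            }
        }
    }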

Nginx process structure

Nginx has four kinds of processes:

  • Master process. The master process is the parent process; all the other processes are its children, and the master manages the worker processes.
  • worker process. There are multiple worker processes, and they handle the actual requests. Why does Nginx use a multi-process rather than a multi-threaded structure? Because Nginx has to guarantee high availability: multiple threads share one address space, so a third-party module that triggers a segmentation fault would bring down the whole Nginx process. The multi-process model does not have this problem.
  • cache manager and cache loader processes. The cache is used by multiple worker processes and is also maintained by dedicated cache processes: the cache loader loads the cache and the cache manager manages it, while the cache itself is used by the worker processes when serving each request. Communication between these processes is carried out through shared memory.

Why are there many worker processes?

This is because Nginx uses an event-driven model and expects each worker process to occupy one CPU from start to finish. This makes efficient use of whole CPU cores and improves the CPU cache hit rate; a worker process can also be bound to a specific CPU core.
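
A common configuration sketch for this (the auto values require a reasonably recent nginx version):

    # Start one worker per CPU core
    worker_processes auto;

    # Bind each worker to its own core to improve the CPU cache hit rate
    worker_cpu_affinity auto;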

Managing Nginx with signals between parent and child processes

As mentioned earlier when discussing the Nginx command line, many Nginx commands are in fact implemented by sending a signal to the master process.

Master process

The master process monitors the worker processes. This monitoring relies on the CHLD signal, which Linux sends to the parent process when a child process exits. When a worker crashes, the master immediately starts a new worker to replace it.

The master process can receive the following signals:

  • TERM, INT
  • QUIT
  • HUP
  • USR1
  • USR2
  • WINCH

Worker process

The worker process can receive the following signals:

  • TERM, INT
  • QUIT
  • HUP
  • USR1
  • WINCH

Correspondence between command-line options and signals

  • reload: HUP
  • reopen: USR1
  • stop: TERM
  • quit: QUIT

WINCH and USR2 have no corresponding command-line option; they can only be sent with kill.

The difference between stop and quit is that stop exits immediately, while quit performs a graceful shutdown.
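
As a sketch, the command-line options and the raw signals are interchangeable; the pid file path below is hypothetical and depends on how Nginx was built and configured:

    # Equivalent ways to reload the configuration
    nginx -s reload
    kill -HUP $(cat /usr/local/nginx/logs/nginx.pid)

    # Signals with no command-line equivalent have to be sent with kill
    kill -USR2  $(cat /usr/local/nginx/logs/nginx.pid)   # start a new binary (hot deployment)
    kill -WINCH $(cat /usr/local/nginx/logs/nginx.pid)   # gracefully stop the worker processes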

How reload really reloads the configuration file

  1. Send a HUP signal to the master process
  2. The master process checks the configuration file for syntax errors
  3. The master process opens new listening ports (if new ports are configured)
  4. The master process starts new worker processes that use the new configuration file
  5. The master process sends a QUIT signal to the old worker processes
  6. The old worker processes close their listening handles and exit once the connections currently being processed are finished
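
In practice the new configuration is usually validated before the reload is triggered; a minimal sketch:

    # Check the configuration for syntax errors, then send HUP via the command line
    nginx -t && nginx -s reload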

How hot deployment really works

The previous article described the overall flow of hot deployment; what does the process actually look like in detail?

  1. Replace the old Nginx binary with the new one (remember to back up the old binary)
  2. Send a USR2 signal to the master process
  3. The master process renames its pid file with the suffix .oldbin
  4. The master process starts a new master process using the new Nginx binary
  5. Send a QUIT signal to the old master process to shut it down
  6. To roll back, send a HUP signal to the old master process and a QUIT signal to the new one
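
A sketch of these steps on the command line; the binary and pid file paths are hypothetical:

    # 1. Back up the old binary and put the new one in place
    cp /usr/local/nginx/sbin/nginx /usr/local/nginx/sbin/nginx.old
    cp ./nginx-new /usr/local/nginx/sbin/nginx

    # 2. Ask the running master to start a new master with the new binary
    kill -USR2 $(cat /usr/local/nginx/logs/nginx.pid)

    # 5. Gracefully stop the old master (its pid file now has the .oldbin suffix)
    kill -QUIT $(cat /usr/local/nginx/logs/nginx.pid.oldbin)

    # 6. To roll back instead: revive the old master's workers and stop the new master
    # kill -HUP  $(cat /usr/local/nginx/logs/nginx.pid.oldbin)
    # kill -QUIT $(cat /usr/local/nginx/logs/nginx.pid)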

Gracefully shutting down a worker process

  1. Set a timer based on worker_shutdown_timeout
  2. Close the listening handles
  3. Close idle connections
  4. Loop, waiting for all remaining connections to be closed
  5. Exit the process

The role of the timer: if the timeout expires while some connections are still being processed, the worker process is forced to exit anyway. Also note that Nginx can only shut down gracefully for HTTP; it cannot do so for websocket, TCP, or UDP proxying, because the worker does not parse that data.
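
The timer comes from the worker_shutdown_timeout directive (available since nginx 1.11.11); the value below is only an example:

    # Give old workers at most 10 seconds to finish in-flight requests
    # during a reload or graceful shutdown, then force them to exit
    worker_shutdown_timeout 10s;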

With this, the Nginx command line, signals, and process model are covered. Next time we will move on to the HTTP modules.

Follow the official account to receive the Nginx knowledge map.


Origin www.cnblogs.com/iziyang/p/12596407.html