Open-Source Components in Plain Language: Nginx

Author: Zhang Fengzhe

Link: https://www.jianshu.com/p/5eab0f83e3b4

Reposted by: Agricultural Ethics Code

Foreword

Nginx is a lightweight web server and reverse proxy server. Because of its small memory footprint (a worker process uses only about 10-12 MB of memory), fast startup, and strong high-concurrency capability, it is widely used in Internet projects.

The figure above shows, roughly, the currently popular technology architecture, in which Nginx acts somewhat like the entry gateway.

Reverse Proxy

We often hear terms like "reverse proxy". So what is a reverse proxy, and what is a forward proxy? Here is a brief summary; for a more detailed explanation, please read the earlier article on forward and reverse proxies.

Forward Proxy

Because of the firewall, we cannot access Google directly, so we use a VPN instead: a simple example of a forward proxy. Note that a forward proxy "proxies" the client; the client knows the target it wants to reach, but the target does not know which client is actually accessing it through the VPN.

Forward proxy schematic

Reverse Proxy

When we access Baidu from an external network, the request is actually forwarded by a proxy inside Baidu's network. This is a reverse proxy: the proxy acts on behalf of the server side, and the whole process is transparent to the client.

Reverse Proxy schematic

Nginx's Master-Worker Model

To start Nginx, just run the command xxx/nginx, where xxx is your installation directory.

nginx process

After Nginx starts, it actually launches a socket service listening on port 80. As shown in the figure, the Nginx processes consist of a Master process and Worker processes.

Master-Worker mode

The role of the Master process: read and validate the configuration file nginx.conf; manage the Worker processes.

The role of the Worker processes: each Worker process maintains a single thread (avoiding thread switching) to handle connection requests. Note that the number of Worker processes is determined by the configuration file and is generally tied to the number of CPUs (which benefits process scheduling): however many the configuration specifies, that is how many Worker processes there are. The example above has only one Worker process.
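In nginx.conf this is a single top-level directive (a minimal sketch; worker_processes auto matches the worker count to the CPU core count):

    # nginx.conf (top level)
    worker_processes auto;   # or an explicit number, e.g. 4, to match your CPU cores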

Question 1: How does Nginx achieve hot deployment?

So-called hot deployment means that after the configuration file nginx.conf is modified, the new configuration can take effect without stopping Nginx and without interrupting any requests! (nginx -s reload reloads the configuration / nginx -t checks the configuration / nginx -s stop stops Nginx.)

We already know from the above that the Worker processes are responsible for handling specific requests, so to achieve hot deployment, two approaches come to mind:

Option one: after nginx.conf is modified, the Master process is responsible for pushing the updated configuration to the Worker processes, and each Worker process updates its internal thread state upon receiving the information.

Option two: after nginx.conf is modified, spawn new Worker processes, which of course handle requests with the new configuration; new requests must be handed to the new Worker processes, while the old Worker processes are killed off once they have finished the requests they were already handling.

Nginx uses option two to achieve hot deployment!
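For reference, a typical hot-reload sequence with the commands mentioned above (assuming the nginx binary is on your PATH):

    nginx -t          # validate the modified nginx.conf first
    nginx -s reload   # graceful reload: new workers start with the new config, old workers finish and exit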

Question 2: How does Nginx handle requests efficiently under high concurrency?

As mentioned above, the number of Worker processes is tied to the CPUs, and each Worker process contains one thread that loops to handle requests efficiently. That does contribute to efficiency, but it is not enough.

As professional programmers, we can let our minds wander a bit: BIO/NIO/AIO, asynchronous vs. synchronous, blocking vs. non-blocking...

When juggling so many requests, keep in mind that some requests involve IO and may take a long time; if the Worker waited on them, its processing speed would slow down.

Nginx uses Linux's epoll model. The epoll model is based on an event-driven mechanism: it can monitor multiple events, and when one is ready it is placed in the epoll queue, so the process is asynchronous. The Worker simply loops over the epoll queue and processes what is there.
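In nginx.conf this event model is configured in the events block (a minimal sketch; the epoll value is only available on Linux):

    events {
        use epoll;                # event-driven connection processing on Linux
        worker_connections 1024;  # max simultaneous connections per Worker
    }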

Question 3: What if Nginx goes down?

Since Nginx serves as the entry gateway, it is very important; a single point of failure is clearly unacceptable. The answer: Keepalived + Nginx to achieve high availability.

Keepalived is a high-availability solution, mainly used to prevent a server from becoming a single point of failure; it can work with Nginx to provide a highly available web service. (In fact, Keepalived can cooperate not only with Nginx but with many other services as well.)

The idea behind Keepalived + Nginx high availability:

First: requests should not hit Nginx directly; they should first pass through Keepalived (via what is known as a virtual IP, or VIP).

Second: Keepalived should be able to monitor Nginx's health (via a user-defined script that periodically checks the Nginx process state, so as to adjust weights and fail over away from a dead Nginx).

Keepalived+Nginx
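A minimal keepalived.conf sketch of this idea; the interface name, priority, VIP address, and check-script path are placeholders for illustration:

    vrrp_script chk_nginx {
        script "/etc/keepalived/check_nginx.sh"   # hypothetical script that checks the Nginx process
        interval 2                                # check every 2 seconds
        weight -20                                # drop priority when the check fails, triggering failover
    }

    vrrp_instance VI_1 {
        state MASTER
        interface eth0                # placeholder network interface
        virtual_router_id 51
        priority 100
        virtual_ipaddress {
            192.168.1.100             # the VIP that requests actually hit
        }
        track_script {
            chk_nginx
        }
    }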

Our main battlefield: nginx.conf

In many cases, in development and test environments, we have to configure Nginx ourselves, that is, configure nginx.conf. nginx.conf is a typical section-based configuration file; let's analyze it below. Inside Nginx you can specify multiple virtual servers, each described by a server { } context.

Virtual Hosts

The Nginx configuration file is mainly composed of directives; a directive consists of a name and parameters and ends with a semicolon (;). The following configures a virtual server: the listen directive specifies the IP/port combination the virtual host listens on; the server_name directive checks the request's Host header to determine which virtual host the request ultimately matches... There are many Nginx configuration items; consult the online documentation for the details.

The server section inside the http section
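A minimal sketch of such a virtual server (the domain name and paths are placeholders):

    http {
        server {
            listen 80;                       # IP/port combination this virtual host listens on
            server_name www.example.com;     # matched against the request's Host header
            location / {
                root  /usr/local/nginx/html; # directory holding the static resources
                index index.html;
            }
        }
    }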

Access results

In fact, this is Nginx working as a web server to serve static resources. A few points worth noting:

1: location blocks can use regular-expression matching; pay attention to the several matching forms and their priorities. (Not covered here.)

2: One feature that can increase Nginx's speed is static/dynamic separation: static resources are placed on Nginx and served by Nginx directly, while dynamic requests are forwarded to the backend.

3: We can place static resources and log files on Nginx under different domain names (that is, directories), which makes them easy to manage and maintain.

4: Nginx can control access by IP. Some e-commerce platforms do some processing at the Nginx layer by building in a blacklist module, so that requests need not reach the backend to be intercepted; they are disposed of directly at the Nginx layer.
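For point 4, Nginx's built-in access directives can express a simple blacklist (a minimal sketch; the addresses are placeholders):

    location / {
        deny  192.168.1.100;   # blacklist a single client IP
        allow 10.0.0.0/8;      # allow an internal range
        deny  all;             # reject everyone else
    }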

Reverse Proxy - proxy_pass

The so-called reverse proxy is very simple: in the location section of the configuration above, just replace root with proxy_pass. Note that root indicates static resources that Nginx itself can return, while proxy_pass indicates a dynamic request that needs to be forwarded to a proxied server such as Tomcat.

As said above, with a reverse proxy the process is transparent. For example, for request -> Nginx -> Tomcat, from Tomcat's perspective the request's IP address is Nginx's address, not the real requester's address; this is worth noting. Fortunately, Nginx can not only forward the request as a reverse proxy, it also lets the user set custom HTTP headers.
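A minimal sketch of such a location (the backend address is a placeholder; X-Real-IP and X-Forwarded-For are the conventional headers for passing along the real client address):

    location / {
        proxy_pass http://127.0.0.1:8080;    # forward dynamic requests to, e.g., Tomcat
        proxy_set_header Host $host;         # preserve the original Host header
        proxy_set_header X-Real-IP $remote_addr;                       # real client IP
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;   # proxy chain
    }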

Load Balancing - upstream

With the reverse proxy above, we specified Tomcat's address via proxy_pass; obviously we can specify only one Tomcat address there. What if we want to specify multiple Tomcats to achieve load balancing?

1: Use upstream to define a group of Tomcats, together with the load-balancing policy (IP hash, weighted round-robin, least connections) and the health-check policy (so Nginx can monitor the state of this group of Tomcats).

2: Replace the value of proxy_pass with the upstream name defined above (see the sketch below).
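Putting the two steps together (a minimal sketch; server addresses and weights are placeholders):

    upstream tomcats {
        # ip_hash;                           # uncomment for IP-hash session stickiness
        server 192.168.1.101:8080 weight=3;  # weighted round-robin
        server 192.168.1.102:8080 weight=1;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://tomcats;       # the upstream name replaces the single address
        }
    }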

Points to note with load balancing: different load-balancing algorithms may bring different problems. If you choose round-robin, a given request may land on server A or on server B, so we have to pay attention to how user state is saved; for example, session information should not be saved on a single server. If you choose a hash, that problem goes away, but we have to consider what kind of hashing algorithm spreads requests across the backend servers as evenly as possible. In short, in practice the choice needs to be weighed according to the scenario.


Source: blog.csdn.net/hmh13548571896/article/details/104098812