Nginx Single-Node and Cluster Details

Introduction to Nginx

Nginx is a free, open-source, high-performance HTTP server and reverse proxy server, as well as an IMAP/POP3/SMTP proxy server. Nginx can serve a site directly as an HTTP server, and it can also act as a reverse proxy for load balancing.

  1. Nginx uses an event-driven architecture, which lets it support millions of concurrent TCP connections
  2. It is highly modular, and its free software license has encouraged a steady stream of third-party modules (open source)
  3. Nginx is a cross-platform server: it runs on Linux, Windows, FreeBSD, Solaris, AIX, Mac OS, and other operating systems
  4. High stability

Introduction to Proxies:

Before discussing proxies, we need a clear concept: a proxy is a representative, an intermediary channel.
Two roles are involved: the proxied role (A_) and the target role (B_); when A_ accomplishes some task against B_ by going through an intermediary, that intermediary (C_) is called the proxy. For example, when a customer buys shoes, the shop is the proxy (C_), the manufacturer is (A_), and the customer is the user (B_).

Proxies are divided into two kinds: the forward proxy and the reverse proxy.

Forward Proxy:

First, an example: suppose we are looking for learning materials and find that access to YouTube is slow; this is where a forward proxy is needed. The most obvious example is "going over the wall"[1] (FQ): to reach a foreign site, we send our request to a proxy server, the proxy server accesses the foreign website on our behalf, and then it passes the response back to us.

This is the forward proxy. Its defining feature is that the client knows exactly which server it wants to reach; the server, on the other hand, only knows which proxy server the request came from, not which specific client sent it. The forward proxy mode therefore masks or hides the real client's information.

In summary: a forward proxy "acts on behalf of the client, making requests for the client." It is a server positioned between the client and the origin server: to obtain content from the origin server, the client sends a request naming the origin server to the proxy, which forwards the request and returns the retrieved content to the client. The client must apply special settings in order to use a forward proxy.

Uses of a forward proxy:

  1. Accessing otherwise inaccessible resources, such as Google
  2. Caching, to speed up access to resources
  3. Authorizing client access, e.g. Internet access authentication
  4. Recording user access (access management) and hiding user information from the outside
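To make the idea concrete, here is a minimal sketch of an Nginx server block acting as a plain-HTTP forward proxy. The port and resolver address are illustrative assumptions, and note that stock Nginx cannot tunnel HTTPS (CONNECT) without third-party modules:

```nginx
# Minimal plain-HTTP forward proxy (port and resolver are illustrative).
server {
    listen 8888;

    # Needed so Nginx can resolve arbitrary upstream hostnames itself.
    resolver 8.8.8.8;

    location / {
        # Forward the request to whatever host the client asked for.
        proxy_pass http://$host$request_uri;
        proxy_set_header Host $host;
    }
}
```

A client would then point its HTTP proxy setting at this server (for example `curl -x http://proxy-host:8888 http://example.com/`), which mirrors the point above that the client must make special settings to use a forward proxy.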

Reverse Proxy

Example: on Taobao, the number of simultaneous visitors each day far exceeds what a single server can handle, so distributed deployment appeared; that is, the limit on concurrent connections is solved by deploying multiple servers. Most of Taobao's site functions are also implemented directly through Nginx reverse proxying; Nginx plus other components were then packaged under the impressive-sounding name Tengine. Interested readers can visit the Tengine official site for details: http://tengine.taobao.org/.

Multiple clients send requests to the server; after Nginx receives them, it distributes them to back-end application servers for processing according to certain rules. Here the source of the request (the client) is known, but it is not clear which specific server handles the request; Nginx is playing the role of a reverse proxy.

The client is unaware of the proxy's presence; a reverse proxy is transparent to the outside world, and visitors do not know they are talking to a proxy, because the client needs no configuration at all to access it.
A reverse proxy "acts on behalf of the server, receiving requests for the server." It is mainly used for distributed deployment of server clusters, and the reverse proxy hides the servers' information.

Roles of a reverse proxy:

  1. Ensuring the security of the internal network: the reverse proxy is usually the address exposed to the public network, while the Web servers sit on the internal network
  2. Load balancing, optimizing the site's load through the reverse proxy server
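Both roles above show up in a minimal reverse-proxy configuration sketch; the domain name and internal address are made up for illustration:

```nginx
# The public-facing reverse proxy; only this address is exposed.
server {
    listen 80;
    server_name www.example.com;   # illustrative domain

    location / {
        # Internal Web server, hidden from the outside world.
        proxy_pass http://192.168.1.10:8080;

        # Preserve the original client information for back-end logs.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Visitors simply request `www.example.com` with no client-side configuration, which is exactly why the reverse proxy is transparent to them.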

Load Balancing

We have now established the concept of a proxy server, and Nginx plays the role of a reverse proxy server. But on what kind of rules does it base its distribution of requests? And can those distribution rules be controlled in different project scenarios?

The requests that the Nginx reverse proxy server receives from clients are what we call the load here.

The rule by which those requests are distributed to different servers for processing is the balancing rule.

So, the process of distributing the requests the server receives to back-end servers according to a balancing rule is called load balancing.
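A sketch of what this looks like in Nginx: an `upstream` block names the pool of back-end servers, and `proxy_pass` hands each incoming request to whichever member the balancing rule selects (the addresses below are illustrative; the default rule is round robin):

```nginx
# Pool of back-end servers; with no extra directives,
# requests are distributed round robin.
upstream backend_pool {
    server 192.168.1.11:8080;
    server 192.168.1.12:8080;
    server 192.168.1.13:8080;
}

server {
    listen 80;
    location / {
        # Each request goes to one member of the pool.
        proxy_pass http://backend_pool;
    }
}
```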

In actual projects, load balancing comes in two forms: hardware load balancing and software load balancing. Hardware load balancing, also called "hard load," e.g. F5 load balancers, is relatively costly but provides very good guarantees of stability and data security; companies such as China Mobile and China Unicom choose hard-load solutions. More companies, for cost reasons, choose software load balancing, which uses existing technology combined with the host hardware to implement a message-queue-based distribution mechanism.


The load-balancing scheduling algorithms supported by Nginx are as follows:

  1. Weighted round robin (default, common): incoming requests are distributed to different back-end servers according to their weights; if a back-end server goes down during use, Nginx automatically removes it from the queue, and request handling is unaffected. In this mode, each back-end server is assigned a weight value (weight) to adjust the proportion of requests it receives; the higher the weight, the greater the chance a request is assigned to that server. The weights are mainly tuned to match the differing hardware of the back-end servers in the actual working environment.
  2. ip_hash (common): each request is matched according to the hash of the originating client IP, so under this algorithm a client with a fixed IP always reaches the same back-end server; this solves, to some extent, the session-sharing problem in clustered deployments.
  3. fair: an intelligent scheduling algorithm that dynamically balances requests according to the back-end servers' response times: servers with short response times and high processing efficiency receive requests with higher probability, while servers with long response times and low efficiency receive fewer; it combines the advantages of the first two algorithms. Note, however, that Nginx does not support the fair algorithm by default; to use it, install the upstream_fair module.
  4. url_hash: requests are distributed according to the hash of the accessed URL, so each URL is directed to a fixed back-end server, which can improve cache efficiency, for example when Nginx serves as a static-content server. Also note that Nginx does not support this scheduling algorithm by default; to use it, you need to install Nginx's hash package.
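For reference, the first, second, and fourth algorithms map onto `upstream` directives roughly as sketched below; fair is omitted because it requires the third-party upstream_fair module, and all addresses and weights are illustrative assumptions:

```nginx
# 1. Weighted round robin: the higher the weight, the more requests.
upstream weighted_pool {
    server 192.168.1.11:8080 weight=3;  # stronger hardware
    server 192.168.1.12:8080 weight=1;
}

# 2. ip_hash: the same client IP always reaches the same server.
upstream sticky_pool {
    ip_hash;
    server 192.168.1.11:8080;
    server 192.168.1.12:8080;
}

# 4. URL hashing: the same URL always reaches the same server.
#    (Modern stock Nginx offers this via the generic hash directive;
#    older versions needed a third-party hash module.)
upstream url_pool {
    hash $request_uri consistent;
    server 192.168.1.11:8080;
    server 192.168.1.12:8080;
}
```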

Original reference source: Rose Nina


  1. "Over the wall": circumventing network censorship to reach blocked foreign sites ↩︎


Origin: blog.csdn.net/weixin_44685869/article/details/104359476