Nginx load balancing configuration (Part 1)

  Earlier we discussed nginx as a reverse proxy server, configured to proxy a dynamic back-end application server; for a review, see https://www.cnblogs.com/qiuhom-1874/p/12430543.html. Today we look at nginx as a reverse proxy for a group of servers, that is, load balancing. So far we have only covered how nginx proxies client requests to a single back-end server. That flow works like this: a user's request arrives at the nginx proxy server, where nginx plays the role of the server, so the client cannot sense the presence of the back end. Nginx receives the user's request message and takes it apart to see which resource the user is asking for, then matches that resource against its locations. If a location matches, nginx proxies according to the proxy rules defined in that location. Before forwarding, nginx first checks whether its own cache already holds the requested resource; if it does, it responds to the user from the cache. If not, nginx switches to the role of a client: it re-encapsulates the user's request into a new request message and sends it to the back-end server. When the back-end server receives the request, it responds with the resource to the nginx proxy; nginx first caches the response (if caching is enabled), then encapsulates a response packet and sends it to the client. From this flow we can see that nginx plays both roles, sometimes the server and sometimes the client, and that it can both take user packets apart and re-encapsulate them. That is the process of nginx proxying client requests to a single back-end server. But if there are multiple back-end servers all providing the same service, how do we proxy the client's requests to all of them?

  First, let's understand nginx's upstream modules. Nginx has two: one is the http-protocol-based upstream module, which defines back-end server groups for the http protocol, and the other is the tcp-protocol-based upstream module, which defines back-end server groups for the tcp protocol. Let's start with the http upstream module!

  A. ngx_http_upstream_module: this module is used to define groups of servers that can then be referenced by the proxy_pass, fastcgi_pass, uwsgi_pass, scgi_pass, and memcached_pass directives. Simply put, it merges multiple servers providing the same service into one server group; nginx then proxies requests to that group based on the various protocols, so that user requests are proxied to multiple back-end servers.

  1. upstream name {......}: this directive can only be used in the http configuration context; it defines a group of back-end servers;

  2. server address [parameters]: this directive is used inside the upstream context and defines a member server of the group, along with its parameters. The address supports an IP address plus port, a unix:path socket path, and a host name or domain name plus port. Common parameters: weight=number sets the server's weight, default 1; max_fails=number sets the maximum number of failed attempts, beyond which nginx marks the server unavailable; fail_timeout=time sets how long the server stays marked in the unavailable state; max_conns sets the maximum number of concurrent connections to the server; backup marks the server as "standby", so it is only enabled when all other servers are unavailable, somewhat like the sorry server in LVS; down marks the server as "unavailable".
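  As a sketch of these parameters in one group (the addresses, ports, and values here are illustrative, not taken from the original configuration):

```nginx
upstream webserver {
    # weighted member that is marked unavailable after 3 failures for 10s
    server 192.168.0.20:80 weight=2 max_fails=3 fail_timeout=10s;
    # member capped at 100 concurrent connections
    server 192.168.0.22:80 max_conns=100;
    # a unix domain socket is also a valid address
    server unix:/var/run/app.sock;
    # standby member, used only when all others are unavailable
    server 127.0.0.1:8080 backup;
}
```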

  3. least_conn: this directive selects the least-connections scheduling algorithm; when servers have different weights it behaves like wlc (weighted least connections);

  4. ip_hash: this directive is similar to the sh algorithm (source address hash) in LVS: requests from the same client address are always scheduled to the same server;

  5. hash key [consistent]: schedules requests based on a hash of the specified key, where the key can be text, a variable, or a combination of both; its role is to classify requests, so that requests of the same type are responded to by the same upstream server; the optional consistent parameter enables consistent hashing, which reduces remapping when servers are added to or removed from the group;

  6. keepalive connections: the number of idle keepalive connections to upstream servers preserved per worker process;
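  A sketch of keepalive in an upstream group (addresses are illustrative; per the nginx documentation, using keepalive with proxy_pass also requires switching the proxied connection to HTTP/1.1 and clearing the Connection header):

```nginx
upstream webserver {
    server 192.168.0.20:80;
    server 192.168.0.22:80;
    keepalive 32;   # each worker keeps up to 32 idle connections to this group
}

server {
    location / {
        proxy_pass http://webserver;
        proxy_http_version 1.1;          # keepalive to upstream needs HTTP/1.1
        proxy_set_header Connection "";  # do not forward "Connection: close"
    }
}
```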

  Example:

  Define a server group named webserver
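  The original configuration was shown as a screenshot; a minimal equivalent might look like this (the addresses and the www.proxy.com name follow the examples discussed below, the ports are assumed):

```nginx
http {
    # the group of back-end servers, round robin by default
    upstream webserver {
        server 192.168.0.20:80;
        server 192.168.0.22:80;
    }

    server {
        listen 80;
        server_name www.proxy.com;
        location / {
            # proxy user requests to the group
            proxy_pass http://webserver;
        }
    }
}
```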

    Tip: upstream can only be defined in the http configuration context; the above defines a group of servers named webserver, to which user requests can subsequently be proxied directly;

    Note: the above configuration means that when a user requests www.proxy.com, the request is proxied to a server in the webserver group, round robin by default;

   Note: from the results above, the client can see that its requests to the proxy server are being answered by a group of back-end nginx servers. We did not configure any weight, so the default round robin is used (the results do show some repetition, and it is not yet clear why; perhaps each server's response speed differs, but in general each back-end server answers roughly half the requests). Of course, we can also give different servers different weights, in which case nginx uses weighted round robin, as follows
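  A sketch of the weighted group (the weights 5 and 2 are from the example below; the ports are assumed):

```nginx
upstream webserver {
    server 192.168.0.20:80 weight=5;  # receives 5 of every 7 requests
    server 192.168.0.22:80 weight=2;  # receives 2 of every 7 requests
}
```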

    Tip: in the above configuration the weight of the 192.168.0.20 server is 5 and the weight of 0.22 is 2, meaning that out of every 7 requests, 5 go to 0.20 and 2 go to 0.22;

   If one of our back-end servers fails, will nginx still dispatch user requests to the failed server? We know that when LVS does the scheduling, front-end user requests can still be scheduled to a failed server; we need the help of keepalived or other auxiliary services to do health monitoring of the back-end servers so that user requests are not scheduled to a faulty back end. Does nginx behave the same way?

  Tip: you can see that nginx does not schedule user requests to the failed server. This is because nginx has its own health monitoring mechanism for back-end servers: it detects the health status of the back ends in time and takes an unavailable back-end host offline from the cluster. Of course, that offlining is an automatically triggered action when the service becomes unavailable; we can also mark a back-end server as unavailable manually, which is often useful for gray (canary) releases: marking a server with down explicitly tells nginx not to send that server any requests;
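  A sketch of manually marking a member down (the example below takes 0.22 offline; ports assumed):

```nginx
upstream webserver {
    server 192.168.0.20:80;
    server 192.168.0.22:80 down;  # manually marked unavailable, receives no requests
}
```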

  Note: the above configuration takes the 0.22 host offline from the webserver group; offline means no requests are scheduled to it. At this point only the 0.20 server remains in the group, so no matter what the user requests, nginx will only schedule requests to the 0.20 back-end host;

  If both back-end hosts are down and a user visits our site at that moment, will there be something like the sorry server in LVS to say sorry to the user?

  Tip: you can see that when all back-end hosts are down, there is no LVS-style sorry server to apologize to the user or respond to client requests. Configuring a sorry server in nginx is very simple: just mark a server with backup
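  A sketch of the backup marking (127.0.0.1:80 as the sorry server follows the example below; it is assumed that a local server block listening there serves an apology page):

```nginx
upstream webserver {
    server 192.168.0.20:80;
    server 192.168.0.22:80;
    server 127.0.0.1:80 backup;  # sorry server, used only when all others are down
}
```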

  Note: the above configuration uses 127.0.0.1:80 as a sorry server, meaning it is only scheduled after all the normal hosts in the proxied group are down; while the group still has a normal host, the sorry server is not scheduled and user requests are scheduled to the normal host;

  Tip: you can see that when all back-end hosts are down, the sorry server is scheduled;

  Tip: when a back-end host recovers, the sorry server is no longer scheduled, and user requests are proxied to the recovered host;

  Those are the common nginx load balancing configurations; next let's talk about the scheduling algorithms

  Source address hash algorithm
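  The original screenshot is not shown; a minimal sketch of the source-address-hash group (ports assumed):

```nginx
upstream webserver {
    ip_hash;  # same client source address always goes to the same server
    server 192.168.0.20:80;
    server 192.168.0.22:80;
}
```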

  Note: the above configuration means that requests from the same client source address are always scheduled to the same back-end server for response

  Note: you can see that the same client is always scheduled to the same server to respond; this is called source address binding. Besides ip_hash, source address binding can also be specified with hash key; the above configuration is equivalent to hash $remote_addr;

  Binding based on the user's request uri: requests for the same uri are always dispatched to the same server to respond; the benefit of doing so is an improved cache hit rate;
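  A sketch of uri binding with the hash directive ($request_uri is the standard nginx variable holding the full request uri; ports assumed):

```nginx
upstream webserver {
    hash $request_uri;  # same uri always goes to the same server
    server 192.168.0.20:80;
    server 192.168.0.22:80;
}
```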

  Note: the above configuration binds on the user's request uri; when different users request the same uri, nginx will always schedule requests for that uri to the same server to respond;

  Tip: you can see that when different clients request the same uri, the request is scheduled according to the uri to the same server to respond. From the configuration examples above we know that whatever we use as the hash key, we can achieve binding to back-end servers based on it: if the hash is over the user's request uri, requests for the same uri are scheduled to the same server; if the hash is over the user's source IP address, requests from the same source IP, no matter what uri they request, are dispatched to the same server. Following this logic, we can do scheduling bound to any piece of user information;

  These are the common nginx configurations for layer-7 load balancing of http requests. To conclude: the core idea of using nginx as a load balancer is to merge servers providing the same service into one group, then reverse-proxy user requests to that group based on the protocol, and use a scheduling algorithm to decide which server in the group responds to a given request;

Origin www.cnblogs.com/qiuhom-1874/p/12458159.html