Layer 4 (TCP) load balancing with Nginx


HTTP load balancing, what we usually call "Layer 7 load balancing", works at the seventh layer, the application layer. TCP load balancing, what we usually call "Layer 4 load balancing", works at the network and transport layers. LVS (Linux Virtual Server) and F5 (a hardware load-balancing device), for example, are also Layer 4 load balancers.

Nginx 1.9.0 added a module for generic TCP proxying and load balancing: ngx_stream_core_module, available since that release. It is not built by default;

to enable it, the --with-stream parameter must be specified at compile time.

1) Configure the Nginx build parameters

./configure --with-http_stub_status_module --with-stream

------------------------------------------------------------------

2) Compile and install: make && make install
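After installing, you can confirm that the stream module was compiled in (assuming the new nginx binary is on your PATH) by inspecting the configure arguments:

nginx -V 2>&1 | grep stream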

------------------------------------------------------------------

3) Edit the configuration file nginx.conf

stream {

    upstream kevin {
        server 192.168.10.10:8080;   # backend address to proxy
        server 192.168.10.20:8081;
        server 192.168.10.30:8081;   # backend port; here I proxy the kevin module's 8081 interface
    }

    server {
        listen 8081;        # port Nginx listens on
        proxy_timeout 20s;
        proxy_pass kevin;
    }

}

The stream block is created at the top level (the same level as http). Inside it we define an upstream group named kevin, made up of multiple servers to achieve load balancing, and a server that listens for TCP connections (here on port 8081) and proxies them to the kevin upstream group. Load-balancing behavior is tuned through the configuration parameters of each server, such as connection limits, weights, and so on.

First, create the server group that will serve as the TCP load-balancing group: define an upstream block in the stream context, add servers to it with the server directive, and specify each one's IP address or hostname (a hostname may resolve to multiple addresses) and port number. The example below builds a group called kevin, with two servers listening on port 8081 and one server listening on port 8080.

upstream kevin {
    server 192.168.10.10:8080;   # backend address to proxy
    server 192.168.10.20:8081;
    server 192.168.10.30:8081;   # backend port; here I proxy the kevin module's 8081 interface
}

Of particular note: you cannot specify a protocol for each individual server, because the stream directive establishes TCP as the protocol for the whole group.

Next, configure Nginx to act as a TCP reverse proxy so it can forward client requests to the load-balancing group (e.g. the kevin group). Each virtual server is described by its own server block: the listen directive defines the port the server listens on (the port clients connect to; for instance, the kevin service I proxy here uses port 8081), and the proxy_pass directive tells Nginx which upstream group the TCP traffic should be sent to. Here we send the TCP traffic to the kevin group.

server {
    listen 8081;        # port Nginx listens on
    proxy_timeout 20s;
    proxy_pass kevin;
}

Of course, we can also proxy to a single server instead of a group:

server {
    listen 8081;                     # port Nginx listens on
    proxy_timeout 20s;
    proxy_pass 192.168.10.30:8081;   # backend port; here I proxy the kevin module's 8081 interface
}
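Whichever form you use, it is worth validating and reloading the configuration before relying on it (assuming the nginx binary is on your PATH):

nginx -t            # check the configuration syntax
nginx -s reload     # apply the new configuration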

------------------------------------------------------------------

4) Changing the load-balancing method:

By default, Nginx balances load using the round-robin algorithm: requests are distributed cyclically across the servers configured in the upstream group. Because round robin is the default method, there is no directive for it;

simply create an upstream group in the stream context and add servers to it.
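A minimal sketch of such a default round-robin group (same example addresses as above):

upstream kevin {
    # no balancing directive needed: round robin is the default
    server 192.168.10.10:8080;
    server 192.168.10.20:8081;
    server 192.168.10.30:8081;
}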

a) least_conn: for each request, Nginx selects the server with the fewest active connections to process it:

upstream kevin {
    least_conn;
    server 192.168.10.10:8080;   # backend address to proxy
    server 192.168.10.20:8081;
    server 192.168.10.30:8081;   # backend port; here I proxy the kevin module's 8081 interface
}

b) least_time: for each connection, NGINX Plus selects the server with the lowest average latency and the fewest active connections; the latency is calculated from the parameter specified on the least_time directive (this method is part of the commercial NGINX Plus):

connect: time spent connecting to the server

first_byte: time until the first byte is received

last_byte: time until the last byte is received (the complete response)

upstream kevin {
    least_time first_byte;
    server 192.168.10.10:8080;   # backend address to proxy
    server 192.168.10.20:8081;
    server 192.168.10.30:8081;   # backend port; here I proxy the kevin module's 8081 interface
}

c) hash: Nginx selects the server based on a user-defined key, here the client IP address ($remote_addr):

upstream kevin {
    hash $remote_addr consistent;
    server 192.168.10.10:8080 weight=5;                       # backend address to proxy
    server 192.168.10.20:8081 max_fails=2 fail_timeout=30s;
    server 192.168.10.30:8081 max_conns=3;                    # backend port; here I proxy the kevin module's 8081 interface
}

How Nginx implements TCP load balancing

When Nginx receives a new client connection on a listening port, it immediately runs the scheduling algorithm to determine the server IP it must connect to, then creates a new upstream connection to the chosen server.

TCP load balancing supports Nginx's existing scheduling algorithms, including round robin (the default) and hash (with consistent hashing), among others. The scheduling data also works together with the health-detection module to select an appropriate healthy target upstream server for each new connection. With the hash scheduling method, you can use $remote_addr (the client IP) to achieve simple session persistence: connections from the same client IP always land on the same server.

Like other upstream modules, the TCP stream module supports custom forwarding weights (configured as "weight=2"), as well as the down and backup parameters, used to kick failed upstream servers out of rotation. The max_conns parameter limits the number of TCP connections to a server; by setting a value appropriate to the server's capacity, especially in high-concurrency scenarios, it provides overload protection.
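A sketch combining these parameters (the fourth address, 192.168.10.40, is hypothetical, added only to illustrate down; the values are illustrative, not recommendations):

upstream kevin {
    server 192.168.10.10:8080 weight=2;        # receives twice the default share of new connections
    server 192.168.10.20:8081 max_conns=100;   # cap concurrent TCP connections for overload protection
    server 192.168.10.30:8081 backup;          # used only when the primary servers are unavailable
    server 192.168.10.40:8081 down;            # hypothetical server, temporarily removed from rotation
}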

Nginx monitors both the client connection and the upstream connection. Once data is received, Nginx reads it and immediately pushes it to the other connection; it does not inspect the data inside the TCP connection. Nginx maintains an in-memory buffer for client and upstream writes; if the client or server transfers a large amount of data, the buffer size is increased accordingly.
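The size of this buffer can be tuned with the proxy_buffer_size directive of the stream proxy module (the value shown matches the module's default and is only illustrative):

server {
    listen 8081;
    proxy_buffer_size 16k;   # buffer used for reading data from the client and from the proxied server
    proxy_pass kevin;
}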

When Nginx is notified that either side has closed its connection, or when the TCP connection has been idle for longer than the configured proxy_timeout, the connection is closed. For long-lived TCP connections, we should choose an appropriate proxy_timeout and, at the same time, pay attention to the so_keepalive parameter of the listening socket to prevent premature disconnection.
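A sketch of a server block tuned for long-lived connections (the timeout and keepalive values are illustrative, not recommendations):

server {
    listen 8081 so_keepalive=30m::10;   # TCP keepalive: 30 min idle, OS-default probe interval, 10 probes
    proxy_timeout 10m;                  # close after 10 minutes with no data in either direction
    proxy_pass kevin;
}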

The TCP load-balancing module supports built-in health detection: if an upstream server rejects the TCP connection, or the connection takes longer than the configured proxy_connect_timeout, the server is considered failed. In that case, Nginx immediately tries to connect to another healthy server in the upstream group. Connection failures are recorded in Nginx's error log.

If a server fails repeatedly (exceeding the limits configured by max_fails and fail_timeout), Nginx kicks that server out of the group. Sixty seconds after it has been kicked out, Nginx begins occasionally trying to reconnect to it to detect whether it has recovered. If the server is back to normal, Nginx adds it back into the upstream group and slowly ramps up its share of connection requests.
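A sketch tying together the failure-handling knobs from the last two paragraphs (the values are illustrative):

upstream kevin {
    server 192.168.10.10:8080 max_fails=3 fail_timeout=30s;   # 3 failures within 30s marks the server down for 30s
    server 192.168.10.20:8081;
}

server {
    listen 8081;
    proxy_connect_timeout 3s;   # a connect attempt slower than this counts as a failure
    proxy_pass kevin;
}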

Of the "slow increase" because there is usually a service "hot data", that is to say, 80% or even more requests will be blocked in the actual "hot data cache", the request processing is performed only true a small part. When the machine has just started, "hot data cache" in fact has not been established, this time a large number of explosive forwards the request to come, it is probable that the machine can not "afford" and hung up again. As an example to mysql, mysql our queries, usually more than 95% of all falls in the cache memory, not much really to execute the query.

In fact, whether for a single machine or a cluster, restarting or switching over under high-concurrency traffic carries this risk. There are two main ways to address it:

1) Ramp requests up gradually from few to many, accumulating hot data step by step until the normal service state is reached.

2) Prepare the "common" data in advance, actively "warming up" the service, and only open the server to traffic once the warm-up is complete.

In principle, TCP load balancing works the same way as LVS and similar tools: it operates at a lower layer, so its performance is far better than that of plain HTTP load balancing. It is still not as good as LVS, though: LVS runs as a kernel module, while Nginx works in user mode, and Nginx is relatively heavyweight. Another regrettable point: this module started out as a paid (NGINX Plus) feature.


Origin: blog.csdn.net/qingdao666666/article/details/104767624