Nginx: upstream module

Configuration parsing stage


init_main stage

During init_upstream, a single upstream server may resolve to multiple IP addresses via DNS, so init_upstream generates one peer for every resolved address. For example, if a server is configured with weight 10 and its name resolves to 5 IPs, 5 rr peers are created, each with weight 10.

Call kcf->original_init_upstream (ngx_http_upstream_init_round_robin):
generates the ngx_http_upstream_rr_peers_t structure, a global table built from the information of all backend servers, containing their addresses, weights, maximum number of connections, maximum number of failures, and so on.
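
A simplified sketch of how this table is filled, modeled on the loop in ngx_http_upstream_init_round_robin (field names follow the nginx source; allocation, backup servers, and error handling are omitted). With one server of weight 10 that resolves to 5 addresses, the inner loop runs 5 times and produces 5 peers, each with weight 10:

for (n = 0, i = 0; i < us->servers->nelts; i++) {
    for (j = 0; j < server[i].naddrs; j++) {
        peer[n].sockaddr = server[i].addrs[j].sockaddr; // one resolved address
        peer[n].socklen  = server[i].addrs[j].socklen;
        peer[n].name     = server[i].addrs[j].name;
        peer[n].weight           = server[i].weight;    // e.g. 10 for every address
        peer[n].effective_weight = server[i].weight;
        peer[n].current_weight   = 0;
        peer[n].max_fails    = server[i].max_fails;
        peer[n].fail_timeout = server[i].fail_timeout;
        peer[n].down         = server[i].down;
        n++;
    }
}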

Call uscf->peer.init_upstream (ngx_http_upstream_init_keepalive):
initializes the cache queue and the free queue. The cache queue stores established connections and is global (per worker); the free queue holds the max_cached nodes used to store released connections. Initially all nodes hang on the free queue.
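
For reference, the keepalive module's per-upstream configuration structure looks roughly like this in the nginx source (the exact field set varies between versions); cache and free are the two queues just described, and during init_upstream max_cached nodes are allocated and hung on the free queue:

typedef struct {
    ngx_uint_t                      max_cached;             // value of the "keepalive" directive
    ngx_queue_t                     cache;                  // idle established connections
    ngx_queue_t                     free;                   // unused cache nodes
    ngx_http_upstream_init_pt       original_init_upstream; // ngx_http_upstream_init_round_robin
    ngx_http_upstream_init_peer_pt  original_init_peer;     // ngx_http_upstream_init_round_robin_peer
} ngx_http_upstream_keepalive_srv_conf_t;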

Connection initialization phase

Call kcf->original_init_peer (ngx_http_upstream_init_round_robin_peer):
initializes the ngx_http_upstream_rr_peer_data_t structure (r->upstream->peer.data, which ultimately ends up in kp->data), the request's private data. It stores all available backend peers, the currently selected backend (current), and a bitmap recording which backends this connection has already tried, and it initializes the get and free methods.

typedef struct {
    ngx_uint_t                      config;
    ngx_http_upstream_rr_peers_t   *peers;   // points to the global rr peers table
    ngx_http_upstream_rr_peer_t    *current; // rr peer currently in use
    uintptr_t                      *tried;   // bitmap pointer, one bit per rr peer, marks whether this connection has tried that peer
    uintptr_t                       data;    // the bitmap itself
} ngx_http_upstream_rr_peer_data_t;
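
How tried and data relate can be seen in a simplified fragment of ngx_http_upstream_init_round_robin_peer: when the number of peers fits into one machine word, data itself is used as the bitmap; otherwise a larger array is allocated from the request pool (error handling omitted):

if (rrp->peers->number <= 8 * sizeof(uintptr_t)) {
    rrp->tried = &rrp->data;   // small upstream: the single word is the bitmap
    rrp->data = 0;

} else {
    n = (rrp->peers->number + (8 * sizeof(uintptr_t) - 1))
            / (8 * sizeof(uintptr_t));
    rrp->tried = ngx_pcalloc(r->pool, n * sizeof(uintptr_t));
}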

Call uscf->peer.init (ngx_http_upstream_init_keepalive_peer):
initializes the ngx_http_upstream_keepalive_peer_data_t structure (kp, stored in r->upstream->peer.data), the request's private data, and initializes the get and free methods.

typedef struct {
    ngx_http_upstream_keepalive_srv_conf_t  *conf;     // keepalive module configuration
    ngx_http_upstream_t                     *upstream; // upstream structure that holds the connection
    void                                    *data;     // saves the ngx_http_upstream_rr_peer_data_t
    ngx_event_get_peer_pt                    original_get_peer;
    ngx_event_free_peer_pt                   original_free_peer;
} ngx_http_upstream_keepalive_peer_data_t;
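
The chaining of the two init functions can be summarized by a simplified version of ngx_http_upstream_init_keepalive_peer (SSL session hooks and error handling omitted): it first lets the round-robin module fill r->upstream->peer, then saves those pointers into kp and replaces them with the keepalive versions:

kcf = ngx_http_conf_upstream_srv_conf(us, ngx_http_upstream_keepalive_module);

kp = ngx_palloc(r->pool, sizeof(ngx_http_upstream_keepalive_peer_data_t));

kcf->original_init_peer(r, us);        // ngx_http_upstream_init_round_robin_peer

kp->conf = kcf;
kp->upstream = r->upstream;
kp->data = r->upstream->peer.data;     // the rr peer data set up above
kp->original_get_peer = r->upstream->peer.get;
kp->original_free_peer = r->upstream->peer.free;

r->upstream->peer.data = kp;           // keepalive wraps round robin
r->upstream->peer.get = ngx_http_upstream_get_keepalive_peer;
r->upstream->peer.free = ngx_http_upstream_free_keepalive_peer;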

Connection acquisition phase

Call kp->original_get_peer (ngx_http_upstream_get_round_robin_peer):
(1) obtain the best backend;
(2) if obtaining the best backend fails, try to get a backend from the backup servers.

struct ngx_http_upstream_rr_peer_s {
    ngx_str_t                       server;           // server name before resolution
    ngx_int_t                       current_weight;   // current weight of this rr peer
    ngx_int_t                       effective_weight; // current effective weight of this rr peer
    ngx_int_t                       weight;           // configured weight
    ngx_uint_t                      conns;            // current number of connections to this rr peer
    ngx_uint_t                      max_conns;        // maximum number of connections to this rr peer
    ngx_uint_t                      fails;            // number of failures within "a period of time"
    time_t                          accessed;         // time of the most recent failure
    time_t                          checked;          // last check time, used to decide whether "a period of time" has elapsed
    ngx_uint_t                      max_fails;        // maximum number of failures
    time_t                          fail_timeout;     // the "period of time"
    ngx_uint_t                      down;             // this server does not take part in load balancing
};

The round-robin (smooth weighted) load-balancing algorithm: suppose there are three rr peers A, B, C with weights 1, 2, 3. Their initial effective_weight values are 1, 2, 3 and current_weight starts at 0. On the first selection round, the current_weight values become 1, 2, 3; C has the largest current_weight (3), so C is selected as best, and then C's current_weight = 3 - (1 + 2 + 3) = -3. In the following rounds, even though 3 is added to C's current_weight each time, C's current_weight is not necessarily the largest among all rr peers. You can think of it as a queue sorted by current_weight: each selection takes the head of the queue as best, decreases its current_weight, and re-sorts (a standalone sketch of this selection loop is shown below). When a connection to a peer's backend fails, its effective_weight is reduced.
When fails exceeds max_fails, the rr peer becomes unavailable until fail_timeout has elapsed. In addition, each worker keeps its own copy of the rr peers in its own memory (shared memory is not used here), so per-peer state such as current_weight and fails differs between workers, and the number given to the keepalive directive likewise applies per worker.
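
A minimal standalone model of the selection loop (not the nginx source itself, just enough C to reproduce the A/B/C example above; failure handling and effective_weight recovery are left out):

#include <stdio.h>

typedef struct {
    const char *name;
    int         weight;           // configured weight
    int         effective_weight;
    int         current_weight;
} peer_t;

static peer_t *select_peer(peer_t *peers, int n)
{
    int     i, total = 0;
    peer_t *best = NULL;

    for (i = 0; i < n; i++) {
        peers[i].current_weight += peers[i].effective_weight;
        total += peers[i].effective_weight;

        if (best == NULL || peers[i].current_weight > best->current_weight) {
            best = &peers[i];
        }
    }

    best->current_weight -= total;   // e.g. C: 3 - (1 + 2 + 3) = -3
    return best;
}

int main(void)
{
    peer_t peers[] = {
        { "A", 1, 1, 0 },
        { "B", 2, 2, 0 },
        { "C", 3, 3, 0 },
    };
    int i;

    // in 6 rounds C is picked 3 times, B twice, A once, interleaved smoothly
    for (i = 0; i < 6; i++) {
        printf("round %d -> %s\n", i + 1, select_peer(peers, 3)->name);
    }
    return 0;
}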

Call r->upstream->peer.get (ngx_http_upstream_get_keepalive_peer):
(1) search the cache queue for an already-established connection to the best backend, and reuse it if found.
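
Roughly, ngx_http_upstream_get_keepalive_peer does the following (simplified; SSL checks and error handling omitted): run the round-robin selection first, then scan the cache queue for a connection whose address matches the chosen peer:

rc = kp->original_get_peer(pc, kp->data);   // ngx_http_upstream_get_round_robin_peer
if (rc != NGX_OK) {
    return rc;
}

cache = &kp->conf->cache;

for (q = ngx_queue_head(cache);
     q != ngx_queue_sentinel(cache);
     q = ngx_queue_next(q))
{
    item = ngx_queue_data(q, ngx_http_upstream_keepalive_cache_t, queue);

    if (ngx_memn2cmp((u_char *) &item->sockaddr, (u_char *) pc->sockaddr,
                     item->socklen, pc->socklen) == 0)
    {
        ngx_queue_remove(q);
        ngx_queue_insert_head(&kp->conf->free, q);  // node becomes free again

        pc->connection = item->connection;          // reuse the cached socket
        pc->cached = 1;
        return NGX_DONE;                            // no new connection needed
    }
}

return NGX_OK;                                      // no hit: connect as usual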

Connection release phase

Call r->upstream->peer.free (ngx_http_upstream_free_keepalive_peer):
(1) if the free queue is empty, take the tail node from the cache queue to store the connection being released; the connection originally cached in that node is closed. Note that keepalive is not an upper limit on the number of connections the upstream supports, but the number of already-established idle connections to the upstream that are kept cached.
(2) if the free queue is not empty, take a node from the free queue.
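
A simplified view of the caching step in ngx_http_upstream_free_keepalive_peer (the checks that the connection is actually idle and reusable are omitted, and the real module uses its own close helper):

if (ngx_queue_empty(&kp->conf->free)) {
    // no free node: evict the least recently used cached connection
    q = ngx_queue_last(&kp->conf->cache);
    ngx_queue_remove(q);
    item = ngx_queue_data(q, ngx_http_upstream_keepalive_cache_t, queue);
    ngx_close_connection(item->connection);

} else {
    q = ngx_queue_head(&kp->conf->free);
    ngx_queue_remove(q);
    item = ngx_queue_data(q, ngx_http_upstream_keepalive_cache_t, queue);
}

ngx_queue_insert_head(&kp->conf->cache, q);         // most recently used goes first

item->connection = c;                               // c: the connection being released
ngx_memcpy(&item->sockaddr, pc->sockaddr, pc->socklen);
item->socklen = pc->socklen;

pc->connection = NULL;                              // detach it from this request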

Call kp->original_free_peer (ngx_http_upstream_free_round_robin_peer):
(1) if the connection to the peer failed, update some of the peer's parameters, including the failure count fails, the effective weight effective_weight, and so on;
(2) decrement the peer's connection count conns.
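
The failure-handling part of ngx_http_upstream_free_round_robin_peer looks roughly like this (simplified):

if (state & NGX_PEER_FAILED) {
    now = ngx_time();

    peer->fails++;                       // one more failure in the current window
    peer->accessed = now;
    peer->checked = now;

    if (peer->max_fails) {
        peer->effective_weight -= peer->weight / peer->max_fails;
        if (peer->effective_weight < 0) {
            peer->effective_weight = 0;
        }
    }
}

peer->conns--;                           // one less active connection to this peer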

Source: blog.csdn.net/u013032097/article/details/91389801