Nginx seven-layer load balancing


Overview of Nginx load balancing

When our web server faces users directly, it often carries a large number of concurrent requests, which a single server can hardly bear. We can instead use multiple web servers as a cluster, with Nginx doing load balancing at the front end and spreading requests across the back-end server cluster. This greatly improves the system's throughput, request performance, and disaster tolerance.

When massive numbers of user requests arrive, a scheduling node handles them: the scheduling node forwards each user request to the corresponding back-end service node, the service node returns the processed result to the scheduling node, and the scheduling node finally responds to the user. This also achieves balancing, and Nginx here is a typical SLB.

Load balancing goes by many names:

Load Balancing
Load
Load Balance
LB

On public clouds it is called:

SLB: Alibaba Cloud load balancing
QLB: QingCloud load balancing
CLB: Tencent Cloud load balancing
ULB: UCloud load balancing

Common load balancing software

Nginx
Haproxy
LVS

Four-layer load balancing

So-called four-layer load balancing works at the transport layer of the seven-layer OSI model. At the transport layer, Nginx can already operate on TCP/IP: it only needs to forward the client's TCP/IP packets to achieve load balancing. Its performance advantage is speed, since it processes only the lower layers and needs no complex application logic.
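
For illustration, here is a minimal four-layer (TCP) forwarding sketch using Nginx's stream module; it assumes nginx was built with --with-stream, and the port and back-end addresses are made up for the example:

stream {
    upstream tcp_pool {
        server 172.16.1.7:22;        # example back-end TCP services
        server 172.16.1.8:22;
    }
    server {
        listen 2222;                 # raw TCP forwarding, no HTTP parsing at all
        proxy_pass tcp_pool;
        proxy_connect_timeout 1s;
        proxy_timeout 3s;
    }
}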

Seven-layer load balancing

Seven-layer load balancing works at the application layer, where it can handle application-protocol requests, for example HTTP load balancing. It can rewrite HTTP information, rewrite headers, apply security rule control, match and forward on URL rules, and so on. Because so much more can be done at the application layer, Nginx is a typical seven-layer load-balancing SLB.

The difference

Four-layer load balancing distributes packets at the lower transport layer, while seven-layer load balancing distributes them at the topmost application layer. It follows that seven-layer load balancing is less efficient than four-layer load balancing.

But seven-layer load balancing is closer to the service. For example, HTTP is a seven-layer protocol, so with Nginx we can do session persistence, URL path rule matching, header rewriting, and so on, none of which four-layer load balancing can achieve.
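
As a sketch of those seven-layer features, the following hypothetical config routes by URL path and rewrites request headers (the host name demo.drz.com and the pool names are made up):

upstream static_pool { server 172.16.1.7:80; }
upstream app_pool    { server 172.16.1.8:80; }

server {
    listen 80;
    server_name demo.drz.com;

    # URL path rule matching: requests under /static/ go to their own pool
    location /static/ {
        proxy_pass http://static_pool;
    }

    # header rewriting: pass the original host and client address to the back end
    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://app_pool;
    }
}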

Note: four-layer load balancing cannot recognize domain names; seven-layer load balancing can.

Nginx load-balancing configuration scenario

To implement load balancing, Nginx needs the proxy_pass directive of the proxy module.

The difference between Nginx load balancing and Nginx proxying is that with proxying, one location forwards to only one server, whereas with load balancing, the location forwards client requests onward to a virtual pool of upstream servers.
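
To make the contrast concrete, a minimal sketch (the addresses anticipate the environment below; note that an upstream block is defined at the http level, outside any server):

# Nginx proxy: one location forwards to exactly one server
location / {
    proxy_pass http://172.16.1.7:80;
}

# Nginx load balancing: one location forwards to an upstream pool
upstream pool {
    server 172.16.1.7:80;
    server 172.16.1.8:80;
}
location / {
    proxy_pass http://pool;
}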

Preparing the environment

Role    External IP (NAT)    Internal IP (LAN)    Hostname
lb01    eth0: 10.0.0.5       eth1: 172.16.1.5     lb01
web01   eth0: 10.0.0.7       eth1: 172.16.1.7     web01
web02   eth0: 10.0.0.8       eth1: 172.16.1.8     web02

Nginx configuration on web01

[root@web01 ~]# cd /etc/nginx/conf.d/
[root@web01 conf.d]# cat node.conf 
server {
    listen 80;
    server_name node.drz.com;
    location / {
        root /node;
        index index.html;
    }
}
[root@web01 conf.d]# mkdir /node
[root@web01 conf.d]# echo "Web01..." > /node/index.html
[root@web01 conf.d]# systemctl restart nginx

Nginx configuration on web02

[root@web02 ~]# cd /etc/nginx/conf.d/
[root@web02 conf.d]# cat node.conf 
server {
    listen 80;
    server_name node.drz.com;
    location / {
        root /node;
        index index.html;
    }
}
[root@web02 conf.d]# mkdir /node
[root@web02 conf.d]# echo "Web02..." > /node/index.html
[root@web02 conf.d]# systemctl restart nginx

Configuring nginx load balancing on lb01

[root@lb01 ~]# cd /etc/nginx/conf.d/
[root@lb01 conf.d]# cat node_proxy.conf 
upstream node {
    server 172.16.1.7:80;
    server 172.16.1.8:80;
}
server {
    listen 80;
    server_name node.drz.com;
 
    location / {
        proxy_pass http://node;
        include proxy_params;
    }
}
[root@lb01 conf.d]# systemctl restart nginx

Prepare the proxy_params file used by Nginx load-balancing scheduling:

[root@Nginx ~]# vim /etc/nginx/proxy_params
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
 
proxy_connect_timeout 30;
proxy_send_timeout 60;
proxy_read_timeout 60;
 
proxy_buffering on;
proxy_buffer_size 32k;
proxy_buffers 4 128k;

Open a browser and test: refreshing the page switches between web01 and web02.
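
The same test can be run from the command line. Assuming 10.0.0.5 is lb01's external IP from the table above, the default round-robin scheduling should alternate the two test pages:

[root@lb01 ~]# curl -H "Host: node.drz.com" http://10.0.0.5
Web01...
[root@lb01 ~]# curl -H "Host: node.drz.com" http://10.0.0.5
Web02...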

Requirement: in the load-balancing setup, if the service on one back-end machine goes down, users must not be affected. The solution is the proxy_next_upstream directive:

proxy_next_upstream error timeout http_500 http_502 http_503 http_504;

vim upstream.conf 
upstream  node {
        server 172.16.1.7;
        server 172.16.1.8;
}

server {
        listen 80;
        server_name blog.drz.com;

    location / {
            proxy_pass http://node;
            proxy_next_upstream error timeout http_500 http_502 http_503 http_504;
            include proxy_params;
    }

}
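
To verify the failover, you could stop the service on one back end and confirm that requests are still answered by the other (assuming web02 serves the same test page):

[root@web01 ~]# systemctl stop nginx
[root@lb01 ~]# curl -H "Host: blog.drz.com" http://10.0.0.5
Web02...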

Nginx load-balancing scheduling algorithms

Algorithm          Description
rr (round robin)   Assigns requests to each back-end server in turn, in order (the default)
weight (wrr)       Weighted round robin: the higher the weight, the greater the probability of receiving requests
ip_hash            Assigns requests by a hash of the client IP, so requests from the same IP are pinned to one back-end server
url_hash           Assigns requests by a hash of the visited URL, directing each URL to the same back-end server
least_conn         Assigns requests to the machine with the fewest active connections
# round robin (rr), the default
upstream load_pass {
    server 10.0.0.7:80;
    server 10.0.0.8:80;
}

# weighted round robin (wrr)
upstream load_pass {
    server 10.0.0.7:80 weight=5;
    server 10.0.0.8:80;
}


# ip_hash: if all clients come in through the same proxy, one server may end up with too many connections
upstream load_pass {
    ip_hash;
    server 10.0.0.7:80 weight=5;
    server 10.0.0.8:80;
}
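
The table above also lists least_conn and url_hash; sketches of both follow. In stock nginx, url_hash is expressed with the generic hash directive (available since nginx 1.7.2):

upstream load_pass {
    least_conn;            # pick the back end with the fewest active connections
    server 10.0.0.7:80;
    server 10.0.0.8:80;
}

upstream load_pass {
    hash $request_uri;     # the same URL is always directed to the same back end
    server 10.0.0.7:80;
    server 10.0.0.8:80;
}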

Nginx back-end server states

The states a back-end web server can have in front-end Nginx load-balancing scheduling:

State          Description
down           The server temporarily does not participate in load balancing
backup         Reserved backup server
max_fails      The number of failed requests allowed
fail_timeout   How long the server is suspended after max_fails failures
max_conns      Limits the maximum number of accepted connections
upstream load_pass {
    # takes no part in any scheduling; typically used for planned maintenance
    server 10.0.0.7:80 down;
}


upstream load_pass {
    server 10.0.0.7:80 down;
    server 10.0.0.8:80 backup;
    server 10.0.0.9:80 max_fails=1 fail_timeout=10s;
}
 
location  / {
    proxy_pass http://load_pass;
    include proxy_params;
}

upstream load_pass {
    server 10.0.0.7:80;
    server 10.0.0.8:80 max_fails=2 fail_timeout=10s;
}


upstream load_pass {
    server 10.0.0.7:80;
    server 10.0.0.8:80 max_conns=1;
}

Nginx load balancing health checks

The official Nginx modules do not include a health-check module for load-balanced back-end nodes, but the third-party module nginx_upstream_check_module can be used to check the health of back-end services.

Third-party module project address: https://github.com/yaoweibin/nginx_upstream_check_module

1. Install dependencies

[root@lb02 ~]# yum install -y gcc glibc gcc-c++ pcre-devel openssl-devel patch

2. Download the nginx source package and the third-party module nginx_upstream_check_module

[root@lb02 ~]# wget http://nginx.org/download/nginx-1.14.2.tar.gz
[root@lb02 ~]# wget https://github.com/yaoweibin/nginx_upstream_check_module/archive/master.zip

3. Extract the nginx source package and the third-party module

[root@lb02 ~]# tar xf nginx-1.14.2.tar.gz
[root@lb02 ~]# unzip master.zip

4. Enter the nginx directory and apply the patch (pick the patch matching nginx 1.14; -p1 means the patch is applied from inside the nginx directory, -p0 means from outside it)

[root@lb02 ~]# cd nginx-1.14.2/
[root@lb02 nginx-1.14.2]# patch -p1 <../nginx_upstream_check_module-master/check_1.14.0+.patch
./configure --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --add-module=/root/nginx_upstream_check_module-master --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -fPIC' --with-ld-opt='-Wl,-z,relro -Wl,-z,now -pie'
[root@lb02 nginx-1.14.2]# make && make install
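
After the build, you can confirm the third-party module was compiled in, since nginx -V prints the configure arguments:

[root@lb02 nginx-1.14.2]# nginx -V 2>&1 | grep -o nginx_upstream_check_module-master
nginx_upstream_check_module-master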

5. Add the health-check function to the existing load balancer

[root@lb01 conf.d]# cat proxy_web.conf
upstream node {
    server 172.16.1.7:80 max_fails=2 fail_timeout=10s;
    server 172.16.1.8:80 max_fails=2 fail_timeout=10s;
    check interval=3000 rise=2 fall=3 timeout=1000 type=tcp;
    #interval  probe interval, in milliseconds
    #rise      mark this back end as up after 2 successful probes
    #fall      mark this back end as down after 3 failed probes
    #type      probe type, tcp here
    #timeout   probe timeout, in milliseconds
}

server {
    listen 80;
    server_name node.drz.com;
    location / {
        proxy_pass http://node;
        include proxy_params;
    }

    location /upstream_check {
        check_status;
    }
}
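
After reloading nginx, the status page can be opened in a browser or fetched with curl; check_status renders a report of each back end's up/down state and probe counters (assuming node.drz.com resolves to lb01):

[root@lb01 conf.d]# nginx -s reload
[root@lb01 conf.d]# curl http://node.drz.com/upstream_check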

Nginx load balancing: session persistence

Session-persistence problems that arise when using load balancing can be solved in the following ways:
1. ip_hash in nginx: requests are assigned to a back end according to the client IP
2. Server-side session sharing (NFS, MySQL, memcache, redis, file)

Before solving the load-balancing session problem, we need to understand the difference between a session and a cookie.
The browser stores the cookie; every time the browser sends a request to the server, the cookie is automatically carried in the request headers.
The server uses the user's cookie as a key to look up the corresponding value (the session) in its store.
Under the same site domain name, the cookie is the same, so no matter which server a user's request is assigned to, the cookie does not change, and the session corresponding to that cookie is therefore also unique. So as long as all the business servers access the same shared storage (NFS, MySQL, memcache, redis, file), the problem is solved.
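
The mechanism is easy to observe with curl; a hypothetical exchange is shown below (PHPSESSID is PHP's default session cookie name, and the actual ID value will differ):

# first response: the server issues a session cookie
[root@lb01 ~]# curl -si http://php.drz.com/ | grep -i "Set-Cookie"

# later requests carry the cookie back; the server uses it as the key to look up the session
[root@lb01 ~]# curl -s -H "Cookie: PHPSESSID=<id-from-above>" http://php.drz.com/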

1. Configure nginx on web01

[root@web01 conf.d]# cat php.conf
server {
    listen 80;
    server_name php.drz.com;
    root /code/phpMyAdmin-4.8.4-all-languages;

    location / {
        index index.php index.html;
    }

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
[root@web01 conf.d]# systemctl restart nginx

2. Install phpMyAdmin (on both web01 and web02)

[root@web01 conf.d]# cd /code
[root@web01 code]# wget https://files.phpmyadmin.net/phpMyAdmin/4.8.4/phpMyAdmin-4.8.4-all-languages.zip
[root@web01 code]# unzip phpMyAdmin-4.8.4-all-languages.zip

3. Configure phpMyAdmin to connect to the remote database

[root@web01 code]# cd phpMyAdmin-4.8.4-all-languages/
[root@web01 phpMyAdmin-4.8.4-all-languages]# cp config.sample.inc.php config.inc.php
[root@web01 phpMyAdmin-4.8.4-all-languages]# vim config.inc.php
/* Server parameters */
$cfg['Servers'][$i]['host'] = '172.16.1.51';

4. Configure permissions

chown -R www.www /var/lib/php/

[root@web01 phpMyAdmin-4.8.4-all-languages]# ll /var/lib/php/session/
total 4
-rw-------. 1 www www 2424 Aug 21 18:41 sess_e96b27a6a628be47745a10a36e2fcd5a


5. Push the configured phpMyAdmin files and nginx configuration from web01 to host web02

[root@web01 code]# scp -rp  phpMyAdmin-4.8.4-all-languages [email protected]:/code/
[root@web01 code]# scp /etc/nginx/conf.d/php.conf  [email protected]:/etc/nginx/conf.d/

6. Restart the Nginx service on web02

[root@web02 code]# systemctl restart nginx

7. Set permissions on web02

[root@web02 code]# chown -R www.www /var/lib/php/

8. Configure load-balanced access on lb01

[root@lb01 conf.d]# vim proxy_php.com.conf 
upstream php {
        server 172.16.1.7:80;
        server 172.16.1.8:80;
}
server {
        listen 80;
        server_name php.drz.com;
        location / {
                proxy_pass http://php;
                include proxy_params;
        }
}

[root@lb01 conf.d]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@lb01 conf.d]# systemctl restart nginx

With round-robin load balancing in place, you will find that if the session is stored in a local file, you can never stay logged in to php.drz.com: each refresh may land on a different server, whose local session file does not match.

Using redis to solve the session login problem

1. Install the redis in-memory database

[root@db01 ~]# yum install redis -y

2. Configure redis to listen on the 172.16.1.0 network segment

[root@db01 ~]# sed  -i '/^bind/c bind 127.0.0.1 172.16.1.51' /etc/redis.conf

3. Start redis

[root@db01 ~]# systemctl start redis
[root@db01 ~]# systemctl enable redis

4. Configure PHP sessions to connect to redis

#1. Modify the /etc/php.ini file
[root@web ~]# vim /etc/php.ini
session.save_handler = redis
session.save_path = "tcp://172.16.1.51:6379"
;session.save_path = "tcp://172.16.1.51:6379?auth=123" # use this form if redis requires a password
session.auto_start = 1

#2. Comment out the following two lines in php-fpm.d/www.conf; otherwise session data will keep being written to the /var/lib/php/session directory
;php_value[session.save_handler] = files
;php_value[session.save_path]    = /var/lib/php/session

5. Restart php-fpm on web01

[root@web01 code]# systemctl restart php-fpm
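
With php-fpm restarted, a quick sanity check that the new session handler is in effect (the PHP CLI reads the same /etc/php.ini, so this is only an illustrative check):

[root@web01 code]# php -r 'echo ini_get("session.save_handler"), PHP_EOL;'
redis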

6. Push the configured files from web01 to web02

[root@web01 code]# scp /etc/php.ini [email protected]:/etc/php.ini  
[root@web01 code]# scp /etc/php-fpm.d/www.conf [email protected]:/etc/php-fpm.d/www.conf 

7. Restart php-fpm on web02

[root@web02 code]# systemctl restart php-fpm

8. View the data in redis

[root@db01 redis]# redis-cli
127.0.0.1:6379> keys *
1) "PHPREDIS_SESSION:1365eaf0490be9315496cb7382965954"


Source: www.cnblogs.com/1naonao/p/11420825.html