[Reprint] Nginx load balancing | hash algorithm and the sticky module for session persistence


Copyright: creation is not easy, please credit the source: https://blog.csdn.net/ha_weii/article/details/81350087

First, basic load balancing

1. Start nginx on the server

Earlier, /usr/local/nginx/sbin/nginx was symlinked under /sbin, so nginx can be started directly with the nginx command.

2. Modify the main configuration file /usr/local/nginx/conf/nginx.conf

user nginx nginx; # originally nobody; run workers as user nginx, group nginx

# The nginx user needs to be created first:

[root@server4 conf]# useradd -M -d /usr/local/nginx/ nginx

[root@server4 conf]# id nginx

uid=500(nginx) gid=500(nginx) groups=500(nginx)

#  -M, --no-create-home          do not create the user's home directory

#  -d, --home-dir HOME_DIR       home directory of the new account

events {

    worker_connections 65535; # the original value of 1024 was too small

}

Add inside the http block:

        upstream westos { # this name must match the proxy_pass target below

        server 172.25.28.2:80;

        server 172.25.28.3:80;

        }

Finally, append a server block at the end of http, modeled on the example:

        server {

                listen 80;

                server_name www.westos.org; # only www.westos.org is accepted; accessing the IP fails, so the client must resolve the name locally in /etc/hosts:

172.25.28.4 www.westos.org

                location / {

                proxy_pass http://westos; # proxy to the westos upstream

                }

        }

The limits are layered: kernel > operating system > program.

Kernel:

[root@server4 conf]# sysctl -a | grep file

fs.file-nr = 480 0 98861

fs.file-max = 98861 

3. Operating-system file limit

/etc/security/limits.conf

# End of file

nginx - nofile 65536 # larger than the program limit of 65535 and smaller than the kernel limit of 98861
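As an illustration (not part of the original setup), the per-process layer of these limits can be read from inside a program; a minimal Python sketch:

```python
# Read the per-process open-file limit (the innermost of the three layers:
# program limit <= OS limit <= kernel limit).
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft (current process) limit:", soft)
print("hard (OS-enforced ceiling):", hard)

# The soft limit never exceeds the hard limit unless the hard limit is
# unlimited, mirroring the kernel > OS > program ordering described above.
limits_consistent = hard == resource.RLIM_INFINITY or soft <= hard
```

Running `ulimit -n` in a shell shows the same soft value for that shell's process.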

4. Run nginx -t to check for syntax errors

Run nginx -s reload to reload the configuration

5. Test access

Note that the domain name must be resolved first and the site accessed by name: if you access the IP directly, there is no load balancing and nginx is just being used as a plain web server.

6. Load-balancing test

Stopping one RS does not affect access.

Stopping both RSes produces a 502 error; the page shown is 50x.html in nginx's default publishing directory:

[root@server4 html]# pwd

/usr/local/nginx/html

[root@server4 html]# ls

50x.html  index.html

(The official documentation at https://docs.nginx.com/nginx/admin-guide/load-balancer/http-load-balancer/ covers many other kinds of load-balancing configuration.)
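With no extra directives, the upstream above distributes requests round robin; a minimal Python sketch of that default behaviour (server addresses are the ones from the config, the class name is mine):

```python
# Simplified model of nginx's default round-robin upstream selection.
class RoundRobinUpstream:
    def __init__(self, servers):
        self.servers = servers
        self.index = 0

    def pick(self):
        # Each request goes to the next server in turn, wrapping around.
        server = self.servers[self.index % len(self.servers)]
        self.index += 1
        return server

upstream = RoundRobinUpstream(["172.25.28.2:80", "172.25.28.3:80"])
picks = [upstream.pick() for _ in range(4)]
print(picks)  # alternates between the two RSes
```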

Second, the hashing algorithm

Modify the configuration file

http {

        upstream westos{

        ip_hash; # ip_hash directive added

        server 172.25.28.2:80;

        server 172.25.28.3:80;

        }

ip_hash keys on the client address: requests from the same client IP always go to the same RS (unless that RS stops abnormally), while different client IPs may be assigned different RSes.
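A rough Python sketch of that behaviour: nginx's real ip_hash hashes the first three octets of an IPv4 client address, so clients in the same /24 map to the same RS; crc32 here stands in for nginx's internal hash function and is an assumption:

```python
# Simplified ip_hash-style selection keyed on the first three octets.
import zlib

SERVERS = ["172.25.28.2:80", "172.25.28.3:80"]

def pick_by_ip(client_ip):
    key = ".".join(client_ip.split(".")[:3])  # first three octets only
    return SERVERS[zlib.crc32(key.encode()) % len(SERVERS)]

# Requests from the same client (or the same /24) always hit the same RS.
print(pick_by_ip("172.25.28.100"))
print(pick_by_ip("172.25.28.100") == pick_by_ip("172.25.28.200"))  # True
```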

Third, changing weights

(For clearer experimental results, comment out ip_hash.)

http {

        upstream westos{

        #ip_hash;

        server 172.25.28.2:80 weight=2;

        server 172.25.28.3:80;

        }

With weight=2, the back-end server vm2 (172.25.28.2) answers twice as many requests as vm3.
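nginx implements weighted selection with a smooth weighted round-robin algorithm; a small Python sketch of it (the vm2/vm3 names follow the text):

```python
# Smooth weighted round robin: each round every server's current weight
# grows by its configured weight; the largest wins and is docked the total.
def smooth_wrr(weights, n):
    current = {name: 0 for name in weights}
    total = sum(weights.values())
    picks = []
    for _ in range(n):
        for name in current:
            current[name] += weights[name]
        chosen = max(current, key=current.get)
        current[chosen] -= total
        picks.append(chosen)
    return picks

picks = smooth_wrr({"vm2": 2, "vm3": 1}, 6)
print(picks)  # vm2 appears twice for every vm3, interleaved
```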

Fourth, nginx itself acting as a backup web server

http {

        upstream westos{

        #ip_hash;

        server 172.25.28.2:80;

        server 172.25.28.3:80;

        server 127.0.0.1:80 backup; # backup

        }

Only when both back-end servers vm2 and vm3 are stopped does the local machine take over as the web server.
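The backup flag's failover logic can be sketched like this (the is_up health check is a stand-in for nginx's own failure detection):

```python
# A backup server receives traffic only when every primary is down.
def pick_with_backup(primaries, backup, is_up):
    alive = [s for s in primaries if is_up(s)]
    if alive:
        return alive[0]  # normal case: some primary still answers
    return backup        # all primaries down: fall back to the backup

primaries = ["172.25.28.2:80", "172.25.28.3:80"]
backup_server = "127.0.0.1:80"

print(pick_with_backup(primaries, backup_server, lambda s: True))   # a primary
print(pick_with_backup(primaries, backup_server, lambda s: False))  # the backup
```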

Fifth, the sticky module for session persistence

Load balancing raises the problem of keeping sessions; the common approaches are:

1. ip hash: assign requests to servers according to the client's IP;

2. cookie: the server issues a cookie to the client, and requests carrying a specific cookie are routed back to its issuer.

Note: cookies require browser support and can occasionally leak data.
How sticky works:

sticky is an nginx module providing cookie-based load balancing: by issuing and recognizing a cookie, it makes requests from the same client land on the same server. The default cookie is named route.

1. The client makes its first request; nginx sees no cookie in the request headers and forwards the request to a back-end server by round robin.

2. The back-end server processes the request and returns the response to nginx.

3. nginx generates a cookie named route and returns it with the response. The route value corresponds to the back-end server; it may be plain text, or an MD5, SHA1, or similar hash value.

4. The client receives the response and stores the route cookie.

5. On subsequent requests the client sends the route cookie, and nginx forwards the request to the corresponding back-end server according to the value it receives.
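The five steps above can be sketched in Python (the cookie name route matches the text; using MD5 of the server address for the route value is one of the formats the text mentions):

```python
# Minimal model of the sticky flow: the first request is round-robined and
# gets a route cookie; later requests carrying the cookie are pinned.
import hashlib

SERVERS = ["172.25.28.2:80", "172.25.28.3:80"]
# Map each possible route value back to its server (the step-5 lookup).
route_of = {hashlib.md5(s.encode()).hexdigest(): s for s in SERVERS}

class StickyBalancer:
    def __init__(self):
        self.rr = 0

    def handle(self, cookies):
        route = cookies.get("route")
        if route in route_of:                     # steps 4-5: cookie present
            return route_of[route], cookies
        server = SERVERS[self.rr % len(SERVERS)]  # step 1: no cookie, round robin
        self.rr += 1
        digest = hashlib.md5(server.encode()).hexdigest()
        return server, {"route": digest}          # step 3: issue route cookie

lb = StickyBalancer()
server1, cookies = lb.handle({})  # first visit: no cookie yet
server2, _ = lb.handle(cookies)   # cookie sent back: same server
print(server1 == server2)         # True
```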

 

Implementation process

nginx-1.10.1.tar.gz

nginx-sticky-module-ng.tar.gz

1. First stop the original nginx-1.14.0; the current nginx-sticky-module-ng release does not support version 1.14:

nginx -s stop

2. Configure nginx version 1.10.1

Compile

./configure --prefix=/opt/nginx --with-http_ssl_module --with-http_stub_status_module --with-threads --with-file-aio --add-module=/root/nginx-sticky-module-ng

## Add the sticky module

## Do not overwrite the previous install; change the installation prefix

## This is a static build: every module the binary needs must be listed at configure time, before compilation. With dynamic compilation, by contrast, a module can be compiled on its own and loaded after the main build is finished.
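As an aside, nginx 1.9.11 and later can also build modules dynamically with --add-dynamic-module and load them at run time; the module file name below is hypothetical, for illustration only:

```nginx
# Hypothetical dynamic-module load: the .so name depends on how the module
# was built; statically compiled-in modules need no load_module line at all.
load_module modules/ngx_http_sticky_module.so;
```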

make 

make install

Add the configuration file

Copy over the original configuration file:

[root@server4 conf]# pwd

/opt/nginx/conf

[root@server4 conf]# cp /usr/local/nginx/conf/nginx.conf .

cp: overwrite `./nginx.conf'? y

note:

[root@server4 conf]# which nginx

/sbin/nginx

[root@server4 conf]# ll /sbin/nginx

lrwxrwxrwx 1 root root 27 Aug  1 11:55 /sbin/nginx -> /usr/local/nginx/sbin/nginx

We created the nginx symlink earlier, but it points to version 1.14.0, so the bare nginx command cannot be used for this build. We could make another soft link; here we simply use the absolute path.

Check the syntax:

[root@server4 conf]# /opt/nginx/sbin/nginx -t

nginx: the configuration file /opt/nginx/conf/nginx.conf syntax is ok

nginx: configuration file /opt/nginx/conf/nginx.conf test is successful

Modify the configuration file

http {

        upstream westos{

        #ip_hash;

        sticky; # sticky-session directive

        server 172.25.28.2:80;

        server 172.25.28.3:80;

        #server 127.0.0.1:80 backup;    

        }       

3. Test access

Sticky works through browser cookies; curl is not a browser and does not keep cookies by default, so the effect must be observed in a browser.

Press F12 to open the developer tools and view or clear the cookie.

# What is sticky? # To understand sticky, consider the question: how can load balancing be done at all?

DNS resolution: DNS hands out different server IPs for the same name;

IP hash: requests are assigned to servers according to the client's IP;

cookie: the server issues a cookie to the client, and requests carrying a specific cookie are assigned to its issuer.

Sticky is a cookie-based load-balancing solution that maintains the session between client and back-end server, guaranteeing under certain conditions that the same client reaches the same back-end server via a cookie. On a request the server sends a cookie, saying in effect: bring this next time and come straight to me.



Origin www.cnblogs.com/jinanxiaolaohu/p/11264241.html