In production we often want different servers to handle different kinds of content: nginx or Apache serving static files, Apache Tomcat handling dynamic pages, and squid handling images. nginx can sit in front as a dispatcher, giving us static/dynamic separation combined with load balancing.
nginx's upstream module supports five load-balancing methods:
1) round-robin (default)
Each request is assigned to a different back-end server in turn; if a back-end server goes down, it is removed automatically.
2) weight
Requests are distributed in proportion to the configured weights; useful when the back-end servers have uneven performance.
3) ip_hash
Each request is assigned according to a hash of the client IP, so a given visitor always reaches the same back-end server; this solves the session-persistence problem.
4) fair (third party)
Requests are assigned according to back-end response time; servers with shorter response times are preferred.
5) url_hash (third party)
Requests are assigned by a hash of the requested URL, so the same URL is always directed to the same back-end server; this is effective when the back ends are caches.
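As a minimal sketch, here is how an upstream group and a proxy_pass directive fit together (the addresses and group name are illustrative, not part of the setup below):

```nginx
events { worker_connections 1024; }

http {
    upstream backend {                  # round-robin by default
        server 10.0.0.1:80;
        server 10.0.0.2:80 weight=2;    # adding weight gives weighted round-robin
    }
    server {
        listen 80;
        location / {
            proxy_pass http://backend;  # distribute requests across the group
        }
    }
}
```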
Example: load balancing with static/dynamic separation using nginx
Prepare three hosts: one acts as the nginx dispatcher (in a real environment more servers with different roles can be added), the other two act as back-end web servers.
Compile and install nginx from source
1. Install the build tools and dependencies:
[root@LVS2 ~]# yum -y install gcc gcc-c++ autoconf automake
[root@LVS2 ~]# yum -y install zlib zlib-devel openssl openssl-devel pcre pcre-devel
zlib: required by nginx's gzip module
openssl: provides nginx's SSL functionality
pcre: required by the rewrite module for URL rewriting
[root@LVS2 src]# tar -zxvf nginx-1.16.1.tar.gz
[root@LVS2 src]# cd nginx-1.16.1
[root@LVS2 nginx-1.16.1]# ./configure --prefix=/server/nginx-1.16.1 --with-http_dav_module --with-http_stub_status_module --with-http_addition_module --with-http_sub_module --with-http_flv_module --with-http_mp4_module
Parameters:
--with-http_dav_module: enables ngx_http_dav_module (adds the PUT, DELETE, MKCOL, COPY and MOVE methods; MKCOL creates a collection); off by default, so it must be enabled at compile time
--with-http_stub_status_module: enables ngx_http_stub_status_module (reports nginx's working status since the last start)
--with-http_addition_module: enables ngx_http_addition_module (an output filter that appends text before or after a response)
--with-http_sub_module: enables ngx_http_sub_module (lets nginx replace text in responses)
--with-http_flv_module: enables ngx_http_flv_module (provides time-based seeking within FLV files)
--with-http_mp4_module: enables MP4 file support (provides time-based seeking within MP4 files)
[root@LVS2 nginx-1.16.1]# make && make install
[root@LVS2 nginx-1.16.1]# useradd -s /sbin/nologin nginx
nginx main directory structure:
[root@LVS2 /]# ls /server/nginx-1.16.1/
conf html logs sbin
conf # configuration files
html # web site document root
logs # log files
sbin # the nginx binary
The main configuration file:
[root@LVS2 /]# ls /server/nginx-1.16.1/conf/nginx.conf
Start nginx:
[root@LVS2 /]# /server/nginx-1.16.1/sbin/nginx
[root@LVS2 /]# netstat -antup | grep :80
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 5281/nginx: master
Enable at boot:
[root@LVS2 nginx-1.16.1]# echo '/server/nginx-1.16.1/sbin/nginx & ' >> /etc/rc.local
(On systemd-based systems, make sure /etc/rc.d/rc.local is executable, otherwise it will not run at boot.)
Daily operations for the nginx service:
Test the configuration file syntax:
[root@xuegod63 nginx-1.8.0]# /server/nginx-1.8.0/sbin/nginx -t
nginx: the configuration file /server/nginx-1.8.0/conf/nginx.conf syntax is ok
nginx: configuration file /server/nginx-1.8.0/conf/nginx.conf test is successful
Reload the configuration file
[root@xuegod63 nginx-1.8.0]# /server/nginx-1.8.0/sbin/nginx -s reload
Stop nginx:
[root@xuegod63 /]# /server/nginx-1.8.0/sbin/nginx -s stop
[root@xuegod63 /]# /server/nginx-1.8.0/sbin/nginx -s start # there is no 'start' signal (valid -s signals: stop, quit, reopen, reload)
nginx: invalid option: "-s start"
Configure nginx as a dispatcher to achieve static/dynamic separation
[root@xuegod63 conf]# cd /server/nginx-1.8.0/conf # configuration directory
[root@xuegod63 conf]# cp nginx.conf nginx.conf.back # back up the configuration file
[root@xuegod63 conf]# vim nginx.conf # specify the user nginx runs as
Change: # user nobody;
To: user nginx nginx;
Then find:
43 location / {
44     root html;
45     index index.html index.htm;
Inside this location / { ... } block, add the distribution policy below:
location / {
    root html;
    index index.html index.htm;
    if ($request_uri ~* \.html$){
        proxy_pass http://htmlservers;
    }
    if ($request_uri ~* \.php$){
        proxy_pass http://phpservers;
    }
    proxy_pass http://picservers;
}
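The routing logic of the two if blocks above can be pictured with a small shell function (purely illustrative; nginx evaluates the regexes itself):

```shell
# Classify a request URI the same way the location block above does:
# .html -> htmlservers, .php -> phpservers, everything else -> picservers.
route() {
    case "$1" in
        *.html) echo htmlservers ;;
        *.php)  echo phpservers ;;
        *)      echo picservers ;;
    esac
}
route /index.html   # htmlservers
route /test.php     # phpservers
route /logo.png     # picservers
```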
Comment out the following block; otherwise nginx would hand .php files to a local FastCGI process instead of forwarding them to the back-end servers:
72 # location ~ \.php$ {
73 #     root html;
74 #     fastcgi_pass 127.0.0.1:9000;
75 #     fastcgi_index index.php;
76 #     fastcgi_param SCRIPT_FILENAME /server/nginx-1.8.0/html$fastcgi_script_name;
77 #     include fastcgi_params;
78 # }
Before the final closing } at the end of nginx.conf, add the following upstream groups, which define the back-end server IPs:
upstream htmlservers {   # define a load-balancing group name
    server 192.168.204.142:80;
    server 192.168.204.143:80;
}
upstream phpservers {
    server 192.168.204.142:80;
    server 192.168.204.143:80;
}
upstream picservers {
    server 192.168.204.142:80;
    server 192.168.204.143:80;
}
# In production, point each group at the IP addresses of the servers that actually run that service
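The server directive also accepts health-check and tuning parameters; a hedged example (the parameter values here are illustrative, not part of the setup above):

```nginx
upstream htmlservers {
    # mark a server failed after 3 errors, retry it after 30s
    server 192.168.204.142:80 weight=2 max_fails=3 fail_timeout=30s;
    # a backup server receives traffic only when all others are down
    server 192.168.204.143:80 backup;
}
```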
Save and exit.
Test and reload the nginx configuration:
[root@xuegod63 conf]# /server/nginx-1.8.0/sbin/nginx -t
nginx: the configuration file /server/nginx-1.8.0/conf/nginx.conf syntax is ok
nginx: configuration file /server/nginx-1.8.0/conf/nginx.conf test is successful
[root@xuegod63 conf]# /server/nginx-1.8.0/sbin/nginx -s reload
Set up back-end server web1
[root@RS-WEB1 ~]# yum install httpd php -y
[root@RS-WEB1 ~]# echo 192.168.204.142 > /var/www/html/index.html
[root@RS-WEB1 ~]# vim /var/www/html/test.php
192.168.204.142-php
<?php
phpinfo();
?>
Set up back-end server web2
[root@RS-WEB2 ~]# yum install httpd php -y
[root@RS-WEB2 ~]# echo 192.168.204.143 > /var/www/html/index.html
[root@RS-WEB2 ~]# vim /var/www/html/test.php
192.168.204.143-php
<?php
phpinfo();
?>
From a client, access 192.168.204.141 and check that load balancing works. In a production environment, different groups can be defined for different services.
Performance test:
Extension: the "too many open files" error
[root@test html]# ab -n 1000 -c 1000 http://192.168.1.62/index.html # works normally
[root@test html]# ab -n 2000 -c 2000 http://192.168.1.62/index.html # fails
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 192.168.1.62 (be patient)
socket: Too many open files (24) # during the test, too many socket files were opened
#ulimit -a # view all limits
#ulimit -n
1024
By default a process may open at most 1024 files.
Solution:
#ulimit -n 10240 # raise the limit for the current shell
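ulimit -n only affects the current shell. A common way to make the limit persistent across logins is /etc/security/limits.conf (this assumes a system using pam_limits; the values are illustrative):

```
*    soft    nofile    10240
*    hard    nofile    10240
```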
How to configure the five nginx load-balancing policies:
1. round-robin (default)
Each request is assigned to a different back-end server in turn; if a back-end server goes down, it is removed automatically.
upstream backserver {
server 192.168.1.62;
server 192.168.1.64;
}
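Round-robin can be pictured with a tiny shell function that picks the two servers above in turn (illustrative only; nginx keeps this state internally):

```shell
# Return the back end chosen for the Nth request (0-based) under
# round-robin over the two servers defined above.
pick() {
    servers="192.168.1.62 192.168.1.64"
    field=$(( $1 % 2 + 1 ))
    echo "$servers" | cut -d' ' -f"$field"
}
pick 0   # 192.168.1.62
pick 1   # 192.168.1.64
pick 2   # 192.168.1.62
```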
2. weight
Requests are distributed in proportion to the configured weights; useful when the back-end servers have uneven performance.
upstream backserver {
server 192.168.1.62 weight=1;
server 192.168.1.64 weight=2;
}
3. ip_hash (IP binding)
Each request is assigned according to a hash of the client IP, so a given visitor always reaches the same back-end server; this solves the session-persistence problem.
upstream backserver {
ip_hash;
server 192.168.1.62:80;
server 192.168.1.64:80;
}
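The ip_hash idea can be sketched in shell: hash the client IP and map it to a fixed back end, so repeated requests from the same IP always hit the same server (illustrative; nginx's real hash is computed from the leading octets of the IPv4 address):

```shell
# Map a client IP to one of the two back ends above via a hash.
backend_for() {
    h=$(printf '%s' "$1" | cksum | cut -d' ' -f1)
    if [ $(( h % 2 )) -eq 0 ]; then
        echo 192.168.1.62
    else
        echo 192.168.1.64
    fi
}
backend_for 10.0.0.7   # the same IP always maps to the same back end
```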
4. fair (third party)
Requests are assigned according to back-end response time; servers with shorter response times are preferred.
upstream backserver {
server server1;
server server2;
fair;
}
5. url_hash (third party)
Requests are assigned by a hash of the requested URL, so the same URL is always directed to the same back-end server; effective when the back ends are caches.
upstream backserver {
server squid1:3128;
server squid2:3128;
hash $request_uri;
hash_method crc32;
}
Summary and extension:
If tomcat, apache and squid back ends are used, the groups can be configured as follows:
[root@xuegod63 conf]# vim nginx.conf # add the following at the end: define the server groups
upstream tomcat_servers {
server 192.168.1.2:8080;
server 192.168.1.1:8080;
server 192.168.1.11:8080;
}
upstream apache_servers {
server 192.168.1.5:80;
server 192.168.1.177:80;
server 192.168.1.15:80;
}
upstream squid_servers {
server 192.168.1.26:3128;
server 192.168.1.55:3128;
server 192.168.1.18:3128;
}
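These groups only take effect once location blocks route requests to them; a sketch following the same pattern as the earlier distribution policy (which extensions go to which group is an assumption for illustration):

```nginx
location ~* \.jsp$ {
    proxy_pass http://tomcat_servers;   # dynamic JSP pages to tomcat
}
location ~* \.php$ {
    proxy_pass http://apache_servers;   # PHP pages to apache
}
location / {
    proxy_pass http://squid_servers;    # everything else to the squid caches
}
```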