Building a high-availability nginx cluster with Keepalived + LVS + nginx

  nginx is an excellent reverse proxy, with very practical features such as request distribution, load balancing and caching. For request handling, nginx uses the epoll model, an event-driven model, which gives it very high request-processing efficiency; a single instance is commonly said to handle concurrency on the order of a million connections. Requests received by nginx can be distributed to the application servers of the next tier according to a load balancing strategy. These servers are usually deployed as a cluster, so when performance is insufficient, the application tier can be scaled out simply by adding machines to absorb the traffic. In this situation, for some large sites the performance bottleneck shifts to nginx itself: the concurrency of a single nginx instance is capped, and nginx does not natively support a cluster mode, so scaling nginx horizontally becomes particularly important.
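  To make the load-balancing side of this concrete, the following is a minimal sketch (not part of the original setup) of how nginx distributes requests across a group of application servers with an upstream block. It assumes the stock CentOS nginx package, whose main nginx.conf includes /etc/nginx/conf.d/*.conf; the backend addresses 10.0.0.11 and 10.0.0.12 are hypothetical placeholders, and port 8081 is chosen only to avoid clashing with the distribution's default server on port 80:

  # hypothetical upstream sketch; backend addresses are placeholders
  sudo tee /etc/nginx/conf.d/app_upstream.conf > /dev/null <<'EOF'
  upstream app_servers {
      server 10.0.0.11:8080;               # application server 1 (placeholder)
      server 10.0.0.12:8080;               # application server 2 (placeholder)
  }
  server {
      listen 8081;
      location / {
          proxy_pass http://app_servers;   # requests are balanced round-robin by default
      }
  }
  EOF
  sudo nginx -s reload                     # reload nginx so the new configuration takes effect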
  
  keepalived is a tool for server health detection and failover. In its configuration file you can define a master server and a backup server, as well as the health-check request used to probe the servers' state. While running, keepalived keeps sending the configured request to the specified server: if the request returns status code 200, the server is considered healthy; if not, keepalived takes the failed server offline and brings the backup server online.
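  As a rough illustration of the health check just described, the sketch below does by hand what keepalived does automatically: request the service and treat any status code other than 200 as a failure. The URL points at one of the real servers used later in this article; the script itself is only an analogy, not part of keepalived:

  #!/bin/bash
  # manual analogue of keepalived's HTTP status check (illustration only)
  STATUS=$(curl -o /dev/null -s -w "%{http_code}" http://172.16.28.132/)
  if [ "$STATUS" -eq 200 ]; then
      echo "server healthy"
  else
      echo "server unhealthy - keepalived would take it offline and bring the backup online"
  fi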
  
  LVS is a layer-4 load balancing tool. "Layer 4" refers to the seven-layer network model: common protocols such as HTTP sit at layer 7, while LVS operates at layer 4 and below, that is, the transport, network, data link and physical layers. The main transport-layer protocols are TCP and UDP, so LVS mainly supports load balancing for TCP and UDP. Because LVS works at layer 4, its request-handling capacity is much higher than that of ordinary servers: nginx processes requests at layer 7 of the network stack, and LVS's load-balancing capacity is commonly cited as more than ten times that of nginx.
  
  From the description above we can see that in large sites the application servers can be scaled out, but nginx itself does not support horizontal scaling, so nginx becomes the performance bottleneck. Since LVS is a load balancing tool, if we combine LVS with nginx we can deploy multiple nginx servers, let LVS distribute requests evenly across them, and let each nginx server in turn distribute requests to the application servers. In this way we achieve horizontal scaling of nginx. Because nginx is essentially an application server itself, it can also go down, so we combine it with keepalived to detect nginx failures and switch services. In other words, with keepalived + LVS + nginx we obtain a high-availability nginx cluster.
  
  One point is worth noting in the description above: although keepalived + LVS + nginx gives us an nginx cluster, each nginx instance still has its own ip and port (listening on 80 and 443 by default), so how does LVS deliver requests to nginx servers with different ips and ports? This is done through a virtual ip, that is, a single ip exposed to the outside world. External clients all send their requests to this ip; when LVS receives a request for the virtual ip, it selects a target nginx server according to the configured scheduling mode and load balancing strategy and forwards the request to that server. Two concepts appear here in LVS: the scheduler (forwarding mode) and the load balancing strategy. The scheduler refers to the way LVS handles request and response traffic, and there are three main modes:
  
  Virtual Server via Network Address Translation (VS/NAT): the basic principle is that the user sends a request to the virtual ip, LVS selects a target server according to the load balancing algorithm, rewrites the destination ip of the request packet to that server's ip, and sends the packet to it. For the response packet, the scheduler rewrites the source address of the data returned by the target server back to the virtual ip. To the client it therefore looks as if it is talking to a single server. The drawback of this approach is that all response traffic has to pass back through the scheduler, so if the request volume is large, the scheduler becomes the bottleneck of the whole system.
  
  Virtual Server via IP Tunneling (VS/TUN): this mode mainly solves the problem in VS/NAT that response traffic must go back through the scheduler. As in VS/NAT, the scheduler still receives the request and rewrites the destination ip of the packet to the target server's ip, but after the target server has processed the request it rewrites the source ip of the response packet to the virtual ip itself and sends the response directly to the client. In this way responses are handled by each target server without returning through the scheduler, which greatly improves system throughput. Moreover, because request packets are generally much smaller than response packets, the scheduler only has to handle request traffic, and the overall load of the system is spread across the real servers.
  
  Virtual Server via Direct Routing (VS/DR): compared with VS/TUN, the main difference is that VS/NAT and VS/TUN rewrite the destination ip address of the request packet to the target server's ip, whereas VS/DR directly rewrites the destination MAC address of the request packet. This is more efficient, because in VS/TUN the ip address ultimately still has to be resolved to a MAC address before the data can be transmitted.
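  To give a feel for what a DR-mode rule set looks like, here is a sketch of configuring it by hand with ipvsadm, using the virtual ip and real servers introduced in the next section; later in the article keepalived generates equivalent rules from its configuration file, so these commands are only illustrative:

  # add a virtual service on the VIP with round-robin scheduling
  sudo ipvsadm -A -t 172.16.28.120:80 -s rr
  # add the real servers; -g selects direct routing (DR) mode
  sudo ipvsadm -a -t 172.16.28.120:80 -r 172.16.28.132:80 -g
  sudo ipvsadm -a -t 172.16.28.120:80 -r 172.16.28.133:80 -g
  # list the resulting rules
  sudo ipvsadm -ln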
  
  1. Environment preparation
  
  1. VMware;

  2. Four CentOS 7 virtual hosts: 172.16.28.130, 172.16.28.131, 172.16.28.132, 172.16.28.133

  3. System services: LVS, Keepalived

  4. Web server: Nginx

  5. Cluster mode: LVS DR mode
  
  2. Software installed
  
  On the four virtual machines, we build the cluster as follows:
  
  172.16.28.130 LVS + keepalived
  
  172.16.28.131 LVS + keepalived
  
  172.16.28.132 nginx
  
  172.16.28.133 nginx
  
  Here we use the two machines 172.16.28.130 and 172.16.28.131 as the lvs + keepalived machines; their role is load balancing plus failure detection and taking failed nodes offline. We use 172.16.28.132 and 172.16.28.133 as the application servers that actually serve external requests. The four servers together act as a single back-end service, and the virtual ip exposed to the outside is 172.16.28.120. Note that keepalived checks the state of the two LVS servers: one acts as the master server and the other as the backup server, but their load balancing configuration is exactly the same. Under normal circumstances, when a client requests the virtual ip, LVS forwards the request to the master server; the master server then selects an application server according to the configured load balancing strategy and sends the request to it for processing. If at some point the master LVS server goes down due to a failure, keepalived detects the failure, takes the failed node offline and brings the backup machine online, thereby performing the failover.
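  Once the full setup described below is in place, the failover just described can be observed with a few commands; a sketch (assuming the NIC is named ens33 and the virtual ip is 172.16.28.120, as in the configuration later in this article):

  # run on each lvs + keepalived node: only the current master should hold the VIP
  ip addr show ens33 | grep 172.16.28.120
  # simulate a failure on the master
  sudo systemctl stop keepalived
  # after a few seconds, the same grep on the backup node should show the VIP,
  # and requests to 172.16.28.120 continue to be served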
  
  2.1 Installing lvs + keepalived
  
  Install ipvs and keepalived on 172.16.28.130 and 172.16.28.131:
  
  # install ipvs
  sudo yum install ipvsadm

  # install keepalived
  sudo yum install keepalived
  
  Install nginx on 172.16.28.132 and 172.16.28.133:
  
  # install nginx
  sudo yum install nginx
  
  Note that the firewall needs to be turned off on both nginx servers, otherwise the two lvs + keepalived machines will not be able to send requests to the two nginx servers:
  
  # turn off the firewall (stop it now and keep it disabled after reboot)
  systemctl stop firewalld.service
  systemctl disable firewalld.service
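  If you would rather keep firewalld running on the nginx machines, allowing HTTP through it is an alternative; a sketch (the original setup simply disables the firewall, which is fine for a lab environment):

  # alternative to disabling firewalld: permit HTTP traffic and reload the rules
  sudo firewall-cmd --permanent --add-service=http
  sudo firewall-cmd --reload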
  
  Check whether the two load-balancing machines support LVS:
  
  sudo lsmod | grep ip_vs
  
  # if you see output like the following, LVS is supported

  [zhangxufeng@localhost ~]$ sudo lsmod | grep ip_vs
  ip_vs                 145497  0
  nf_conntrack          137239  1 ip_vs
  libcrc32c              12644  3 xfs,ip_vs,nf_conntrack
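  If nothing is printed, the ip_vs module is simply not loaded yet. Besides running sudo ipvsadm once, as described next, the module can also be loaded explicitly; a sketch:

  # load the ip_vs kernel module explicitly and confirm it is present
  sudo modprobe ip_vs
  sudo lsmod | grep ip_vs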
  
  If the command above produces no output, execute sudo ipvsadm once to start ipvs; after that the module will show up in the command above. With ipvs started, we can edit the keepalived.conf file in the /etc/keepalived/ directory. We use the 172.16.28.130 machine as the master machine; the master node is configured as follows:
  
  # Global configuration
  global_defs {
      lvs_id director1                # the id of this lvs node
  }

  # VRRP configuration
  vrrp_instance LVS {
      state MASTER                    # specify that the current node is the master node
      interface ens33                 # ens33 is the NIC name, which can be checked with ifconfig or ip addr
      virtual_router_id 51            # virtual router id; the master and backup nodes must use the same value
      priority 151                    # node priority; the higher the value, the higher the priority; the master must be higher than the backup
      advert_int 1                    # interval for sending VRRP advertisements, in seconds
      authentication {
          auth_type PASS              # authentication type, PASS by default
          auth_pass 123456            # authentication password
      }
      virtual_ipaddress {
          172.16.28.120               # the virtual ip
      }
  }

  # Virtual server configuration - for www server
  # back-end real host configuration
  virtual_server 172.16.28.120 80 {
      delay_loop 1                    # health check interval
      lb_algo rr                      # load balancing algorithm, here round robin
      lb_kind DR                      # forwarding mode, here DR
      persistence_timeout 1           # how long requests from the same client keep hitting the same real host
      protocol TCP                    # protocol used to access the back-end real hosts

      # Real server 1 configuration
      # ip and port of real host 1
      real_server 172.16.28.132 80 {
          weight 1                    # weight of this host
          TCP_CHECK {
              connect_timeout 10      # heartbeat check timeout
              nb_get_retry 3          # number of retries after a heartbeat timeout
              delay_before_retry 3    # delay before retrying
          }
      }

      # Real server 2 configuration
      real_server 172.16.28.133 80 {
          weight 1                    # weight of this host
          TCP_CHECK {
              connect_timeout 10      # heartbeat check timeout
              nb_get_retry 3          # number of retries after a heartbeat timeout
              delay_before_retry 3    # delay before retrying
          }
      }
  }
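  After saving the configuration, a quick way to apply it and confirm that this node actually took the master role is sketched below (journalctl assumes a systemd-based CentOS 7 host, as used here):

  # restart keepalived and inspect its recent log output
  sudo systemctl restart keepalived
  sudo journalctl -u keepalived -n 20    # the log should show the VRRP instance entering the MASTER state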
  
  The above is the keepalived configuration of the master node. For the backup node the configuration is almost identical to the master's; only the state and priority parameters differ. The following is the complete configuration of the backup node:
  
  # Global configuration
  global_defs {
      lvs_id director2                # the id of this lvs node
  }

  # VRRP configuration
  vrrp_instance LVS {
      state BACKUP                    # specify that the current node is the backup node
      interface ens33                 # ens33 is the NIC name, which can be checked with ifconfig or ip addr
      virtual_router_id 51            # virtual router id; the master and backup nodes must use the same value
      priority 150                    # node priority; the higher the value, the higher the priority; the master must be higher than the backup
      advert_int 1                    # interval for sending VRRP advertisements, in seconds
      authentication {
          auth_type PASS              # authentication type, PASS by default
          auth_pass 123456            # authentication password
      }
      virtual_ipaddress {
          172.16.28.120               # the virtual ip
      }
  }

  # Virtual server configuration - for www server
  # back-end real host configuration
  virtual_server 172.16.28.120 80 {
      delay_loop 1                    # health check interval
      lb_algo rr                      # load balancing algorithm, here round robin
      lb_kind DR                      # forwarding mode, here DR
      persistence_timeout 1           # how long requests from the same client keep hitting the same real host
      protocol TCP                    # protocol used to access the back-end real hosts

      # Real server 1 configuration
      # ip and port of real host 1
      real_server 172.16.28.132 80 {
          weight 1                    # weight of this host
          TCP_CHECK {
              connect_timeout 10      # heartbeat check timeout
              nb_get_retry 3          # number of retries after a heartbeat timeout
              delay_before_retry 3    # delay before retrying
          }
      }

      # Real server 2 configuration
      real_server 172.16.28.133 80 {
          weight 1                    # weight of this host
          TCP_CHECK {
              connect_timeout 10      # heartbeat check timeout
              nb_get_retry 3          # number of retries after a heartbeat timeout
              delay_before_retry 3    # delay before retrying
          }
      }
  }
  
  The reason the master and backup are configured identically is that when the master goes down, the backup can take over and continue providing service seamlessly using the same configuration.
  
  After the lvs + keepalived machines have been configured, we configure nginx on the two application servers. Here nginx plays the role of the application server: in its configuration file we make it return status code 200 directly, together with a piece of text containing the current host's ip. The configurations are as follows:
  
  worker_processes auto;
  # pid /run/nginx.pid;

  events {
      worker_connections 786;
  }

  http {
      server {
          listen 80;
          # directly return status code 200 together with a piece of text
          location / {
              default_type text/html;
              return 200 "Hello, Nginx Server 172.16.28.132\n";
          }
      }
  }
  
  worker_processes auto;
  # pid /run/nginx.pid;

  events {
      worker_connections 786;
  }

  http {
      server {
          listen 80;
          # directly return status code 200 together with a piece of text
          location / {
              default_type text/html;
              return 200 "Hello, Nginx Server 172.16.28.133\n";
          }
      }
  }
  
  As you can see, the host ip in the text returned by the two machines differs. After nginx has been configured, it can be started with the following command:
  
  sudo nginx
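  Before wiring the application servers up to lvs, each nginx instance can be checked locally; a sketch (run on each of the two nginx machines):

  # validate the configuration syntax and make a local request
  sudo nginx -t
  curl -s http://127.0.0.1/    # should print the "Hello, Nginx Server ..." text configured above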
  
  After starting nginx, we need to configure a virtual ip on the two real servers. This is because we are using the DR scheduling mode of LVS: as mentioned earlier, in this mode the response is returned to the client directly by the real server, and the real server must set the source ip of the response packet to the virtual ip; the virtual ip configured here plays exactly that role. We edit the file /etc/init.d/lvsrs and write the following:
  
  #!/bin/bash
  ifconfig lo:0 172.16.28.120 broadcast 172.16.28.120 netmask 255.255.255.255 up
  route add -host 172.16.28.120 dev lo:0
  echo "0" > /proc/sys/net/ipv4/ip_forward
  echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
  echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
  echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
  echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
  exit 0
  
  lo: the name of the network interface on the current real host;

  172.16.28.120: the virtual ip;
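  A sketch of making the script executable, running it, and confirming that the virtual ip has been bound to the loopback alias on each real server:

  # make the script executable, run it, and verify the VIP on lo:0
  sudo chmod +x /etc/init.d/lvsrs
  sudo /etc/init.d/lvsrs
  ip addr show lo | grep 172.16.28.120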
  
  After the script has been written, run it on both real servers (see the sketch above). Then start the keepalived service on the two lvs + keepalived machines:
  
  sudo service keepalived start
  
  Finally, the configured lvs + keepalived policy can be viewed with the following command:
  
  [zhangxufeng@localhost keepalived]$ sudo ipvsadm -ln
  IP Virtual Server version 1.2.1 (size=4096)
  Prot LocalAddress:Port Scheduler Flags
    -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
  TCP  172.16.28.120:80 rr
    -> 172.16.28.132:80             Route   1      0          0
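  Besides the rule listing above, ipvsadm can also show the live connection table, which makes it easy to watch requests being balanced; a sketch:

  # list current connection entries in numeric form
  sudo ipvsadm -Lnc
  # or refresh the rule view every second while sending test requests
  sudo watch -n 1 ipvsadm -ln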
  
  2.2 Testing the cluster
  
  Following the steps above, we have finished configuring the lvs + keepalived + nginx cluster. Accessing http://172.16.28.120 in a browser, we can see the following response:
  
  Hello, Nginx Server 172.16.28.132
  
  After refreshing the browser several times, the text shown in the browser switches as follows; this is the effect of the lvs load balancing strategy:
  
  Hello, Nginx Server 172.16.28.133
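  The same alternation can be reproduced from the command line with curl; a sketch (the sleep waits out the 1-second persistence window configured above so that consecutive requests are allowed to land on different real servers):

  # send a few requests to the virtual ip and watch the responses alternate
  for i in 1 2 3 4; do
      curl -s http://172.16.28.120/
      sleep 2
  done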
  
  3. Summary
  
  This article first explained how lvs and keepalived work and introduced their main working modes, then described in detail how to build a high-availability nginx cluster with keepalived + lvs + nginx and pointed out the issues that need attention.


Origin www.cnblogs.com/qwangxiao/p/11278978.html