Linux enterprise practice: LVS + keepalived load-balancing cluster

Use LVS to achieve load balancing

Implementation steps:

    # In a plain libvirt/KVM environment, perform this step to create two new virtual machines;
    # VMware users can skip it.

    Real host:
            cd /var/lib/libvirt/images/
            ls
            qemu-img create -f qcow2 -b rhel7.6.qcow2 server3
            qemu-img create -f qcow2 -b rhel7.6.qcow2 server4
server1:
      pcs cluster disable --all
      pcs cluster stop --all
      systemctl status pcsd
      systemctl disable --now pcsd
      ssh server2 systemctl disable --now pcsd



  server3:
          hostnamectl set-hostname server3
          cd /etc/yum.repos.d/
          vim dvd.repo
          yum install -y httpd
          systemctl enable --now httpd
          cd /var/www/html/
          echo vm3 > index.html
          ip addr add 172.25.19.100/24 dev eth0
          yum install -y arptables
          arptables -A INPUT -d 172.25.19.100 -j DROP
          arptables -A OUTPUT -s 172.25.19.100 -j mangle --mangle-ip-s 172.25.19.3
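In LVS-DR mode every real server carries the VIP, so it must be stopped from answering ARP for it: the INPUT rule drops ARP requests for the VIP, and the OUTPUT rule rewrites the ARP source address to the server's own RIP. The two rules differ between server3 and server4 only in the RIP. A small helper sketch (the function name is illustrative, not a standard tool) that prints the pair of rules for any real server:

```shell
# Hypothetical helper: print the ARP-suppression rules for one real
# server, given the shared VIP and that server's own real IP (RIP).
vip_arp_rules() {
  local vip=$1 rip=$2
  echo "arptables -A INPUT -d $vip -j DROP"
  echo "arptables -A OUTPUT -s $vip -j mangle --mangle-ip-s $rip"
}

# Rules for server3; call with 172.25.19.4 to get server4's rules.
vip_arp_rules 172.25.19.100 172.25.19.3
```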


   server4:
          hostnamectl set-hostname server4
          cd /etc/yum.repos.d/
          vim dvd.repo
          yum install -y httpd
          systemctl enable --now httpd
          cd /var/www/html/
          echo vm4 > index.html
          ip addr add 172.25.19.100/24 dev eth0
          yum install -y arptables
          arptables -A INPUT -d 172.25.19.100 -j DROP
          arptables -A OUTPUT -s 172.25.19.100 -j mangle --mangle-ip-s 172.25.19.4


 server2:
        curl server3
        curl server4
        yum install ipvsadm -y
        ip addr add 172.25.19.100/24 dev eth0
        ipvsadm -A -t 172.25.19.100:80 -s rr
        ipvsadm -a -t 172.25.19.100:80 -r 172.25.19.3:80 -g
        ipvsadm -a -t 172.25.19.100:80 -r 172.25.19.4:80 -g
        ipvsadm -ln
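The `-s rr` option selects the round-robin scheduler, which hands successive connections to the real servers in strict rotation. A toy illustration of that rotation (pure bash, not LVS internals):

```shell
# Simulate round-robin selection over the two configured real servers.
servers=(172.25.19.3 172.25.19.4)

# pick N: return the server chosen for the N-th request (0-based).
pick() { echo "${servers[$(( $1 % ${#servers[@]} ))]}"; }

# Four successive requests alternate: .3  .4  .3  .4
for n in 0 1 2 3; do pick "$n"; done
```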


    Real host:
          curl 172.25.19.100

Load balancing (round-robin) is achieved: successive requests to the VIP alternate between the two real servers.

LVS + keepalived: load balancing with health monitoring

Purpose

The previous experiment achieved load balancing. This experiment adds keepalived to provide health monitoring of the backend pool: when Apache on a real server is stopped, the scheduler detects it and stops directing traffic to that server.

lab environment

Five virtual machines: home is the real host, server1 and server2 are the VS (schedulers), and server3 and server4 are the RS (real servers).

Experimental steps

1. Environment setup

server1, server2:
   ipvsadm -C            # clear the manual IPVS rules; they would conflict with keepalived
   ipvsadm -ln
   yum install keepalived -y
   vim /etc/keepalived/keepalived.conf   # this file implements the same load balancing declaratively
        server1 keepalived.conf:
! Configuration File for keepalived
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
    #vrrp_strict
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.19.100
    }
}

virtual_server 172.25.19.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    #persistence_timeout 50
    protocol TCP

    real_server 172.25.19.3 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 172.25.19.4 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
   server2 keepalived.conf:
! Configuration File for keepalived
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
    #vrrp_strict
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.19.100
    }
}

virtual_server 172.25.19.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    #persistence_timeout 50
    protocol TCP

    real_server 172.25.19.3 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 172.25.19.4 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
 systemctl restart keepalived.service
 tail -f /var/log/messages
 ipvsadm -ln      # the virtual server and real servers now appear, configured by keepalived
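`ipvsadm -ln` lists the virtual service and, indented below it with a `->` prefix, each real server. A sketch that extracts the real-server addresses from sample output (the sample text below is illustrative, modeled on the format of `ipvsadm -ln`):

```shell
# Sample output in the shape ipvsadm -ln produces for this setup.
sample='IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.19.100:80 rr
  -> 172.25.19.3:80               Route   1      0          0
  -> 172.25.19.4:80               Route   1      0          0'

# Real-server lines start with "->"; field 2 is addr:port. The regex
# skips the "-> RemoteAddress:Port" header line.
echo "$sample" | awk '$1 == "->" && $2 ~ /^[0-9]/ { print $2 }'
```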


2. Health Check

1) RS (real server) health check

Stop the httpd service on server3:
 
systemctl stop httpd

Then run on the real host:

curl 172.25.19.100

keepalived detects that server3's service is down and removes it from the scheduling policy, so requests are no longer sent to server3.

Insert picture description here
Insert picture description here

A notification email is also sent on server1 and server2:

Restart httpd on server3 and test again from the real host: server3 returns to the rotation.


2) DS (scheduler) health check

Stop the keepalived service on server1:
 
[root@server1 ~]# systemctl stop keepalived.service 

Now on server2:
 
tail -f /var/log/messages
 
The log shows that server2 has taken over from server1 (it becomes MASTER and holds the VIP).

Testing the RS from the real host shows no interruption; keepalived thus removes the single point of failure at the DS.
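The failover above is driven by VRRP priority: server1 (priority 100) is MASTER while its advertisements keep arriving; when they stop, server2 (priority 50) promotes itself. A toy election sketch over advertised priorities (the `elect` helper is illustrative, not part of keepalived):

```shell
# Toy VRRP election: among the routers still advertising,
# the highest priority wins the VIP.
# Args: one "name:priority" pair per advertising router.
elect() {
  printf '%s\n' "$@" | sort -t: -k2 -rn | head -1 | cut -d: -f1
}

elect server1:100 server2:50   # both alive: server1 is MASTER
elect server2:50               # server1 silent: server2 takes over
```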


Origin blog.csdn.net/qq_42958401/article/details/109299498