LVS + Keepalived cluster: detailed configuration and installation

Keepalived hot backup

  • Introduction to Keepalived
    Keepalived was originally developed as a powerful auxiliary tool for LVS, mainly providing failover and health-checking functions. It can promptly determine the availability of the LVS load scheduler and of the node servers, isolate a failed host and replace it with a working one, and rejoin the host to the cluster once it recovers.

  • Keepalived hot backup method
    1. Keepalived uses the VRRP (Virtual Router Redundancy Protocol) hot-standby protocol to implement multi-machine hot backup for Linux servers in software.

    2. VRRP is a router backup solution: several peer routers form a hot-standby group and serve clients through a shared virtual IP address. At any moment only one router in the group acts as the master and provides service; the others stay in a redundant state. If the current master fails, another router automatically takes over the virtual IP address (priority determines the order of succession) and continues to provide service.

    3. Any router in the hot-standby group can become the master. The virtual router's IP address (VIP) can move between the routers in the group, so it is also called a floating (drifting) IP address. With Keepalived, the floating address does not require manually configuring a virtual interface; Keepalived manages it automatically according to the configuration file. A quick way to see which machine currently holds it is shown below.
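For example, the following check shows which scheduler currently owns the floating address (assuming the VIP 192.168.100.100 and the interface ens33 that are used later in this demo):

[root@localhost ~]# ip addr show dev ens33 | grep 192.168.100.100      #### the VIP is listed only on the current master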

LVS + Keepalived high availability cluster

  • Keepalived was designed to build highly available LVS load-balancing clusters: it can call the ipvsadm tool to create virtual servers and manage the server pool, not just provide dual-machine hot standby

  • Building an LVS cluster with Keepalived is easier to operate and maintain

  • Main advantages: hot-standby failover for the LVS load scheduler, which improves availability; health checks on the nodes in the server pool, automatically removing failed nodes and rejoining them after recovery

  • An LVS + Keepalived cluster contains at least two hot-standby load schedulers and three or more node servers

  • When building an LVS cluster with Keepalived you still need the ipvsadm tool, but Keepalived does most of the work automatically; you only run ipvsadm by hand to view and monitor the cluster, as shown below
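For example, once Keepalived is running, the LVS rules it generated can be inspected with a plain read-only query:

[root@localhost ~]# ipvsadm -ln                 #### list the virtual server and real servers that Keepalived created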

Build LVS + Keepalived high availability cluster

This demonstration starts from the DR-mode LVS cluster, adds a second load scheduler, and uses Keepalived to implement hot backup between the master and backup schedulers, building an LVS web site cluster platform that provides both load balancing and high availability.
This demonstration is based on: LVS load balancing - DR mode

Deploy the network environment

A new eNSP topology diagram is not needed; only an extra load scheduler has to be added.

1. One scheduler (vm1 connection mode):
change its IP to 192.168.100.21/24, gateway 192.168.100.1, and restart the network card.

2. Two node servers (vm1 connection mode):
change one node's IP to 192.168.100.22/24, gateway 192.168.100.1, and restart the network card;
change the other node's IP to 192.168.100.23/24, gateway 192.168.100.1, and restart the network card.

3. One NFS shared storage server (vm1 connection mode):
change its IP to 192.168.100.24/24, gateway 192.168.100.1, and restart the network card.

4. One new scheduler to be added (vm1 connection mode):
change its IP to 192.168.100.20/24, gateway 192.168.100.1, and restart the network card.

On every machine, turn off the firewall and SELinux (core protection) and set up the local yum repository.
Apart from the two schedulers, whose configuration changes below, the other machines keep their configuration from the DR-mode setup and need no further changes. A brief sketch of the common preparation follows.
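A minimal sketch of that preparation, assuming CentOS 7 style tools (the local yum repository is whichever one was prepared for the DR-mode demo):

[root@localhost ~]# systemctl stop firewalld && systemctl disable firewalld       #### turn off the firewall
[root@localhost ~]# setenforce 0                                                  #### switch SELinux (core protection) to permissive for this session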

Configure the new load scheduler (192.168.100.20)

  • This load scheduler will act as the master scheduler

[1] Adjust /proc response parameters

[root@localhost network-scripts]# vi /etc/sysctl.conf 
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0

[root@localhost network-scripts]# sysctl -p     #### apply the settings
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0 

[2] Adjust keepalived parameters

[root@localhost ~]# yum -y install keepalived ipvsadm
[root@localhost ~]# cd /etc/keepalived/
[root@localhost keepalived]# cp keepalived.conf keepalived.conf.bak               #### always back up the original first
[root@localhost keepalived]# vi keepalived.conf            #### to avoid mistakes, delete all of the original content and insert the following
 
global_defs {
   router_id HA_TEST_R1
}
vrrp_instance VI_1 {
   state MASTER
   interface ens33
   virtual_router_id 1
   priority 100
   advert_int 1
   authentication {
      auth_type PASS
      auth_pass 123456
   }
   virtual_ipaddress {
      192.168.100.100
   }
}

virtual_server 192.168.100.100 80 {
   delay_loop 15
   lb_algo rr
   lb_kind DR
   persistence_timeout 60
   protocol TCP

   real_server 192.168.100.22 80 {
      weight 1
      TCP_CHECK {
         connect_port 80
         connect_timeout 3
         nb_get_retry 3
         delay_before_retry 4
      }
   }
   real_server 192.168.100.23 80 {
      weight 1
      TCP_CHECK {
         connect_port 80
         connect_timeout 3
         nb_get_retry 3
         delay_before_retry 4
      }
   }
}



[root@localhost keepalived]# systemctl start keepalived             #### start keepalived
[root@localhost keepalived]# systemctl enable keepalived            #### enable keepalived at boot
[root@localhost keepalived]# ip addr show dev ens33                 #### check the interface's own IP address and the floating (VIP) address

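If the VIP does not show up as expected, a hedged way to check what Keepalived is doing (standard systemd and ipvsadm commands, not extra steps from the original walkthrough):

[root@localhost keepalived]# systemctl status keepalived            #### confirm the service is active
[root@localhost keepalived]# journalctl -u keepalived -n 20         #### recent log lines: VRRP state changes and health-check results
[root@localhost keepalived]# ipvsadm -ln                            #### the virtual server table Keepalived built from keepalived.conf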

  • !!! The following is only an annotated explanation of the configuration above; it is for reference and should not be pasted in as-is !!!
global_defs {
   router_id HA_TEST_R1              #### name of this router (server): HA_TEST_R1
}
vrrp_instance VI_1 {                 #### define a VRRP hot-standby instance
   state MASTER                      #### hot-standby state; MASTER means the primary server
   interface ens33                   #### physical interface that carries the VIP address
   virtual_router_id 1               #### virtual router ID; must be the same within a hot-standby group
   priority 100                      #### priority; the larger the value, the higher the priority
   advert_int 1                      #### advertisement interval in seconds (heartbeat frequency)
   authentication {                  #### authentication info; must be the same within a hot-standby group
      auth_type PASS                 #### authentication type
      auth_pass 123456               #### authentication password
   }
   virtual_ipaddress {               #### floating address (VIP); more than one may be listed
      192.168.100.100
   }
}

virtual_server 192.168.100.100 80 {  #### virtual server address (VIP) and port
   delay_loop 15                     #### health check interval (seconds)
   lb_algo rr                        #### round-robin (rr) scheduling algorithm
   lb_kind DR                        #### direct routing (DR) cluster operating mode
   persistence_timeout 60            #### connection persistence time (seconds); if it is commented out with a leading '!', remove the '!' to enable it
   protocol TCP                      #### the application service uses the TCP protocol

   real_server 192.168.100.22 80 {   #### address and port of the first web node
      weight 1                       #### weight of the node
      TCP_CHECK {                    #### health check method
         connect_port 80             #### target port to check
         connect_timeout 3           #### connection timeout (seconds)
         nb_get_retry 3              #### number of retries
         delay_before_retry 4        #### delay between retries (seconds)
      }
   }
   real_server 192.168.100.23 80 {
      weight 1
      TCP_CHECK {
         connect_port 80
         connect_timeout 3
         nb_get_retry 3
         delay_before_retry 4
      }
   }
}

Configure the load scheduler (192.168.100.21)

  • This load scheduler will act as the backup (standby) scheduler

[1] Clear load distribution strategy

[root@localhost /]# ipvsadm -C             #### keepalived will set up the rules automatically; leftover rules could conflict
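An optional check, not part of the original steps, to confirm the old DR-mode rules are really gone before Keepalived takes over:

[root@localhost /]# ipvsadm -ln            #### the virtual server table should now be empty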

[2] Adjust keepalived parameters

[root@localhost ~]# yum -y install keepalived ipvsadm
[root@localhost ~]# cd /etc/keepalived/
[root@localhost keepalived]# cp keepalived.conf keepalived.conf.bak
[root@localhost keepalived]# vi keepalived.conf
 
global_defs {
   router_id HA_TEST_R2
}

vrrp_instance VI_1 {
   state BACKUP
   interface ens33
   virtual_router_id 1
   priority 99
   advert_int 1
   authentication {
      auth_type PASS
      auth_pass 123456
   }
   virtual_ipaddress {
      192.168.100.100
   }
}

virtual_server 192.168.100.100 80 {
   delay_loop 15
   lb_algo rr
   lb_kind DR
   persistence_timeout 60
   protocol TCP

   real_server 192.168.100.22 80 {
      weight 1
      TCP_CHECK {
         connect_port 80
         connect_timeout 3
         nb_get_retry 3
         delay_before_retry 4
      }
   }
   real_server 192.168.100.23 80 {
      weight 1
      TCP_CHECK {
         connect_port 80
         connect_timeout 3
         nb_get_retry 3
         delay_before_retry 4
      }
   }
}


[root@localhost keepalived]# systemctl start keepalived             #### start keepalived
[root@localhost keepalived]# systemctl enable keepalived            #### enable keepalived at boot
[root@localhost keepalived]# ip addr show dev ens33                 #### check the interface's own IP address and the floating (VIP) address


At this point the master server is still online, so the VIP address is still controlled by the master; the other server is in the standby state, and no VIP address is added to the ens33 interface of the standby scheduler. A quick way to watch the failover is sketched below.
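A simple failover check (standard commands; with Keepalived's default preempt behaviour the higher-priority master reclaims the VIP when it comes back):

[root@localhost ~]# systemctl stop keepalived           #### on the master (192.168.100.20): simulate a scheduler failure
[root@localhost ~]# ip addr show dev ens33              #### on the backup (192.168.100.21): the VIP 192.168.100.100 should now appear here
[root@localhost ~]# systemctl start keepalived          #### on the master: bring it back; it takes the VIP over again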

  • !!! On the backup scheduler, only a few parameter values differ from the master; the snippet below is for reference only and will not be repeated in full !!!

    global_defs {
      router_id HA_TEST_R2                  #### name of this router (server): HA_TEST_R2
    }
        .........
        ...
    vrrp_instance VI_1 {
    state BACKUP                            #### hot-standby state; BACKUP means the standby server
    interface ens33
    virtual_router_id 1
    priority 99                             #### priority; the value must be lower than the master's
    .......
    .....
    

The configuration of the two node servers and the NFS shared storage server does not change.

For that configuration, refer to: LVS Load Balancing Cluster - DR (Direct Routing) Mode; the key node-side settings are recapped in the sketch below.
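As a reminder only (a sketch of the usual DR-mode real-server settings from that article, not new steps for this demo): each node carries the VIP on its loopback interface and suppresses ARP replies for it.

[root@localhost ~]# ifconfig lo:0 192.168.100.100 netmask 255.255.255.255 up     #### VIP bound to the loopback alias on each node
[root@localhost ~]# vi /etc/sysctl.conf
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
[root@localhost ~]# sysctl -p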

Real machine test

  • Web access test: browse to the VIP at http://192.168.100.100; the pages served by the node servers are returned.

  • Connectivity test: while the master scheduler is running, the MAC address returned for ARP requests to the VIP is the master scheduler's. After the master scheduler is shut down, the ARP replies carry the standby scheduler's MAC address, showing that the VIP has drifted to the backup. A client-side sketch of this check follows.
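One way to run these checks from a client on the 192.168.100.0/24 network (standard commands; the client host name is just an example):

[root@client ~]# curl http://192.168.100.100/           #### the page comes from one of the web nodes (with persistence, a client sticks to one node for a while)
[root@client ~]# ping -c 1 192.168.100.100              #### refresh the ARP entry for the VIP
[root@client ~]# ip neigh show to 192.168.100.100       #### the MAC shown belongs to whichever scheduler currently holds the VIP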


Origin: blog.csdn.net/XCsuperman/article/details/108751348