Enterprise LVS, Part 3: DR Mode + Keepalived (High Availability)

Simulating high availability with LVS in DR mode

1. Experimental environment: two schedulers running keepalived (172.25.13.110 as master, 172.25.13.120 as backup), two real servers (172.25.13.130 and 172.25.13.140), the VIP 172.25.13.100, and a client host (foundation13).
Keepalived server settings (identical on both machines unless noted below):

1. Download the source package, then compile and install it

keepalived-2.0.6.tar.gz   # source package; must be compiled before installing
tar -zxf keepalived-2.0.6.tar.gz   # extract the compressed file
cd keepalived-2.0.6   # enter the extracted directory
yum install gcc openssl-devel -y   # install the tools needed to compile the source
./configure --prefix=/usr/local/keepalived --with-init=systemd   # configure
make && make install   # compile and install

2. Install ipvsadm, the LVS management tool
yum -y install ipvsadm   # install the management tool
Note: only the tool needs to be installed; no policies need to be configured by hand (keepalived generates them).

3. Create symbolic links so the keepalived configuration files are easy to reach

ln -s /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/ 
ln -s /usr/local/keepalived/etc/keepalived/ /etc
ln -s /usr/local/keepalived/sbin/keepalived /sbin
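
The links can be sanity-checked with a small helper (a sketch; `check_link` is an illustrative name, not part of keepalived):

```shell
# check_link LINK: report whether LINK is a symlink and where it points
check_link() {
    if [ -L "$1" ]; then
        echo "ok: $1 -> $(readlink "$1")"
    else
        echo "missing symlink: $1"
        return 1
    fi
}

# Example usage on a director (after creating the links above):
#   check_link /etc/sysconfig/keepalived
#   check_link /etc/keepalived
#   check_link /sbin/keepalived
```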

4. Edit the keepalived configuration file
On 172.25.13.110 (the master):

! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost  # recipient of alert mail when a node goes down
   }
   notification_email_from keepalived@localhost  # sender address for alert mail
   smtp_server 127.0.0.1  # mail server used to send alerts
   smtp_connect_timeout 30  # SMTP connection timeout
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state MASTER  # marks this node as the master
    interface eth0
    virtual_router_id 13
    priority 100  # election priority (higher wins)
    advert_int 1  # VRRP advertisement interval, 1 s
    authentication {
        auth_type PASS  # authentication type
        auth_pass 1111  # authentication password
    }
    virtual_ipaddress {
        172.25.13.100
    }
}

virtual_server 172.25.13.100 80 {
    delay_loop 6  # interval between health checks, in seconds; alert mail is sent on failure
    lb_algo rr  # LVS scheduling algorithm (round-robin)
    lb_kind DR  # LVS forwarding mode
    protocol TCP  # protocol

    real_server 172.25.13.130 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
        }
    }

    real_server 172.25.13.140 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
        }
    }
}

On 172.25.13.120 (the backup):

! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost  # recipient of alert mail when a node goes down
   }
   notification_email_from keepalived@localhost  # sender address for alert mail
   smtp_server 127.0.0.1  # mail server used to send alerts
   smtp_connect_timeout 30  # SMTP connection timeout
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state BACKUP  # marks this node as the backup
    interface eth0
    virtual_router_id 13
    priority 50  # election priority (lower than the master's)
    advert_int 1  # VRRP advertisement interval, 1 s
    authentication {
        auth_type PASS  # authentication type
        auth_pass 1111  # authentication password
    }
    virtual_ipaddress {
        172.25.13.100
    }
}

virtual_server 172.25.13.100 80 {
    delay_loop 6  # interval between health checks, in seconds; alert mail is sent on failure
    lb_algo rr  # LVS scheduling algorithm (round-robin)
    lb_kind DR  # LVS forwarding mode
    protocol TCP  # protocol

    real_server 172.25.13.130 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
        }
    }

    real_server 172.25.13.140 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
        }
    }
}
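
The backup file above is identical to the master's except for `state` and `priority`. That can be confirmed mechanically (a sketch; `conf_diff` is an illustrative helper, and the file names are placeholders for locally saved copies):

```shell
#!/bin/bash
# conf_diff A B: compare two keepalived configs while ignoring the lines
# that are expected to differ between master and backup (state, priority);
# no output means the files are otherwise identical.
conf_diff() {
    diff <(grep -Ev '^[[:space:]]*(state|priority)' "$1") \
         <(grep -Ev '^[[:space:]]*(state|priority)' "$2")
}

# Example usage (assuming the two files are saved locally):
#   conf_diff master.conf backup.conf
```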

5. Start the service and enable it to start automatically at boot
systemctl start keepalived
systemctl enable keepalived

Real server settings
1. Install httpd and set up a test page
Note: to verify the load-balancing effect, give each back-end real server different page content that clearly identifies which server it came from.

2. Add the VIP to the physical NIC of each real server
ip addr add 172.25.13.100/24 dev eth0   # temporarily add the VIP to eth0
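
Since `ip addr add` does not survive a reboot, it is worth confirming the VIP actually landed on the interface. A sketch of a check that parses `ip addr show` output (`has_vip` is an illustrative name):

```shell
# has_vip ADDR: read `ip addr show` output from stdin and report whether
# the given address is configured on the interface
has_vip() {
    if grep -q "inet $1/"; then
        echo "VIP $1 present"
    else
        echo "VIP $1 missing"
        return 1
    fi
}

# Example usage on a real server (requires the lab host):
#   ip addr show eth0 | has_vip 172.25.13.100
```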

3. Set arptables policies so that client requests for the VIP are handled by the DS (director) rather than answered directly by the RS (real server)

yum install arptables -y   # install the management tool

arptables -A INPUT -d 172.25.13.100 -j DROP   # do not answer ARP requests for the VIP

arptables -A OUTPUT -s 172.25.13.100 -j mangle --mangle-ip-s 172.25.13.130   # rewrite the ARP source address of outgoing packets from the VIP to the real server's own IP (use each RS's own address here)

Test:
1. After setup is complete, run ipvsadm on both schedulers to see the load-balancing policy that keepalived generated automatically:

[root@toto2 keepalived-2.0.6]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.13.100:http rr
  -> 172.25.13.130:http           Route   1      0          0         
  -> 172.25.13.140:http           Route   1      0          0    

2. Test from the client host:

Load balancing works normally:

[kiosk@foundation13 Desktop]$ curl 172.25.13.100
toto3————real_server
[kiosk@foundation13 Desktop]$ curl 172.25.13.100
toto4————real_server
[kiosk@foundation13 Desktop]$ curl 172.25.13.100
toto3————real_server
[kiosk@foundation13 Desktop]$ curl 172.25.13.100
toto4————real_server
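
The same round-robin behavior can be verified mechanically by tallying which real server answers each request (a sketch; `summarize` is an illustrative helper, and the curl loop assumes the lab VIP):

```shell
# summarize: count identical response lines read from stdin, most frequent first
summarize() {
    sort | uniq -c | sort -rn
}

# Example usage against the VIP (requires the lab network):
#   for i in $(seq 1 10); do curl -s 172.25.13.100; done | summarize
# With rr scheduling and two healthy backends, the counts should be equal.
```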

3. Stop the httpd service on real server 172.25.13.140 to simulate a server failure. The failed real server is kicked out of the list, and is added back automatically once its service returns to normal.

[root@toto4 ~]# systemctl stop httpd   # stop one real server

[root@toto2 keepalived-2.0.6]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.13.100:http rr
  -> 172.25.13.130:http           Route   1      0          0          # the stopped server has been removed from the list

Client Access:

[kiosk@foundation13 Desktop]$ curl 172.25.13.100
toto3————real_server
[kiosk@foundation13 Desktop]$ curl 172.25.13.100
toto3————real_server
[kiosk@foundation13 Desktop]$ curl 172.25.13.100
toto3————real_server

After the failed server recovers, the health check detects it and adds it back to the list automatically:

[root@toto4 ~]# systemctl start httpd

[root@toto2 keepalived-2.0.6]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.13.100:http rr
  -> 172.25.13.130:http           Route   1      0          0         
  -> 172.25.13.140:http           Route   1      0          0           # added back automatically after recovery

4. At the moment, the master server is providing the load balancing and holds the VIP. Stop keepalived on the master to simulate a failure of the master load balancer; the backup server then takes over the load-balancing service, and the VIP automatically moves to it.

172.25.13.110: Master

[root@toto1 samples]# ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:ca:52:c8 brd ff:ff:ff:ff:ff:ff
    inet 172.25.13.110/24 brd 172.25.13.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 172.25.13.100/32 scope global eth0     # the VIP, added automatically by keepalived
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:feca:52c8/64 scope link 
       valid_lft forever preferred_lft forever

# stop keepalived to simulate a failure of the master server
[root@toto1 samples]# systemctl stop keepalived

# check the addresses again; the VIP is no longer on this host
[root@toto1 samples]# ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:ca:52:c8 brd ff:ff:ff:ff:ff:ff
    inet 172.25.13.110/24 brd 172.25.13.255 scope global eth0  
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:feca:52c8/64 scope link 
       valid_lft forever preferred_lft forever

With the master load balancer down, the service is provided by the backup server, and the VIP has automatically moved to it:

[root@toto2 keepalived-2.0.6]# ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:43:f6:eb brd ff:ff:ff:ff:ff:ff
    inet 172.25.13.120/24 brd 172.25.13.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 172.25.13.100/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe43:f6eb/64 scope link 
       valid_lft forever preferred_lft forever

# client tests confirm the service still works:
[kiosk@foundation13 Desktop]$ curl 172.25.13.100
toto4————real_server
[kiosk@foundation13 Desktop]$ curl 172.25.13.100
toto3————real_server
[kiosk@foundation13 Desktop]$ curl 172.25.13.100
toto4————real_server
[kiosk@foundation13 Desktop]$ curl 172.25.13.100
toto3————real_server

Analysis: when the master load balancer goes down, the backup load balancer takes over the load-balancing function, avoiding a single point of failure and achieving highly available load balancing.

Origin blog.csdn.net/Y950904/article/details/93485492