Achieving high availability with LVS + Keepalived

1. LVS

1.1 LVS provides multiple scheduling algorithms

Round-Robin Scheduling
Weighted Round-Robin Scheduling
Least-Connection Scheduling
Weighted Least-Connection Scheduling
Locality-Based Least Connections Scheduling
Locality-Based Least Connections with Replication Scheduling
Destination Hashing Scheduling
Source Hashing Scheduling
Shortest Expected Delay Scheduling
Never Queue Scheduling
These correspond to the scheduler names rr | wrr | lc | wlc | lblc | lblcr | dh | sh | sed | nq, as the sketch below shows.
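As a quick illustration, these abbreviations are what you pass to ipvsadm (or to keepalived's lb_algo option, as in the configuration later in this article). A minimal sketch; the VIP and port are placeholders:

ipvsadm -A -t 192.168.9.145:80 -s wlc   // create a virtual service scheduled by weighted least-connection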

1.2 The three forwarding modes of LVS

NAT: simply put, traffic both in and out passes through the LVS director, so performance is not great.
TUN: simply put, tunneling; the director forwards requests through IP tunnels while the real servers reply to clients directly.
DR: direct routing, the most efficient load balancing mode.

1.3 LVS architecture

The bottom layer is the data sharing storage layer, using Shared Storage; the front end is the
load balancing layer, using a Load Balancer; in the middle is the server cluster layer,
the Server Array. To the user, all internal applications are transparent:
the user only sees one high-performance service provided by a single virtual server.

2. What is Keepalived?

Because every request passes through the load balancer, the load balancer is critical and must not go down. To put it bluntly, the job is to keep the LVS alive, hence the name Keepalived.
Keepalived's function is to check the state of the servers. If a web server goes down or stops working, Keepalived detects it, removes the faulty server from the system, and lets the other servers take over its work; once the server works normally again, Keepalived automatically adds it back to the server group. All of this is done automatically, without manual intervention; the only manual task is repairing the failed server.

2.1 Working principle

Layers 3, 4, and 5 work at the IP layer, TCP layer, and application layer of the TCP/IP protocol stack, respectively.
The principles are as follows:
Layer 3: when Keepalived works in Layer 3 mode, it periodically sends an ICMP packet
(the same probe used by the familiar ping program) to the servers in the server group.
If it finds that a server's IP address is not responding, Keepalived reports that server as failed and removes it from the server group.
A typical example of this situation is a server that has been shut down improperly.

The Layer 3 mode uses the reachability of the server's IP address as the criterion for whether the server is working normally.
Layer 4: if you understand the Layer 3 mode, Layer 4 is easy.
Layer 4 decides whether a server is working normally mainly from the state of a TCP port.
For example, a web server's service port is usually 80; if Keepalived detects that port 80 is not listening,
it removes that server from the server group.

Layer 5: Layer 5 performs an HTTP GET on a specified URL, then computes an MD5 digest of the HTTP GET result.
If the digest does not match the expected value, the check fails and the server is removed from the server pool.
This module can run GET checks against multiple URLs of the same service, which is useful if the machine hosts multiple application servers,
since it lets you check whether each application server is working properly.
The MD5 digests are generated with the genhash utility (included in the keepalived package).
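A minimal sketch of such a Layer 5 check as it would appear in keepalived.conf (an alternative to the TCP_CHECK blocks used later in this article); the path is a placeholder, and the digest must come from genhash:

// generate the expected digest first, e.g.: genhash -s 192.168.8.216 -p 80 -u /index.html
real_server 192.168.8.216 80 {
    weight 1
    HTTP_GET {
        url {
            path /index.html
            digest <md5-from-genhash>   // paste the genhash output here
        }
        connect_timeout 5
        nb_get_retry 3
        delay_before_retry 3
    }
}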

2.2 Advantages of LVS

Strong load capacity: the logic of LVS's working modes is very simple, and since it works at network layer 4 it only distributes requests without generating traffic of its own, so efficiency is basically not a concern.
There is a complete dual-machine hot-standby solution: when a node fails, the failure is detected and handled automatically, so the overall system is very stable.
It supports almost all applications: because LVS works at layer 4, it can load-balance nearly any application, including HTTP, databases, chat services, and so on.

2.3 LVS load balancing mechanism

LVS is a layer-4 load balancer; that is, it is built on the fourth layer of the OSI model, the transport layer.
TCP and UDP live at the transport layer, and LVS supports load balancing for both TCP and UDP.
Because LVS balances at layer 4, it is far more efficient than higher-layer solutions
such as round-robin DNS resolution, application-layer scheduling, or client-side scheduling.

LVS forwarding can be implemented by rewriting IP addresses (NAT mode)
or by modifying direct routing (DR mode); see the sketch below for the corresponding ipvsadm flags.
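For reference, a minimal ipvsadm sketch showing how the forwarding mode is chosen per real server (the addresses match the environment used later in this article):

ipvsadm -A -t 192.168.9.145:80 -s rr                    // define the virtual service (round-robin)
ipvsadm -a -t 192.168.9.145:80 -r 192.168.8.216:80 -g   // -g: direct routing (DR) mode
// other modes: -m for masquerading (NAT), -i for IP-in-IP tunneling (TUN)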

3. Configuration

3.1 Environment preparation

1. Two schedulers (dual-machine hot standby)
System: Linux CentOS 6
IP address: 192.168.9.45 (master)
IP address: 192.168.9.46 (backup)
2. Two web servers
System: Linux CentOS 6
IP address: 192.168.8.216 (web01)
IP address: 192.168.8.218 (web02)
3. One Windows 10 test client
Any machine on the intranet will do.

3.2 Use Keepalived to build a dual-machine hot standby

Step 1: Configure ipvsadm and keepalived on the master server

[root@dd01 ~]# modprobe ip_vs  // load the ip_vs kernel module
[root@dd01 ~]# yum install ipvsadm -y  // install the management tool ipvsadm
[root@dd01 ~]# yum -y install gcc gcc-c++ make popt-devel kernel-devel openssl-devel
// install build tools and dependencies
# upload the keepalived source tarball to a directory
[root@01 ~]# tar xzvf keepalived-1.2.13.tar.gz 
[root@01 ~]# cd keepalived-1.2.13/
[root@01 keepalived-1.2.13]# ./configure --prefix=/ 
[root@01 keepalived-1.2.13]# make && make install 
[root@01 keepalived-1.2.13]# cp keepalived/etc/init.d/keepalived /etc/init.d/ 
 // register as a system service; if the init.d script already exists, there is no need to copy it
[root@01 keepalived-1.2.13]# chkconfig keepalived on 
// enable at boot (CentOS 6 uses SysV init, so chkconfig rather than systemctl)

[root@01 keepalived-1.2.13]# vi /etc/keepalived/keepalived.conf
// Edit configuration file

! Configuration File for keepalived
global_defs {
    router_id LVS_01                  // name of this server
}
vrrp_instance VI_1 {                  // define a VRRP hot-standby instance
    state MASTER                      // hot-standby role: MASTER for the master, BACKUP for the backup
    interface eth1                    // physical interface that carries the VIP
    virtual_router_id 51              // virtual router ID; must be identical within a hot-standby group
    priority 110                      // priority; the higher the value, the higher the priority
    advert_int 1                      // advertisement interval in seconds (heartbeat frequency)
    authentication {                  // authentication info; must be identical within a hot-standby group
        auth_type PASS                // authentication type
        auth_pass 123456              // password string
    }
    virtual_ipaddress {               // floating address(es) (VIP); more than one is allowed
        192.168.9.145
    }
}
virtual_server 192.168.9.145 80 {     // virtual server address (VIP) and port
    delay_loop 6                      // health-check interval (seconds)
    lb_algo rr                        // round-robin (rr) scheduling algorithm
    lb_kind DR                        // direct routing (DR) cluster mode
    persistence_timeout 60            // connection persistence time (seconds)
    protocol TCP                      // the application service uses TCP
    real_server 192.168.8.216 80 {    // address and port of the first web server node
        weight 1                      // node weight
        TCP_CHECK {                   // health-check method
            connect_port 80           // target port to check
            connect_timeout 5         // connection timeout (seconds)
            nb_get_retry 3            // number of retries
            delay_before_retry 3      // delay between retries (seconds)
        }
    }
    real_server 192.168.8.218 80 {    // address and port of the second web server node
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 5
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@dd01 keepalived]# service keepalived start  ## start the service
[root@dd01 keepalived]# ip addr show dev eth1  // verify that the virtual address (VIP) is bound
2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:9f:07:31 brd ff:ff:ff:ff:ff:ff
    inet 192.168.9.45/22 brd 192.168.11.255 scope global eth1
    inet 192.168.9.145/32 scope global eth1
    inet6 fe80::20c:29ff:fe9f:731/64 scope link 
       valid_lft forever preferred_lft forever

[root@dd01 keepalived-1.2.13]# ipvsadm -L  // view the LVS virtual server table (sample output below)
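The output should look roughly like this (illustrative; pass -n to show numeric addresses instead of resolved names):

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.9.145:80 rr persistent 60
  -> 192.168.8.216:80             Route   1      0          0
  -> 192.168.8.218:80             Route   1      0          0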

Step 2: Configure the backup server

The steps are the same as on the master scheduler; on the backup server (192.168.9.46):
vi /etc/keepalived/keepalived.conf  // edit the configuration file

! Configuration File for keepalived
global_defs {
    router_id LVS_02
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth1
    virtual_router_id 51
    priority 105
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.9.145
    }
}
virtual_server 192.168.9.145 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 60
    protocol TCP
    real_server 192.168.8.216 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 5
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.8.218 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 5
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

[root@dd02 keepalived]# service keepalived start
[root@dd02 keepalived]# ipvsadm -L

Step 3: Verify the results

With the keepalived service running on the master server, verify the status on the backup server:
[root@dd02 keepalived-1.2.13]# ip addr show dev eth1

Simulate a failure of the master server: stop the keepalived service on master dd01, then
verify the state on backup dd02.
The dual-machine hot standby setup is complete.

3.3 Configure the web node servers

Step 1: On 192.168.8.216 and 192.168.8.218, configure the nginx service so that each server can be accessed normally.
The detailed setup is not covered here; a minimal sketch follows.
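A minimal sketch of the nginx setup, assuming CentOS 6 with the EPEL repository (package names and the docroot path may differ on your system):

[root@web01 ~]# yum install epel-release -y
[root@web01 ~]# yum install nginx -y
[root@web01 ~]# echo "web01" > /usr/share/nginx/html/index.html   // a distinct page per node makes polling visible
[root@web01 ~]# service nginx start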

Step 2: Configure DR mode on both web servers with the following script

#!/bin/bash
#Author:zhang
#Date:2020.3
# Bind the VIP to a loopback alias and suppress ARP replies for it (LVS-DR real server setup)
vip=192.168.9.145
mask='255.255.255.255'
dev=lo:1
case $1 in
start)
    # arp_ignore=1: only answer ARP requests for addresses configured on the incoming interface
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    # arp_announce=2: always use the best local address when sending ARP announcements
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ifconfig $dev $vip netmask $mask #broadcast $vip up
    route add -host $vip dev $dev
    echo "The RS Server is Ready!"
    ;;
stop)
    ifconfig $dev down
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "The RS Server is Canceled!"
    ;;
*) 
    echo "Usage: $(basename $0) start|stop"
    exit 1
    ;;
esac
sysctl -p &>/dev/null
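To apply the settings, save the script on each web node (the filename lvs_dr_rs.sh is just an example) and run it:

[root@web01 ~]# chmod +x lvs_dr_rs.sh
[root@web01 ~]# ./lvs_dr_rs.sh start
The RS Server is Ready!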

ifconfig  // view the virtual interface; lo:1 should now carry the VIP

3.4 Test LVS + Keepalived high availability cluster

In a browser on the client, the web page can be accessed normally through the floating address (192.168.9.145) of the LVS + Keepalived cluster, so the cluster has been built successfully.

Verify that requests are polled between the two web servers.
From the client, visit http://192.168.9.145
// Since the connection persistence time is set to 60 seconds, revisit the address after one minute

// The request is automatically polled to the other web server, as the example below shows
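For example, from a Linux client you can check with curl; the responses web01/web02 assume each node serves a distinct index page, as in the nginx sketch above:

$ curl http://192.168.9.145
web01
// wait more than 60 seconds for the persistence entry to expire, then repeat
$ curl http://192.168.9.145
web02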

3.5 Simulate the failure of the main scheduler and verify the results

[root@dd01 keepalived-1.2.13]# service keepalived stop
// the master scheduler's keepalived stops working

// The backup scheduler takes over automatically and service continues. Visit http://192.168.9.145 and view the result: the dual-machine hot standby is working.


4. Simulate a web server failure

[root@bb ~]# service nginx stop
Visit http://192.168.9.145
The page is still served normally by the remaining web server, so the test is successful.
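You can also confirm on the active director that the health check removed the failed node; a sketch of the check:

[root@dd01 ~]# ipvsadm -Ln
// the stopped web server should no longer be listed under 192.168.9.145:80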

Finally, the LVS + Keepalived high-availability cluster has been built and tested successfully.
