CentOS 7 LVS load balancing: DR mode standalone and high-availability practice

1. Load balancing cluster type

Load balancing technology types: layer-4 (transport layer) load balancing and layer-7 (application layer) load balancing

Load balancing implementation methods: dedicated hardware appliances and software load balancers

Hardware load balancing products: F5, Sangfor, Radware

Software load balancing products: LVS (Linux Virtual Server), HAProxy, Nginx, ATS (Apache Traffic Server)

2. LVS is a layer-4 (transport layer) load balancer that supports TCP and UDP load balancing

Three-tier structure: load scheduler, server pool, and shared storage.

Architecture terms:

  • VS : Virtual Server, also called the Director, the load balancing server
  • RS : Real Server, a real back-end server, i.e. a node in the cluster
  • VIP : the Director's IP that provides service to the outside world
  • DIP : the Director's IP for internal communication with the RS
  • RIP : the IP of a real server
  • CIP : the client's IP

3. Four working modes of LVS

LVS-NAT (NAT mode)

Principle: the client sends a request packet to the load balancer (CIP->VIP); the load balancer rewrites the packet and forwards it to a real server (CIP->RIP); after processing, the real server returns the response to the load balancer (RIP->CIP); the load balancer rewrites the packet once more and returns it to the client (VIP->CIP), completing the load-balanced exchange. In this mode, all traffic, inbound and outbound, must pass through the load balancer.
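For reference, a minimal ipvsadm sketch of NAT mode (reusing the addresses from the DR build later in this article; the -m flag selects masquerading/NAT, and the real servers' default gateway must point at the DIP for the return path to work):

# Enable packet forwarding on the director (required for NAT mode)
echo 1 > /proc/sys/net/ipv4/ip_forward
# Virtual service on the VIP with round-robin scheduling
ipvsadm -A -t 192.168.2.100:80 -s rr
# Add real servers in NAT (masquerading) mode: -m
ipvsadm -a -t 192.168.2.100:80 -r 192.168.2.136:80 -m -w 1
ipvsadm -a -t 192.168.2.100:80 -r 192.168.2.134:80 -m -w 1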

LVS-DR (Direct Routing Mode)

Principle: DR mode requires each RS to be bound to the LVS VIP, with the guarantee that the VIP is visible only inside the RS itself and never announced to the outside. When a request reaches the load balancer (CIP->VIP), LVS only rewrites the destination MAC address of the Ethernet frame to that of a chosen RS and forwards the frame to it. The RS finds at the link layer that the MAC is its own, and at the network layer that the IP is also its own (the VIP bound in advance), so it accepts the packet legitimately and sends its response directly back to the client without passing through LVS.
Because return traffic bypasses LVS entirely and only the request crosses it once, DR mode delivers the best performance of the four modes.

LVS-TUN (IP tunnel mode, not commonly used)

Principle: the load balancer wraps the client's packet (CIP->VIP) in a new IP header (DIP->RIP around CIP->VIP) and sends it to an RS. The RS strips the outer header to restore the original packet, processes it, and replies to the client directly, without going back through the load balancer. Because the RS must decapsulate the packets sent by the load balancer, it has to support the IP tunnel (IPIP) protocol, so the IP tunnel option must be compiled into the RS kernel.
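A hedged sketch of what TUN mode looks like in practice (the -i flag in ipvsadm selects IP tunneling; the tunl0 setup on the RS assumes the ipip kernel module is available):

# On the director: add real servers in tunnel mode (-i)
ipvsadm -A -t 192.168.2.100:80 -s rr
ipvsadm -a -t 192.168.2.100:80 -r 192.168.2.136 -i -w 1
# On each RS: load the IPIP module and bind the VIP to the tunnel device
modprobe ipip
ifconfig tunl0 192.168.2.100 netmask 255.255.255.255 up
# Relax reverse-path filtering so decapsulated packets are accepted
echo 0 > /proc/sys/net/ipv4/conf/tunl0/rp_filter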

FULL-NAT mode (full two-way address translation, not commonly used)

Principle: the client sends a request to the load balancer's VIP (CIP->VIP). The load balancer recognizes it as a request for a back-end service and performs full NAT on the request packet: the source IP is rewritten to the DIP and the destination IP to the RIP of a chosen back-end RS, then the packet is forwarded. The RS replies with source IP RIP and destination IP DIP, so the response is routed back to the load balancer, which performs full NAT again: the source address becomes the VIP and the destination address the CIP, and the response is returned to the client.

4. LVS load balancing algorithm

Static load balancing algorithms

  • rr (round robin)

  • wrr (weighted round robin)

  • sh (source hashing: source address hash)

  • dh (destination hashing: destination address hash)


Dynamic load balancing algorithms (a worked overhead example follows the list)

  • lc (least connection)

    • Overhead: active * 256 + inactive (the RS with the smallest value is chosen)
  • wlc (weighted least connection)

    • Overhead: (active * 256 + inactive) / weight (the RS with the smallest value is chosen)
  • sed (shortest expected delay)

    • Overhead: (active + 1) * 256 / weight (the RS with the smallest value is chosen)
  • nq (never queue)

  • lblc (locality-based least connection)

  • lblcr (locality-based least connection with replication)
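To make the overhead formulas concrete, here is a worked wlc example with hypothetical connection counts: RS1 with 10 active and 20 inactive connections at weight 1, versus RS2 with 20 active and 40 inactive connections at weight 4.

# wlc overhead (hypothetical numbers):
# RS1: (10 * 256 + 20) / 1 = 2580
# RS2: (20 * 256 + 40) / 4 = 1290
# RS2 has the smaller overhead, so it receives the next connection,
# even though it currently holds more connections than RS1.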

5. Building the DR mode setup (standalone)

(1) Node preparation

        node1: 192.168.2.168 -- load balancer

        node2: 192.168.2.136

        node3: 192.168.2.134

(2) Build steps

Configure the VIP network sub-interface on the LVS load balancer (node1)

ifconfig enp0s3:2 192.168.2.100/24
# To remove the sub-interface later:
ifconfig enp0s3:2 down
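If ifconfig is unavailable, an equivalent iproute2 form (an assumption, not part of the original steps) is:

# iproute2 equivalent of the two ifconfig commands above
ip addr add 192.168.2.100/24 dev enp0s3 label enp0s3:2
# And to remove it again:
ip addr del 192.168.2.100/24 dev enp0s3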

Configure the RS servers (node2, node3)

Modify the ARP kernel parameters (arp_ignore, arp_announce)

cd /proc/sys/net/ipv4/conf/enp0s3
# Modify the parameters (note: these cannot be edited with vi; use echo)
echo 1 > arp_ignore
echo 2 > arp_announce
# Check that the changes took effect
cat arp_ignore
cat arp_announce
# Go back to the all directory and apply the same settings globally
cd ../all
echo 1 > arp_ignore
echo 2 > arp_announce
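Values written under /proc do not survive a reboot. A persistent variant (assuming standard sysctl behavior on CentOS 7) can go into /etc/sysctl.conf:

# /etc/sysctl.conf -- persist the ARP settings across reboots
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.enp0s3.arp_ignore = 1
net.ipv4.conf.enp0s3.arp_announce = 2
# Apply immediately with: sysctl -p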

Set the hidden VIP (node2, node3)

ifconfig lo:2 192.168.2.100 netmask 255.255.255.255
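A quick sanity check that the VIP sits on the loopback with a host mask (so it is never announced via ARP):

# The VIP should show up on lo with mask 255.255.255.255
ip addr show dev lo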

Build the HTTP service (node2, node3)

# Install the httpd service
yum install httpd -y
# Start the service
service httpd start
# Create a home page (used to test the service)
vi /var/www/html/index.html
# index.html content on node2:
node2: from 192.168.2.136
# index.html content on node3:
node3: from 192.168.2.134
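Before wiring up LVS, it is worth confirming each RS answers on its real IP, for example with curl from any node:

# Each RS should return its own index page
curl http://192.168.2.136/
curl http://192.168.2.134/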

Open a browser to verify that the httpd service responds on each node

LVS service configuration (node1)

# Install ipvsadm
yum install ipvsadm -y
# Configure the LVS entry rule (round-robin scheduling); the IP is the newly created enp0s3:2 address
ipvsadm -A -t 192.168.2.100:80 -s rr
# Configure the real-server rules (192.168.2.136 and 192.168.2.134 are node2 and node3; -g selects DR mode, weight 1)
ipvsadm -a -t 192.168.2.100:80 -r 192.168.2.136 -g -w 1
ipvsadm -a -t 192.168.2.100:80 -r 192.168.2.134 -g -w 1
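ipvsadm rules live only in the kernel and are lost on reboot. They can be saved and restored with the standard ipvsadm tooling (the path below is the one the CentOS 7 ipvsadm service unit reads; treat it as an assumption if your setup differs):

# Dump the current rules in a restorable format
ipvsadm -S -n > /etc/sysconfig/ipvsadm
# Restore them later with:
ipvsadm -R < /etc/sysconfig/ipvsadm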

Check whether the configuration is successful

ipvsadm -ln
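The output should look roughly like this (connection counts will vary):

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.2.100:80 rr
  -> 192.168.2.134:80             Route   1      0          0
  -> 192.168.2.136:80             Route   1      0          0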

Verify

Open a browser at 192.168.2.100 and refresh repeatedly (F5): the page should alternate between node2 and node3. Then run netstat -natp on node1 through node3: node1 shows no socket record for the client connection, while node2 and node3 do. This confirms that in DR mode the load balancer never establishes a connection with the client.

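Although no sockets appear on node1, the director still tracks connections at the IPVS layer, which can be inspected with:

# List the IPVS connection table on node1
ipvsadm -lnc
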
6. DR + keepalived high-availability setup

(1) Node preparation

        node1: 192.168.2.168 -- LVS master

        node2: 192.168.2.136

        node3: 192.168.2.134

        node4: 192.168.2.181 -- LVS backup

Note: if the standalone version was built by hand, first clear the LVS rules and the VIP configured on node1:

# Clear all ipvsadm rules
ipvsadm -C
# The enp0s3:2 sub-interface was configured earlier; take it down
ifconfig enp0s3:2 down

node2 and node3 keep their standalone configuration as-is (hidden VIP, ARP settings, httpd running)

Install keepalived (node1, node4)

# Install the keepalived service (together with ipvsadm)
yum install keepalived ipvsadm -y
# Edit the configuration
cd /etc/keepalived/
# Back up the configuration file
cp keepalived.conf keepalived.conf.bak
# Open and edit the configuration file
vi keepalived.conf
# The configuration below is for node1 (the master); node4 differences are noted inline, everything else is identical
# vrrp_strict  # be sure to comment this out, otherwise the configured VIP cannot be pinged
vrrp_instance VI_1 {
    state MASTER   # node4: BACKUP
    interface enp0s3 # the network interface this machine uses
    virtual_router_id 51
    priority 100    # node4: 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass root # must be identical on master and backup
    }
    virtual_ipaddress {
        192.168.2.100/24 dev enp0s3 label enp0s3:2
    }
}

# Note: keep only one virtual_server block
virtual_server 192.168.2.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR   # set the forwarding mode to DR
    nat_mask 255.255.255.0 # subnet mask
    persistence_timeout 0 # seconds during which the same client sticks to one RS
    protocol TCP

    real_server 192.168.2.136 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.2.134 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

Copy keepalived.conf to node4 and change only the noted differences (state BACKUP, priority 50):

scp  ./keepalived.conf  root@node04:`pwd`

Start the keepalived service (node1, node4)

# Start the service
systemctl start keepalived
# Stop the service
systemctl stop keepalived

Verification: if browsing to 192.168.2.100 returns the node2 or node3 home page, the high-availability setup works.
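A failover check worth running (a sketch; which node holds the VIP is the point of interest):

# On node1 (master): the VIP should currently sit on enp0s3:2
ip addr show enp0s3
# Simulate a master failure
systemctl stop keepalived
# On node4 (backup): the VIP should now appear here,
# and the browser test against 192.168.2.100 should still succeed
ip addr show enp0s3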
