Linux Notes - Chapter XVIII, Linux Clusters (3): Keepalived + LVS high-availability load balancing cluster

I. Introduction

The previous two sections introduced Linux high-availability clusters and load balancing clusters. The two can also be combined into a high-availability load balancing cluster built from Keepalived + LVS. The reasons for adding Keepalived to LVS are as follows:

1) LVS has a very crucial role, the Dir (director/distributor). If the director goes down, all services and all access are interrupted, because all traffic enters through the Dir. The director therefore needs to be made highly available, which Keepalived provides; in fact Keepalived also plays a load balancing role itself.

2) When using LVS alone, without any additional measures, shutting down one Real Server causes problems for access, because LVS keeps forwarding requests to the server that is down. Keepalived solves this problem: even if a Real Server is down, normal service continues, because Keepalived health-checks the back-end Real Servers when requests arrive for distribution and stops forwarding requests to any Real Server that has failed.

A complete Keepalived + LVS architecture requires at least two schedulers (the Dir role), each with Keepalived installed, to achieve high availability. Keepalived has the IPVS (ipvsadm) functionality built in, so there is no need to write and execute the lvs_dir script; the ipvsadm command is only used below to inspect the forwarding rules.

II. Create a Keepalived + LVS high-availability load balancing cluster

The following uses Keepalived + LVS to build a high-availability load balancing cluster.

2.1 Prepare the cluster nodes

Prepare four servers. Two of them act as directors (also called schedulers, referred to as Dir), one master and one backup, each with the Keepalived service installed. The other two are Real Servers with the nginx service installed, which handle user requests.

Hostname masternode: master scheduler, internal IP 192.168.93.140 (VMware NAT network).

Hostname backupnode: backup scheduler, internal IP 192.168.93.139 (VMware NAT network).

Hostname datanode1: Real Server 1, IP is 192.168.93.141.

Hostname datanode2: Real Server 2, IP is 192.168.93.142.

VIP: 192.168.93.200

Configure the IP address and hostname on all four servers, and turn off the firewall.
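
A minimal sketch of that preparation on one node, assuming CentOS 7 with firewalld (repeat on each machine with its own hostname and IP):

[root@masternode ~]# hostnamectl set-hostname masternode
[root@masternode ~]# systemctl stop firewalld
[root@masternode ~]# systemctl disable firewalld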

2.2 Configure Keepalived on the master server

Edit the configuration file /etc/keepalived/keepalived.conf and set the following:

[root@masternode keepalived]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived - Master Node

global_defs {
    notification_email {
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    # NIC that the VIP is bound to
    interface ens33
    virtual_router_id 51
    # priority of the master server is 100
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass moonxy
    }
    virtual_ipaddress {
        192.168.93.200
    }
}

virtual_server 192.168.93.200 80 {
    # check the state of the Real Servers every 6 seconds
    delay_loop 6
    # LVS load balancing algorithm (round-robin)
    lb_algo rr
    # LVS DR mode
    lb_kind DR
    # seconds that connections from the same IP stay on the same Real Server; 0 disables persistence
    persistence_timeout 0
    # use TCP to health-check the Real Servers
    protocol TCP

    real_server 192.168.93.141 80 {
        # weight
        weight 100
        TCP_CHECK {
            # time out after 10 seconds without a response
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.93.142 80 {
        # weight
        weight 100
        TCP_CHECK {
            # time out after 10 seconds without a response
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

Once the master configuration file is set, note that the backup server's configuration is almost identical: only state is changed to BACKUP and priority to 90.

2.3 Configure Keepalived on the backup server

Edit the configuration file /etc/keepalived/keepalived.conf and set the following:

[root@backupnode keepalived]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived - Backup Node

global_defs {
    notification_email {
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state BACKUP
    # NIC that the VIP is bound to
    interface ens33
    virtual_router_id 51
    # priority of the backup server is 90
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass moonxy
    }
    virtual_ipaddress {
        192.168.93.200
    }
}

virtual_server 192.168.93.200 80 {
    # check the state of the Real Servers every 6 seconds
    delay_loop 6
    # LVS load balancing algorithm (round-robin)
    lb_algo rr
    # LVS DR mode
    lb_kind DR
    # seconds that connections from the same IP stay on the same Real Server; 0 disables persistence
    persistence_timeout 0
    # use TCP to health-check the Real Servers
    protocol TCP

    real_server 192.168.93.141 80 {
        # weight
        weight 100
        TCP_CHECK {
            # time out after 10 seconds without a response
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.93.142 80 {
        # weight
        weight 100
        TCP_CHECK {
            # time out after 10 seconds without a response
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

If the LVS director script from the earlier section was executed before, some cleanup is needed first:

# ipvsadm -C

# systemctl restart network

Alternatively, reboot the scheduler servers.

2.4 Start the related services

Start the keepalived service on masternode and backupnode respectively, as follows:

masternode:

[root@masternode log]# systemctl start keepalived
[root@masternode log]# ps aux |grep keepalived
root       7204  0.0  0.1 122876  1368 ?        Ss   00:46   0:00 /usr/sbin/keepalived -D
root       7205  0.0  0.3 133840  3360 ?        S    00:46   0:00 /usr/sbin/keepalived -D
root       7206  0.0  0.2 133776  2636 ?        S    00:46   0:00 /usr/sbin/keepalived -D
root       7215  0.0  0.0 112708   980 pts/0    R+   00:58   0:00 grep --color=auto keepalived
[root@masternode log]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:3b:90:07 brd ff:ff:ff:ff:ff:ff
    inet 192.168.93.140/24 brd 192.168.93.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.93.200/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::1bb9:5b87:893c:e112/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:3b:90:11 brd ff:ff:ff:ff:ff:ff
    inet 192.168.150.140/24 brd 192.168.150.255 scope global noprefixroute ens37
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe3b:9011/64 scope link
       valid_lft forever preferred_lft forever
[root@masternode log]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.93.200:80 rr
  -> 192.168.93.141:80            Route   100    0          0
  -> 192.168.93.142:80            Route   100    0          0

backupnode:

[root@backupnode keepalived]# systemctl start keepalived
[root@backupnode keepalived]#  ps aux |grep keepalived
root       7185  0.0  0.1 122876  1376 ?        Ss   00:47   0:00 /usr/sbin/keepalived -D
root       7186  0.0  0.3 133840  3364 ?        S    00:47   0:00 /usr/sbin/keepalived -D
root       7187  0.0  0.2 133776  2640 ?        S    00:47   0:00 /usr/sbin/keepalived -D
root       7189  0.0  0.0 112708   980 pts/0    S+   00:47   0:00 grep --color=auto keepalived
[root@backupnode keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:88:29:9b brd ff:ff:ff:ff:ff:ff
    inet 192.168.93.139/24 brd 192.168.93.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::8055:62bc:4a07:d345/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::1bb9:5b87:893c:e112/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::8cac:8f3b:14b2:47ae/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:88:29:a5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.150.139/24 brd 192.168.150.255 scope global noprefixroute ens37
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe88:29a5/64 scope link
       valid_lft forever preferred_lft forever
[root@backupnode keepalived]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.93.200:80 rr
  -> 192.168.93.141:80            Route   100    0          0
  -> 192.168.93.142:80            Route   100    0          0

At this point you can see that the VIP 192.168.93.200 is bound to NIC ens33 on masternode.

Since the LVS mode defined in the Keepalived configuration file is DR, the lvs_rs.sh script needs to be executed on both Real Servers (it is the same script as in the earlier LVS DR section). A sketch of the script is shown below, followed by starting nginx and running it on each RS:
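
The script itself is not reproduced in this section; the following is a reconstruction only, assuming the standard LVS-DR real-server setup with this cluster's VIP: bind the VIP to the loopback interface and suppress ARP replies for it, so that only the director answers ARP requests for the VIP.

#!/bin/bash
vip=192.168.93.200
# bind the VIP to lo:0 so this RS accepts packets addressed to the VIP
ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip lo:0
# ignore ARP requests for the VIP; only the director should answer them
echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce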

[root@datanode1 sbin]# service nginx start
Redirecting to /bin/systemctl start nginx.service
[root@datanode1 sbin]# sh /usr/local/sbin/lvs_rs.sh
......
[root@datanode2 sbin]# service nginx start
Redirecting to /bin/systemctl start nginx.service
[root@datanode2 sbin]# sh /usr/local/sbin/lvs_rs.sh

After the related services have started, the tests can be run.

2.5 Test the high-availability load balancing cluster

Request the VIP address 192.168.93.200 from the host, as follows:
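
For example, repeated curl requests should alternate between the two Real Servers because of the rr algorithm (this assumes each nginx serves a default page that identifies its host):

# for i in $(seq 1 4); do curl -s http://192.168.93.200/; done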

We can see that load balancing is achieved.

1) Test stopping the master scheduler masternode

Let's first look at the high availability of the director by shutting down the master director masternode, as follows:
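
Stopping the keepalived service on masternode simulates a failure of the master director:

[root@masternode ~]# systemctl stop keepalived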

You can see that masternode has released the VIP address and backupnode has automatically taken over the VIP, as follows:
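
This can be verified with ip addr on both directors; the VIP 192.168.93.200 should now appear under ens33 on backupnode and should be gone from masternode:

[root@backupnode ~]# ip addr show ens33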

Requesting the VIP address again, everything works normally.

2) Test stopping the nginx service on Real Server datanode2

Now let's also stop the nginx service on datanode2, one of the Real Servers, as follows:
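
Stopping nginx on datanode2 makes its TCP health check on port 80 fail:

[root@datanode2 ~]# systemctl stop nginx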

Accessing the VIP address, we can see that Keepalived has automatically forwarded all requests to datanode1.

The ipvsadm forwarding rules automatically "remove" datanode2, as follows:
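
On the currently active director (backupnode after the failover above), ipvsadm -Ln should now list only datanode1; based on the earlier output, the relevant lines would look like this:

[root@backupnode ~]# ipvsadm -Ln
TCP  192.168.93.200:80 rr
  -> 192.168.93.141:80            Route   100    0          0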

3) Test starting nginx on Real Server datanode2 again

If the nginx service on datanode2 is started again, ipvsadm automatically adds its forwarding rule back, as follows:
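
Once nginx is back, the TCP_CHECK succeeds again and both Real Servers should reappear in the rule list:

[root@datanode2 ~]# systemctl start nginx
[root@backupnode ~]# ipvsadm -Ln
TCP  192.168.93.200:80 rr
  -> 192.168.93.141:80            Route   100    0          0
  -> 192.168.93.142:80            Route   100    0          0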

Accessing the VIP again, requests are once more balanced across both Real Servers.

As you can see, Keepalived really is powerful.
