CentOS 7.4: dual network card bonding for high availability

1. Using the bonding driver

(1) Introduction to the main bonding modes

The first mode: mode=0, i.e. balance-rr (round-robin policy)
Features: packets are transmitted in sequence (the first packet goes out eth0, the next out eth1, and so on). Because successive packets travel over different links, they may arrive out of order at the client; out-of-order packets then have to be retransmitted, so network throughput can drop.


The second mode: mode=1, i.e. active-backup (active-backup policy)
Features: only one slave is active at a time; when it goes down, a backup slave immediately takes over as the active device. Externally, the bond presents a single MAC address, which avoids confusing the switch. This mode provides fault tolerance only: it gives high availability of the network connection, but resource utilization is low, since only one interface works at a time. With N network interfaces, the utilization rate is 1/N.
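As a quick sketch of how this mode is selected (using the bond0 name configured later in this article; primary= is an optional extra shown only for illustration), active-backup differs from the other modes only in the options passed to the bonding driver:

```shell
# /etc/modprobe.d/bonding.conf -- active-backup sketch
# mode=1 selects active-backup; miimon=100 checks link state every 100 ms;
# primary=eth0 (optional) prefers eth0 whenever its link is up.
alias bond0 bonding
options bond0 mode=1 miimon=100 primary=eth0
```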


The third mode: mode=2, i.e. balance-xor (XOR policy)
Features: packets are transmitted according to the configured transmit hash policy. The default policy is (source MAC address XOR destination MAC address) % slave count; other policies can be selected through the xmit_hash_policy option. This mode provides load balancing and fault tolerance.
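The default hash above can be illustrated with plain shell arithmetic (the MAC octets and slave count below are made-up example values, not taken from a real capture):

```shell
# Illustration of balance-xor's default hash:
#   slave index = (source MAC XOR destination MAC) % slave count
# The driver uses the full addresses; the last octets are enough to
# show the arithmetic here.
src=0x29      # last octet of a hypothetical source MAC
dst=0x33      # last octet of a hypothetical destination MAC
slaves=2      # e.g. eth0 and eth1

echo "slave index: $(( (src ^ dst) % slaves ))"
# -> slave index: 0   (0x29 XOR 0x33 = 26, and 26 % 2 = 0)
```

Because the hash is deterministic per address pair, all traffic between one pair of hosts always uses the same slave; balancing only emerges across many peers.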


The fourth mode: mode=3, i.e. broadcast (broadcast policy)
Features: every packet is transmitted on every slave interface. This mode provides fault tolerance.


The fifth mode: mode=4, i.e. 802.3ad (IEEE 802.3ad dynamic link aggregation)
Features: creates an aggregation group whose members share the same speed and duplex settings. According to the 802.3ad specification, multiple slaves work within the same active aggregator.
The slave used for outgoing traffic is selected by the transmit hash policy, which can be changed from the default XOR policy via the xmit_hash_policy option. Note that not all transmit policies are 802.3ad compliant, particularly with regard to the packet-reordering problem discussed in section 43.2.4 of the 802.3ad standard; different implementations vary in how tolerant they are.
Prerequisites:
Condition 1: ethtool supports reading the speed and duplex settings of each slave
Condition 2: the switch supports IEEE 802.3ad dynamic link aggregation
Condition 3: most switches require explicit configuration to enable 802.3ad mode
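Condition 1 can be checked directly on each slave; this is a generic ethtool invocation, not specific to this article's setup:

```shell
# If ethtool can report these fields, the speed/duplex prerequisite is met.
ethtool eth0 | grep -E 'Speed|Duplex'
ethtool eth1 | grep -E 'Speed|Duplex'
```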


The sixth mode: mode=5, i.e. balance-tlb (adaptive transmit load balancing)
Features: requires no special switch support for channel bonding. Outgoing traffic is distributed across the slaves according to the current load (computed from their relative speeds). If the slave that is receiving traffic fails, another slave takes over the MAC address of the failed slave.

Prerequisite for this mode: ethtool supports reading the speed of each slave.


The seventh mode: mode=6, i.e. balance-alb (adaptive load balancing)

Features: includes everything in balance-tlb, plus receive load balancing (rlb) for IPv4 traffic, and requires no switch support. Receive load balancing is achieved through ARP negotiation: the bonding driver intercepts ARP replies sent by the local machine and rewrites the source hardware address to the unique hardware address of one of the bond's slaves, so that different peers use different hardware addresses to communicate with the host.
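On a machine actually running mode 6, the per-slave source MACs in outgoing ARP replies can be observed with tcpdump (a generic invocation, shown only as a way to see the rewriting at work):

```shell
# -e prints link-level headers, so the source MAC chosen for each
# ARP reply is visible; -n skips name resolution.
tcpdump -n -e -i bond0 arp
```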

2. Configuring the bond

(1) Experimental environment
Physical NICs: eth0, eth1
Virtual interface after bonding: bond0
IP address: 192.168.128.13
Gateway: 192.168.128.2
Netmask: 255.255.255.0
DNS: 202.96.128.166

(2) Load and verify the bonding module

[root@localhost ~]# modprobe --first-time bonding
[root@localhost ~]# lsmod|grep bonding           

bonding               132885  0 

After the module loads successfully, the bond0 interface can be seen:

[root@localhost ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:94:b5:29 brd ff:ff:ff:ff:ff:ff
    inet 192.168.128.13/24 brd 192.168.128.255 scope global eth0
    inet6 fe80::20c:29ff:fe94:b529/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 00:0c:29:94:b5:33 brd ff:ff:ff:ff:ff:ff
4: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN 
    link/ether 66:51:2f:37:a2:31 brd ff:ff:ff:ff:ff:ff

(3) Configure the virtual interface bond0

In the /etc/sysconfig/network-scripts/ directory, create the ifcfg-bond0 file

vim /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.128.13
NETMASK=255.255.255.0
GATEWAY=192.168.128.2
DNS1=202.96.128.166

(4) Configure the physical network card eth0, eth1

vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=none
MASTER=bond0
SLAVE=yes

vim /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=none
MASTER=bond0

SLAVE=yes

(5) Modify the modprobe configuration file

vim /etc/modprobe.d/bonding.conf
alias bond0 bonding

options bond0 miimon=100 mode=0

Here mode=0 selects balance-rr (round-robin), and miimon sets the link-monitoring interval in milliseconds (100 ms here).
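Once the driver is loaded with these options, the active mode and polling interval can be confirmed through the bonding driver's sysfs files:

```shell
cat /sys/class/net/bond0/bonding/mode     # prints the mode name and number, e.g. "balance-rr 0"
cat /sys/class/net/bond0/bonding/miimon   # prints the polling interval in ms
```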

(6) Restart and test
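On CentOS 7 with the legacy network service (which is what the ifcfg files above rely on), the new configuration is applied by restarting networking:

```shell
systemctl restart network
```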

[root@localhost network-scripts]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
    link/ether 00:0c:29:94:b5:29 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
    link/ether 00:0c:29:94:b5:29 brd ff:ff:ff:ff:ff:ff
4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether 00:0c:29:94:b5:29 brd ff:ff:ff:ff:ff:ff
    inet 192.168.128.13/24 brd 192.168.128.255 scope global bond0
    inet6 fe80::20c:29ff:fe94:b529/64 scope link tentative dadfailed 
       valid_lft forever preferred_lft forever

Check the bond's working status:

[root@localhost bonding]# cat /proc/net/bonding/bond0 
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 200
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: 00:0c:29:94:b5:29
Slave queue ID: 0

Slave Interface: eth1
MII Status: down
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: 00:0c:29:94:b5:33
Slave queue ID: 0
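A simple failover test is to take one slave down and confirm that connectivity survives (the gateway address is the one from the experimental environment above):

```shell
ip link set eth1 down            # simulate a link failure on one slave
ping -c 3 192.168.128.2          # traffic should continue over eth0
cat /proc/net/bonding/bond0      # the downed slave shows "MII Status: down"
ip link set eth1 up              # restore the link
```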
