Linux dual-NIC bonding and bridging
An introduction to bonding and bridging
The Linux bonding driver supports seven modes. Production servers now commonly attach two NICs per network: this adds bandwidth and also provides redundancy, so it has clear advantages. Companies generally use the NIC bonding built into the Linux operating system itself, although some NIC vendors also ship teaming management software for the Windows operating system (which relies on such third-party support for teaming). Of the seven modes, 0, 1, and 6 are the most commonly used:
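For reference, the seven bonding modes mentioned above can be summarized as follows; this is just a lookup-table sketch (the mode numbers are what `BONDING_OPTS="mode=N"` expects, and the names come from the bonding driver):

```shell
# Quick reference for the seven Linux bonding modes.
modes=(
  "0 balance-rr    : round-robin load balancing"
  "1 active-backup : one active NIC, the rest standby (used in this article)"
  "2 balance-xor   : NIC chosen by XOR of MAC addresses"
  "3 broadcast     : transmit on every NIC"
  "4 802.3ad       : LACP link aggregation (needs switch support)"
  "5 balance-tlb   : adaptive transmit load balancing"
  "6 balance-alb   : adaptive transmit and receive load balancing"
)
printf 'mode %s\n' "${modes[@]}"
```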
1: Bonding-plus-bridge example: bond the NICs first, then attach each bond to a bridge:
1.1: First group of settings: bond eth1 and eth5 into bond0:
1.1.1: Create the bond0 configuration file; its content is as follows:
[root@linux-host1 ~]# cd /etc/sysconfig/network-scripts/
[root@linux-host1 network-scripts]# cp ifcfg-eth0 ifcfg-bond0
[root@linux-host1 network-scripts]# cat ifcfg-bond0
BOOTPROTO=static
NAME=bond0
DEVICE=bond0
ONBOOT=yes
BONDING_MASTER=yes
BONDING_OPTS="mode=1 miimon=100"  # bonding mode 1 (active-backup) and link-state polling interval in ms
BRIDGE=br0                        # attach bond0 to bridge br0
1.1.2: Configure br0:
TYPE=Bridge
BOOTPROTO=static
IPV4_FAILURE_FATAL=no
NAME=br0
DEVICE=br0
ONBOOT=yes
IPADDR=X.X.X.X
NETMASK=255.255.255.0
GATEWAY=X.X.X.X
1.1.3: eth1 configuration:
[root@linux-host1 network-scripts]# vim ifcfg-eth1
BOOTPROTO=static
NAME=eth1
DEVICE=eth1
ONBOOT=yes
NM_CONTROLLED=no
MASTER=bond0
USERCTL=no
SLAVE=yes
1.1.4: eth5 configuration:
[root@linux-host1 network-scripts]# cp ifcfg-eth1 ifcfg-eth5
[root@linux-host1 network-scripts]# vim ifcfg-eth5
BOOTPROTO=static
NAME=eth5
DEVICE=eth5
ONBOOT=yes
NM_CONTROLLED=no
MASTER=bond0
USERCTL=no
SLAVE=yes
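The two slave files above differ only in the interface name, so they can be generated in one loop. A minimal sketch, writing into a scratch directory rather than the real /etc/sysconfig/network-scripts/ (swap the path on an actual host):

```shell
# Sketch: generate identical slave ifcfg files for bond0's members.
# $slavedir stands in for /etc/sysconfig/network-scripts on a real host.
slavedir=$(mktemp -d)
for nic in eth1 eth5; do
  cat > "$slavedir/ifcfg-$nic" <<EOF
BOOTPROTO=static
NAME=$nic
DEVICE=$nic
ONBOOT=yes
NM_CONTROLLED=no
MASTER=bond0
USERCTL=no
SLAVE=yes
EOF
done
grep MASTER "$slavedir/ifcfg-eth1" "$slavedir/ifcfg-eth5"
```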
1.1.5: Restart network services:
[root@linux-host1 network-scripts]# systemctl restart network
1.1.6: Verify that the network is working:
[root@linux-host1 network-scripts]# ping www.baidu.com
PING www.a.shifen.com (61.135.169.125) 56(84) bytes of data.
64 bytes from 61.135.169.125: icmp_seq=1 ttl=128 time=6.17 ms
64 bytes from 61.135.169.125: icmp_seq=2 ttl=128 time=10.3 ms
64 bytes from 61.135.169.125: icmp_seq=3 ttl=128 time=5.36 ms
64 bytes from 61.135.169.125: icmp_seq=4 ttl=128 time=6.74 ms
64 bytes from 61.135.169.125: icmp_seq=5 ttl=128 time=5.71 ms
1.1.7: Verify which NIC is currently the active slave:
[root@linux-host1 ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth1  # the NIC currently carrying traffic; the other slave is the standby
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 18:66:da:f3:34:e5
Slave queue ID: 0

Slave Interface: eth5
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0a:f7:99:ba:d1
Slave queue ID: 0
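To pull just the active slave out of that status file in a script, an awk one-liner like the following works. This sketch runs against a saved sample of the output, since /proc/net/bonding/bond0 only exists on a host where the bond is up; on a live host, point awk at the real file instead:

```shell
# Sketch: extract "Currently Active Slave" from bonding status output.
# $sample stands in for /proc/net/bonding/bond0 on a live host.
sample=$(mktemp)
cat > "$sample" <<'EOF'
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth1
MII Status: up
EOF
active=$(awk -F': ' '/Currently Active Slave/ {print $2}' "$sample")
echo "active slave: $active"   # prints "active slave: eth1" for this sample
```

The same pattern is handy in monitoring scripts that alert when the active slave fails over to the backup NIC.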
1.2: Second group of settings: bond eth2 and eth6 into bond1:
1.2.1: Create the bond1 configuration file:
[root@linux-host1 network-scripts]# cp ifcfg-bond0 ifcfg-bond1
[root@linux-host1 network-scripts]# vim ifcfg-bond1
BOOTPROTO=static
NAME=bond1
DEVICE=bond1
TYPE=Bond
BONDING_MASTER=yes
ONBOOT=yes
BONDING_OPTS="mode=1 miimon=100"
BRIDGE=br1
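Since ifcfg-bond1 is just ifcfg-bond0 with the device and bridge names swapped, a small helper function keeps the two masters consistent. A sketch under the same assumption as before (a scratch directory standing in for /etc/sysconfig/network-scripts/; the `make_bond` helper is illustrative, not part of any distribution):

```shell
# Sketch: write an ifcfg file for a bond master attached to a bridge.
# $bonddir stands in for /etc/sysconfig/network-scripts on a real host.
bonddir=$(mktemp -d)
make_bond() {
  local bond=$1 bridge=$2
  cat > "$bonddir/ifcfg-$bond" <<EOF
BOOTPROTO=static
NAME=$bond
DEVICE=$bond
TYPE=Bond
BONDING_MASTER=yes
ONBOOT=yes
BONDING_OPTS="mode=1 miimon=100"
BRIDGE=$bridge
EOF
}
make_bond bond0 br0
make_bond bond1 br1
grep BRIDGE "$bonddir/ifcfg-bond0" "$bonddir/ifcfg-bond1"
```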
1.2.2: Configure br1; br1 carries the internal network only, so it has no DNS1 or GATEWAY entries:
TYPE=Bridge
BOOTPROTO=static
IPV4_FAILURE_FATAL=no
NAME=br1
DEVICE=br1
ONBOOT=yes
IPADDR=X.X.X.X
NETMASK=255.255.255.0
1.2.3: eth2 configuration:
[root@linux-host1 network-scripts]# vim ifcfg-eth2
BOOTPROTO=static
NAME=eth2
DEVICE=eth2
ONBOOT=yes
NM_CONTROLLED=no
MASTER=bond1
USERCTL=no
SLAVE=yes
1.2.4: eth6 configuration:
[root@linux-host1 network-scripts]# vim ifcfg-eth6
BOOTPROTO=static
NAME=eth6
DEVICE=eth6
ONBOOT=yes
NM_CONTROLLED=no
MASTER=bond1
USERCTL=no
SLAVE=yes
1.2.5: Restart network services:
[root@linux-host1 network-scripts]# systemctl restart network
1.2.6: Test that the internal network is reachable:
[root@linux-host1 network-scripts]# ping 192.168.20.12
PING 192.168.20.12 (192.168.20.12) 56(84) bytes of data.
64 bytes from 192.168.20.12: icmp_seq=1 ttl=64 time=1.86 ms
64 bytes from 192.168.20.12: icmp_seq=2 ttl=64 time=0.570 ms
64 bytes from 192.168.20.12: icmp_seq=3 ttl=64 time=0.410 ms
1.3: Re-enslave the NICs at boot:
[root@linux-host1 network-scripts]# vim /etc/rc.d/rc.local
ifenslave bond0 eth1 eth5
ifenslave bond1 eth2 eth6
[root@linux-host1 network-scripts]# chmod a+x /etc/rc.d/rc.local
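Two details matter in the step above: `ifenslave` takes the bond master first, then its slaves, and on systemd distributions such as CentOS 7 the rc.local file only runs at boot if it is executable, hence the chmod. A sketch that recreates the file in a scratch directory (the path is illustrative; the real file is /etc/rc.d/rc.local):

```shell
# Sketch: build the rc.local persistence step in a scratch directory.
rcfile="$(mktemp -d)/rc.local"

cat > "$rcfile" <<'EOF'
#!/bin/bash
# ifenslave expects the bond master first, then its slave NICs
ifenslave bond0 eth1 eth5
ifenslave bond1 eth2 eth6
EOF

# rc.local must be executable, or systemd's rc-local service skips it at boot
chmod a+x "$rcfile"
test -x "$rcfile" && echo "rc.local is executable"
```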
1.4: Reboot the system and verify that the network still works.