NIC Configuration and Network Resource Isolation in Linux

Network Interfaces in Linux

Overview

1. In Linux, a network interface (NIC) is the hardware resource used for communication; each NIC has a unique MAC address.
2. A NIC's network configuration is recorded in configuration files.

Ways to view the machine's network interfaces

1. ip a

[root@iZwz91h49n3mj8r232gqweZ ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:16:3e:0c:ae:fc brd ff:ff:ff:ff:ff:ff
    inet 172.16.252.139/24 brd 172.16.252.255 scope global dynamic eth0
       valid_lft 315091224sec preferred_lft 315091224sec
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:7d:a6:c6:8d brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
6: br-d3d1516d3a7c: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:62:34:59:56 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 brd 172.18.255.255 scope global br-d3d1516d3a7c
       valid_lft forever preferred_lft forever
8: veth09c86df@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP 
    link/ether d2:95:5b:b9:df:58 brd ff:ff:ff:ff:ff:ff link-netnsid 0
10: veth5aaebfd@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP 
    link/ether de:d7:5a:73:3b:65 brd ff:ff:ff:ff:ff:ff link-netnsid 1
12: vethfa01851@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP 
    link/ether e2:94:b6:a5:f6:f5 brd ff:ff:ff:ff:ff:ff link-netnsid 5
14: vethd6d7476@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP 
    link/ether 1a:70:f7:4e:66:01 brd ff:ff:ff:ff:ff:ff link-netnsid 2
16: veth2881aeb@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP 
    link/ether f6:43:1a:e5:e7:8c brd ff:ff:ff:ff:ff:ff link-netnsid 4
18: vethf151547@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP 
    link/ether 9e:f6:a5:ed:09:59 brd ff:ff:ff:ff:ff:ff link-netnsid 3
20: veth0e78003@if19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP 
    link/ether f6:92:4c:29:23:91 brd ff:ff:ff:ff:ff:ff link-netnsid 6
22: vethf7699f4@if21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP 
    link/ether 82:99:f5:a3:ad:b7 brd ff:ff:ff:ff:ff:ff link-netnsid 7
24: vethed86455@if23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP 
    link/ether 66:d0:5f:bf:4f:dd brd ff:ff:ff:ff:ff:ff link-netnsid 8
26: veth9fe54e4@if25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP 
    link/ether 7e:f0:6f:ef:f7:32 brd ff:ff:ff:ff:ff:ff link-netnsid 9
32: veth1d37c72@if31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP 
    link/ether 4e:6a:c7:6f:a1:52 brd ff:ff:ff:ff:ff:ff link-netnsid 10
34: veth6991d55@if33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP 
    link/ether 9e:24:c9:78:1f:8f brd ff:ff:ff:ff:ff:ff link-netnsid 11
[root@iZwz91h49n3mj8r232gqweZ ~]# 

2. ip link show (a more concise view)

[root@iZwz91h49n3mj8r232gqweZ ~]# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether 00:16:3e:0c:ae:fc brd ff:ff:ff:ff:ff:ff
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT 
    link/ether 02:42:7d:a6:c6:8d brd ff:ff:ff:ff:ff:ff
6: br-d3d1516d3a7c: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT 
    link/ether 02:42:62:34:59:56 brd ff:ff:ff:ff:ff:ff
8: veth09c86df@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP mode DEFAULT 
    link/ether d2:95:5b:b9:df:58 brd ff:ff:ff:ff:ff:ff link-netnsid 0
10: veth5aaebfd@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP mode DEFAULT 
    link/ether de:d7:5a:73:3b:65 brd ff:ff:ff:ff:ff:ff link-netnsid 1
12: vethfa01851@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP mode DEFAULT 
    link/ether e2:94:b6:a5:f6:f5 brd ff:ff:ff:ff:ff:ff link-netnsid 5
14: vethd6d7476@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP mode DEFAULT 
    link/ether 1a:70:f7:4e:66:01 brd ff:ff:ff:ff:ff:ff link-netnsid 2
16: veth2881aeb@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP mode DEFAULT 
    link/ether f6:43:1a:e5:e7:8c brd ff:ff:ff:ff:ff:ff link-netnsid 4
18: vethf151547@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP mode DEFAULT 
    link/ether 9e:f6:a5:ed:09:59 brd ff:ff:ff:ff:ff:ff link-netnsid 3
20: veth0e78003@if19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP mode DEFAULT 
    link/ether f6:92:4c:29:23:91 brd ff:ff:ff:ff:ff:ff link-netnsid 6
22: vethf7699f4@if21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP mode DEFAULT 
    link/ether 82:99:f5:a3:ad:b7 brd ff:ff:ff:ff:ff:ff link-netnsid 7
24: vethed86455@if23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP mode DEFAULT 
    link/ether 66:d0:5f:bf:4f:dd brd ff:ff:ff:ff:ff:ff link-netnsid 8
26: veth9fe54e4@if25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-d3d1516d3a7c state UP mode DEFAULT 
    link/ether 7e:f0:6f:ef:f7:32 brd ff:ff:ff:ff:ff:ff link-netnsid 9
32: veth1d37c72@if31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT 
    link/ether 4e:6a:c7:6f:a1:52 brd ff:ff:ff:ff:ff:ff link-netnsid 10
34: veth6991d55@if33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT 
    link/ether 9e:24:c9:78:1f:8f brd ff:ff:ff:ff:ff:ff link-netnsid 11

3. ls /sys/class/net/ -la

[root@iZwz91h49n3mj8r232gqweZ ~]# ls /sys/class/net/ -la
total 0
drwxr-xr-x  2 root root 0 Nov 28 21:55 .
drwxr-xr-x 46 root root 0 Nov 28 21:55 ..
lrwxrwxrwx  1 root root 0 Dec  1 10:39 br-d3d1516d3a7c -> ../../devices/virtual/net/br-d3d1516d3a7c
lrwxrwxrwx  1 root root 0 Dec  1 09:09 docker0 -> ../../devices/virtual/net/docker0
lrwxrwxrwx  1 root root 0 Nov 28 13:55 eth0 -> ../../devices/pci0000:00/0000:00:03.0/virtio0/net/eth0
lrwxrwxrwx  1 root root 0 Nov 28 21:55 lo -> ../../devices/virtual/net/lo
lrwxrwxrwx  1 root root 0 Dec  1 10:39 veth09c86df -> ../../devices/virtual/net/veth09c86df
lrwxrwxrwx  1 root root 0 Dec  1 10:39 veth0e78003 -> ../../devices/virtual/net/veth0e78003
lrwxrwxrwx  1 root root 0 Dec  1 12:46 veth1d37c72 -> ../../devices/virtual/net/veth1d37c72
lrwxrwxrwx  1 root root 0 Dec  1 10:39 veth2881aeb -> ../../devices/virtual/net/veth2881aeb
lrwxrwxrwx  1 root root 0 Dec  1 10:39 veth5aaebfd -> ../../devices/virtual/net/veth5aaebfd
lrwxrwxrwx  1 root root 0 Dec  1 12:49 veth6991d55 -> ../../devices/virtual/net/veth6991d55
lrwxrwxrwx  1 root root 0 Dec  1 10:39 veth9fe54e4 -> ../../devices/virtual/net/veth9fe54e4
lrwxrwxrwx  1 root root 0 Dec  1 10:39 vethd6d7476 -> ../../devices/virtual/net/vethd6d7476
lrwxrwxrwx  1 root root 0 Dec  1 10:39 vethed86455 -> ../../devices/virtual/net/vethed86455
lrwxrwxrwx  1 root root 0 Dec  1 10:39 vethf151547 -> ../../devices/virtual/net/vethf151547
lrwxrwxrwx  1 root root 0 Dec  1 10:39 vethf7699f4 -> ../../devices/virtual/net/vethf7699f4
lrwxrwxrwx  1 root root 0 Dec  1 10:39 vethfa01851 -> ../../devices/virtual/net/vethfa01851
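Since each interface shows up as a directory under /sys/class/net, the same information can be read straight from sysfs without iproute2. A small sketch (the output columns are this script's own formatting, not a standard tool's):

```shell
# Print every interface name with its MAC address and operational state,
# by reading the per-interface attribute files under /sys/class/net.
for dev in /sys/class/net/*; do
    [ -d "$dev" ] || continue        # skip stray files such as bonding_masters
    name=${dev##*/}
    printf '%-18s %-20s %s\n' "$name" "$(cat "$dev/address")" "$(cat "$dev/operstate")"
done
```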

4. ifconfig

[root@iZwz91h49n3mj8r232gqweZ ~]# ifconfig
br-d3d1516d3a7c: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.18.0.1  netmask 255.255.0.0  broadcast 172.18.255.255
        ether 02:42:62:34:59:56  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:7d:a6:c6:8d  txqueuelen 0  (Ethernet)
        RX packets 7750  bytes 407515 (397.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 11628  bytes 23845914 (22.7 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.252.139  netmask 255.255.255.0  broadcast 172.16.252.255
        ether 00:16:3e:0c:ae:fc  txqueuelen 1000  (Ethernet)
        RX packets 1092231  bytes 1181574745 (1.1 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 657244  bytes 283089043 (269.9 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1  (Local Loopback)
        RX packets 289892  bytes 84866589 (80.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 289892  bytes 84866589 (80.9 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth09c86df: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether d2:95:5b:b9:df:58  txqueuelen 0  (Ethernet)
        RX packets 15528  bytes 1157565 (1.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 15547  bytes 2528543 (2.4 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth0e78003: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether f6:92:4c:29:23:91  txqueuelen 0  (Ethernet)
        RX packets 10681  bytes 1323656 (1.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7692  bytes 2497248 (2.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth1d37c72: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 4e:6a:c7:6f:a1:52  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth2881aeb: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether f6:43:1a:e5:e7:8c  txqueuelen 0  (Ethernet)
        RX packets 5416  bytes 1847005 (1.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5420  bytes 2776343 (2.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth5aaebfd: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether de:d7:5a:73:3b:65  txqueuelen 0  (Ethernet)
        RX packets 137812  bytes 9730387 (9.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 186692  bytes 166006549 (158.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth6991d55: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 9e:24:c9:78:1f:8f  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth9fe54e4: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 7e:f0:6f:ef:f7:32  txqueuelen 0  (Ethernet)
        RX packets 7783  bytes 2440061 (2.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8310  bytes 2591380 (2.4 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vethd6d7476: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 1a:70:f7:4e:66:01  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 10  bytes 420 (420.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vethed86455: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 66:d0:5f:bf:4f:dd  txqueuelen 0  (Ethernet)
        RX packets 4607  bytes 2109794 (2.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4568  bytes 441813 (431.4 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vethf151547: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 9e:f6:a5:ed:09:59  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 10  bytes 420 (420.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vethf7699f4: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 82:99:f5:a3:ad:b7  txqueuelen 0  (Ethernet)
        RX packets 182236  bytes 165432726 (157.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 135160  bytes 9415715 (8.9 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vethfa01851: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether e2:94:b6:a5:f6:f5  txqueuelen 0  (Ethernet)
        RX packets 3934  bytes 2802594 (2.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5147  bytes 520951 (508.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

NIC configuration files

[root@iZwz91h49n3mj8r232gqweZ ~]# cd /etc/sysconfig/network-scripts/
[root@iZwz91h49n3mj8r232gqweZ network-scripts]# ll -la
total 240
drwxr-xr-x. 2 root root  4096 Nov 28 21:55 .
drwxr-xr-x. 6 root root  4096 Nov 28 21:55 ..
-rw-------  1 root root    38 Nov 28 13:55 ifcfg-eth0
-rw-r--r--  1 root root   254 Sep 12  2016 ifcfg-lo
lrwxrwxrwx  1 root root    24 Aug 18  2017 ifdown -> ../../../usr/sbin/ifdown
-rwxr-xr-x  1 root root   627 Sep 12  2016 ifdown-bnep
-rwxr-xr-x  1 root root  5817 Sep 12  2016 ifdown-eth
-rwxr-xr-x  1 root root  6196 May 26  2017 ifdown-ib
-rwxr-xr-x  1 root root   781 Sep 12  2016 ifdown-ippp
-rwxr-xr-x  1 root root  4201 Sep 12  2016 ifdown-ipv6
lrwxrwxrwx  1 root root    11 Aug 18  2017 ifdown-isdn -> ifdown-ippp
-rwxr-xr-x  1 root root  1778 Sep 12  2016 ifdown-post
-rwxr-xr-x  1 root root  1068 Sep 12  2016 ifdown-ppp
-rwxr-xr-x  1 root root   837 Sep 12  2016 ifdown-routes
-rwxr-xr-x  1 root root  1444 Sep 12  2016 ifdown-sit
-rwxr-xr-x. 1 root root  1621 Nov  6  2016 ifdown-Team
-rwxr-xr-x. 1 root root  1556 Apr 15  2016 ifdown-TeamPort
-rwxr-xr-x  1 root root  1462 Sep 12  2016 ifdown-tunnel
lrwxrwxrwx  1 root root    22 Aug 18  2017 ifup -> ../../../usr/sbin/ifup
-rwxr-xr-x  1 root root 12688 Sep 12  2016 ifup-aliases
-rwxr-xr-x  1 root root   859 Sep 12  2016 ifup-bnep
-rwxr-xr-x  1 root root 11880 Sep 12  2016 ifup-eth
-rwxr-xr-x  1 root root 10145 May 26  2017 ifup-ib
-rwxr-xr-x  1 root root 12039 Sep 12  2016 ifup-ippp
-rwxr-xr-x  1 root root 10525 Sep 12  2016 ifup-ipv6
lrwxrwxrwx  1 root root     9 Aug 18  2017 ifup-isdn -> ifup-ippp
-rwxr-xr-x  1 root root   642 Sep 12  2016 ifup-plip
-rwxr-xr-x  1 root root  1043 Sep 12  2016 ifup-plusb
-rwxr-xr-x  1 root root  2772 Sep 12  2016 ifup-post
-rwxr-xr-x  1 root root  4154 Sep 12  2016 ifup-ppp
-rwxr-xr-x  1 root root  1925 Sep 12  2016 ifup-routes
-rwxr-xr-x  1 root root  3263 Sep 12  2016 ifup-sit
-rwxr-xr-x. 1 root root  1755 Apr 15  2016 ifup-Team
-rwxr-xr-x. 1 root root  1876 Apr 15  2016 ifup-TeamPort
-rwxr-xr-x  1 root root  2682 Sep 12  2016 ifup-tunnel
-rwxr-xr-x  1 root root  1740 Sep 12  2016 ifup-wireless
-rwxr-xr-x  1 root root  4623 Sep 12  2016 init.ipv6-global
-rw-r--r--  1 root root 15780 Apr 13  2017 network-functions
-rw-r--r--  1 root root 26829 Sep 12  2016 network-functions-ipv6
[root@iZwz91h49n3mj8r232gqweZ network-scripts]# pwd
/etc/sysconfig/network-scripts
[root@iZwz91h49n3mj8r232gqweZ network-scripts]# vim ifcfg-eth0 

Contents:

DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes

Explanation

1. DEVICE=eth0 — the interface this file configures is eth0
2. BOOTPROTO=dhcp — the boot protocol is DHCP
3. ONBOOT=yes — bring the interface up automatically at boot
4. The network configuration can be changed by editing this file
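For comparison, a static configuration would replace BOOTPROTO=dhcp with explicit addressing. The sketch below is hypothetical (the addresses are made-up examples, not from this machine); the field names are the standard initscripts ones, and the file itself is just shell-style variable assignments:

```shell
# Hypothetical static version of /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0            # interface this file configures
BOOTPROTO=static       # static addressing instead of dhcp
ONBOOT=yes             # bring the interface up at boot
IPADDR=192.168.1.100   # example address (assumption, not from this machine)
NETMASK=255.255.255.0  # a /24 network
GATEWAY=192.168.1.1    # example default gateway
```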

IP operations on an interface

Add an IP address to an interface

Interface configuration before the addition

[root@iZwz91h49n3mj8r232gqweZ network-scripts]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:16:3e:0c:ae:fc brd ff:ff:ff:ff:ff:ff
    inet 172.16.252.139/24 brd 172.16.252.255 scope global dynamic eth0
       valid_lft 315089132sec preferred_lft 315089132sec

Run the command to add the address

[root@iZwz91h49n3mj8r232gqweZ network-scripts]# ip addr add 192.168.0.88 dev eth0

Notes

dev  — short for "device": names the interface the address is attached to
addr — address

Interface configuration after the addition

[root@iZwz91h49n3mj8r232gqweZ network-scripts]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:16:3e:0c:ae:fc brd ff:ff:ff:ff:ff:ff
    inet 172.16.252.139/24 brd 172.16.252.255 scope global dynamic eth0
       valid_lft 315089090sec preferred_lft 315089090sec
    inet 192.168.0.88/32 scope global eth0
       valid_lft forever preferred_lft forever

Delete an IP address from an interface

[root@iZwz91h49n3mj8r232gqweZ ~]# ip addr delete 192.168.0.88/32 dev eth0
[root@iZwz91h49n3mj8r232gqweZ ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:16:3e:0c:ae:fc brd ff:ff:ff:ff:ff:ff
    inet 172.16.252.139/24 brd 172.16.252.255 scope global dynamic eth0
       valid_lft 315088740sec preferred_lft 315088740sec

Notes

1. Note the /32 in 192.168.0.88/32: 32 is the prefix length (subnet mask). Because the address was added earlier without an explicit prefix, the kernel defaulted to /32, so the delete must specify /32 as well.
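The relationship between a prefix length and a dotted netmask can be sketched in shell arithmetic (prefix_to_mask is a helper written for this illustration, not a system command):

```shell
# Convert a CIDR prefix length (1-32) to a dotted-quad netmask.
prefix_to_mask() {
    local mask=$(( (0xFFFFFFFF << (32 - $1)) & 0xFFFFFFFF ))
    echo "$(( (mask >> 24) & 255 )).$(( (mask >> 16) & 255 )).$(( (mask >> 8) & 255 )).$(( mask & 255 ))"
}

prefix_to_mask 32   # 255.255.255.255 -- a single host, the default when no prefix is given
prefix_to_mask 24   # 255.255.255.0  -- the usual LAN-sized network
```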

Interface state

[root@iZwz91h49n3mj8r232gqweZ ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:16:3e:0c:ae:fc brd ff:ff:ff:ff:ff:ff
    inet 172.16.252.139/24 brd 172.16.252.255 scope global dynamic eth0
       valid_lft 315088740sec preferred_lft 315088740sec

Notes

1. The state field in the output shows the interface's operational state (UP here).
2. Common states include:
  	UP
  	DOWN
  	UNKNOWN (plus transitional states such as LOWERLAYERDOWN, seen later for a veth whose peer is still down)

Bring an interface up: ifup eth0

[root@iZwz91h49n3mj8r232gqweZ ~]# ifup eth0

Take an interface down: ifdown eth0

[root@iZwz91h49n3mj8r232gqweZ ~]# ifdown eth0

Restarting the network service

service network restart

systemctl restart network

Interface grouping (network namespaces) and isolation

How to understand interface grouping

1. A server (one running Nginx, say) may have many network interfaces. By default none of them are grouped, which amounts to saying they all belong to one default group: the default network namespace.
2. Network isolation here means isolating network resources between such groups.

Namespace operations

List namespaces: ip netns list

[root@iZwz91h49n3mj8r232gqweZ ~]# ip netns list

Add a namespace: ip netns add ns1

[root@iZwz91h49n3mj8r232gqweZ ~]# ip netns add ns1
[root@iZwz91h49n3mj8r232gqweZ ~]# ip netns list   
ns1

Delete a namespace: ip netns delete ns1

[root@iZwz91h49n3mj8r232gqweZ ~]# ip netns list
ns1
[root@iZwz91h49n3mj8r232gqweZ ~]# ip netns delete ns1
[root@iZwz91h49n3mj8r232gqweZ ~]# ip netns list      
[root@iZwz91h49n3mj8r232gqweZ ~]# 

Show IP info inside a namespace: ip netns exec ns1 ip a (or ip netns exec ns1 ip link show)

[root@iZwz91h49n3mj8r232gqweZ ~]# ip netns exec ns1 ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

[root@iZwz91h49n3mj8r232gqweZ ~]# ip netns exec ns1 ip link show
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
1. This command simply runs one of the ordinary viewing commands inside the namespace:
  ip netns exec ns1 + any of the earlier commands (ip a, ip link show, ...)
2. A newly created namespace contains only a default loopback interface, lo (initially DOWN).

Virtual Ethernet Pair (veth pair)

1. A veth pair is created as two interfaces at once; traffic sent into one end comes out the other:
ip link add veth-ns1 type veth peer name veth-ns2

Example

Before creating the pair
[root@iZwz91h49n3mj8r232gqweZ ~]# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether 00:16:3e:0c:ae:fc brd ff:ff:ff:ff:ff:ff
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT 
    link/ether 02:42:7d:a6:c6:8d brd ff:ff:ff:ff:ff:ff
6: br-d3d1516d3a7c: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT 
    link/ether 02:42:62:34:59:56 brd ff:ff:ff:ff:ff:ff
Create the two linked interfaces as a pair (the first attempt below omits veth after type and fails; the second is correct):
ip link add veth-ns1 type veth peer name veth-ns2
[root@iZwz91h49n3mj8r232gqweZ ~]# ip link add veth-ns1 type peer name veth-ns2
Garbage instead of arguments "name ...". Try "ip link help".
[root@iZwz91h49n3mj8r232gqweZ ~]# ip link add veth-ns1 type veth peer name veth-ns2 
[root@iZwz91h49n3mj8r232gqweZ ~]# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether 00:16:3e:0c:ae:fc brd ff:ff:ff:ff:ff:ff
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT 
    link/ether 02:42:7d:a6:c6:8d brd ff:ff:ff:ff:ff:ff
6: br-d3d1516d3a7c: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT 
    link/ether 02:42:62:34:59:56 brd ff:ff:ff:ff:ff:ff
37: veth-ns2@veth-ns1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether 1e:db:67:64:ab:1b brd ff:ff:ff:ff:ff:ff
38: veth-ns1@veth-ns2: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether a2:55:52:3e:7e:cd brd ff:ff:ff:ff:ff:ff
Move an interface into a specific network namespace

Move veth-ns1 into ns1: ip link set veth-ns1 netns ns1

[root@iZwz91h49n3mj8r232gqweZ ~]# ip link set veth-ns1 netns ns1
[root@iZwz91h49n3mj8r232gqweZ ~]# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether 00:16:3e:0c:ae:fc brd ff:ff:ff:ff:ff:ff
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT 
    link/ether 02:42:7d:a6:c6:8d brd ff:ff:ff:ff:ff:ff
6: br-d3d1516d3a7c: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT 
    link/ether 02:42:62:34:59:56 brd ff:ff:ff:ff:ff:ff
37: veth-ns2@if38: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether 1e:db:67:64:ab:1b brd ff:ff:ff:ff:ff:ff link-netnsid 12
[root@iZwz91h49n3mj8r232gqweZ ~]# 
1. After moving the interface into the namespace, running ip link show again shows that veth-ns1@veth-ns2 has disappeared from the host's list.
2. It was previously in the default namespace (no explicit group) and has now been assigned to ns1, as ip netns exec ns1 ip link show below confirms:
[root@iZwz91h49n3mj8r232gqweZ ~]# ip netns exec ns1 ip link show
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
38: veth-ns1@if37: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether a2:55:52:3e:7e:cd brd ff:ff:ff:ff:ff:ff link-netnsid 0
[root@iZwz91h49n3mj8r232gqweZ ~]# 

Move veth-ns2 into ns2: ip link set veth-ns2 netns ns2

[root@iZwz91h49n3mj8r232gqweZ ~]# ip link set veth-ns2 netns ns2
[root@iZwz91h49n3mj8r232gqweZ ~]# ip netns exec ns2 ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
37: veth-ns2@if38: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 1e:db:67:64:ab:1b brd ff:ff:ff:ff:ff:ff link-netnsid 0
The pair is now set up, one end in each namespace.

Assign IP addresses: ip netns exec ns1 ip addr add 192.168.1.10/24 dev veth-ns1

Add an IP in namespace ns1

[root@iZwz91h49n3mj8r232gqweZ ~]# ip netns exec ns1 ip addr add  192.168.1.10/24 dev veth-ns1
[root@iZwz91h49n3mj8r232gqweZ ~]# 
[root@iZwz91h49n3mj8r232gqweZ ~]# ip netns exec ns1 ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
38: veth-ns1@if37: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether a2:55:52:3e:7e:cd brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet 192.168.1.10/24 scope global veth-ns1
       valid_lft forever preferred_lft forever
[root@iZwz91h49n3mj8r232gqweZ ~]#   

Add an IP in namespace ns2

[root@iZwz91h49n3mj8r232gqweZ ~]# ip netns exec ns2 ip addr add  192.168.1.11/24 dev veth-ns2
[root@iZwz91h49n3mj8r232gqweZ ~]# ip netns exec ns2 ip a 
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
37: veth-ns2@if38: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 1e:db:67:64:ab:1b brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.1.11/24 scope global veth-ns2
       valid_lft forever preferred_lft forever
[root@iZwz91h49n3mj8r232gqweZ ~]#
1. ip netns exec ns1 means: run the following command inside the ns1 namespace.
2. ip addr add 192.168.1.10/24 dev veth-ns1 gives veth-ns1 the address 192.168.1.10 with a /24 prefix (netmask 255.255.255.0).
3. ip a now shows the newly added address.
4. In veth-ns2@if38: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN, the trailing DOWN means the interface has not been brought up yet.

Bring the interfaces up

Bring up veth-ns1 in ns1

[root@iZwz91h49n3mj8r232gqweZ ~]# ip netns exec ns1 ip link set veth-ns1 up
[root@iZwz91h49n3mj8r232gqweZ ~]# ip netns exec ns1 ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
38: veth-ns1@if37: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN qlen 1000
    link/ether a2:55:52:3e:7e:cd brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet 192.168.1.10/24 scope global veth-ns1
       valid_lft forever preferred_lft forever
[root@iZwz91h49n3mj8r232gqweZ ~]# 

Bring up veth-ns2 in ns2

[root@iZwz91h49n3mj8r232gqweZ ~]# ip netns exec ns2 ip link set veth-ns2 up 
[root@iZwz91h49n3mj8r232gqweZ ~]# ip netns exec ns2 ip a 
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
37: veth-ns2@if38: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 1e:db:67:64:ab:1b brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.1.11/24 scope global veth-ns2
       valid_lft forever preferred_lft forever
    inet6 fe80::1cdb:67ff:fe64:ab1b/64 scope link 
       valid_lft forever preferred_lft forever
[root@iZwz91h49n3mj8r232gqweZ ~]# 

Notes

1. The UP flag in <NO-CARRIER,BROADCAST,MULTICAST,UP> shows the interface is now up; NO-CARRIER appears because the peer end had not yet been brought up at that moment.

Test that the pair works

Ping the ns2 address from within namespace ns1

[root@iZwz91h49n3mj8r232gqweZ ~]# ip netns exec ns1 ping 192.168.1.11
PING 192.168.1.11 (192.168.1.11) 56(84) bytes of data.
64 bytes from 192.168.1.11: icmp_seq=1 ttl=64 time=0.067 ms
64 bytes from 192.168.1.11: icmp_seq=2 ttl=64 time=0.047 ms
64 bytes from 192.168.1.11: icmp_seq=3 ttl=64 time=0.048 ms

Ping the ns1 address from within namespace ns2

[root@iZwz91h49n3mj8r232gqweZ ~]# ip netns exec ns2 ping 192.168.1.10     
PING 192.168.1.10 (192.168.1.10) 56(84) bytes of data.
64 bytes from 192.168.1.10: icmp_seq=1 ttl=64 time=0.060 ms
64 bytes from 192.168.1.10: icmp_seq=2 ttl=64 time=0.047 ms
64 bytes from 192.168.1.10: icmp_seq=3 ttl=64 time=0.050 ms
^Z
[2]+  Stopped                 ip netns exec ns2 ping 192.168.1.10
[root@iZwz91h49n3mj8r232gqweZ ~]# 
1. The pings succeed in both directions, so the pair works and the two namespaces can now reach each other.
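The whole walkthrough above condenses into one short script. It must run as root (network namespace operations need CAP_NET_ADMIN), which is why no test output is shown here; the names and addresses are the ones used above:

```shell
#!/bin/sh
# End-to-end veth + namespace demo; run as root.
set -e

ip netns add ns1
ip netns add ns2

# create one veth pair, then move one end into each namespace
ip link add veth-ns1 type veth peer name veth-ns2
ip link set veth-ns1 netns ns1
ip link set veth-ns2 netns ns2

# address and bring up each end
ip netns exec ns1 ip addr add 192.168.1.10/24 dev veth-ns1
ip netns exec ns2 ip addr add 192.168.1.11/24 dev veth-ns2
ip netns exec ns1 ip link set veth-ns1 up
ip netns exec ns2 ip link set veth-ns2 up

# verify connectivity, then clean up (deleting a namespace removes its end
# of the pair, which destroys the peer end as well)
ip netns exec ns1 ping -c 3 192.168.1.11
ip netns delete ns1
ip netns delete ns2
```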

How to verify that two interfaces form a pair (Virtual Ethernet Pair)

Install the bridge-utils bridge tools (yum install bridge-utils)

[root@iZwz91h49n3mj8r232gqweZ ~]# yum install bridge-utils
Loaded plugins: fastestmirror
base                                                                                                                                                                                        | 3.6 kB  00:00:00     
docker-ce-stable                                                                                                                                                                            | 3.5 kB  00:00:00     
epel                                                                                                                                                                                        | 5.4 kB  00:00:00     
extras                                                                                                                                                                                      | 2.9 kB  00:00:00     
updates                                                                                                                                                                                     | 2.9 kB  00:00:00     
(1/3): epel/x86_64/updateinfo                                                                                                                                                               | 1.0 MB  00:00:00     
(2/3): updates/7/x86_64/primary_db                                                                                                                                                          | 5.1 MB  00:00:00     
(3/3): epel/x86_64/primary_db                                                                                                                                                               | 6.9 MB  00:00:00     
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package bridge-utils.x86_64 0:1.5-9.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

===================================================================================================================================================================================================================
 Package                                                Arch                                             Version                                              Repository                                      Size
===================================================================================================================================================================================================================
Installing:
 bridge-utils                                           x86_64                                           1.5-9.el7                                            base                                            32 k

Transaction Summary
===================================================================================================================================================================================================================
Install  1 Package

Total download size: 32 k
Installed size: 56 k
Is this ok [y/d/N]: y
Downloading packages:
bridge-utils-1.5-9.el7.x86_64.rpm                                                                                                                                                           |  32 kB  00:00:00     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : bridge-utils-1.5-9.el7.x86_64                                                                                                                                                                   1/1 
  Verifying  : bridge-utils-1.5-9.el7.x86_64                                                                                                                                                                   1/1 

Installed:
  bridge-utils.x86_64 0:1.5-9.el7                                                                                                                                                                                  

Complete!
[root@iZwz91h49n3mj8r232gqweZ ~]# 

View the bridges

[root@iZwz91h49n3mj8r232gqweZ ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
br-d3d1516d3a7c         8000.024262345956       no      veth09c86df
                                                        veth0e78003
                                                        veth2881aeb
                                                        veth5aaebfd
                                                        veth9fe54e4
                                                        vethd6d7476
                                                        vethed86455
                                                        vethf151547
                                                        vethf7699f4
                                                        vethfa01851
docker0         8000.02427da6c68d       no              veth20601e8
                                                        veth6991d55
[root@iZwz91h49n3mj8r232gqweZ ~]# 
1. Per the output above, the bridge br-d3d1516d3a7c has these member interfaces:
veth09c86df
veth0e78003
veth2881aeb
veth5aaebfd
veth9fe54e4
vethd6d7476
vethed86455
vethf151547
vethf7699f4
vethfa01851
while the docker0 bridge has veth20601e8 and veth6991d55.
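brctl shows bridge membership rather than pairing. To confirm which interface is the other end of a veth, one option (assuming ethtool is installed) is the veth driver's peer_ifindex statistic; the @ifN suffix in ip link output encodes the same index. A sketch using a device name from the listings above:

```shell
# Print the ifindex of the peer of a veth device. The veth driver exposes
# it as the "peer_ifindex" statistic; an interface shown as veth-ns2@if38
# in ip link output has its peer at ifindex 38.
veth=veth09c86df    # any existing veth from the listings above
ethtool -S "$veth" | awk '/peer_ifindex/ { print $2 }'
```

This only works on a machine where the named veth actually exists, so no sample output is shown.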

Reprinted from blog.csdn.net/u014636209/article/details/103374944