Docker Container Networking: Single-Host Container Communication

When we run multiple Docker containers on a single physical or virtual machine, how do these containers communicate with each other, and how does the outside world reach them? This is the domain of single-host container networking. After installation, Docker creates three types of networks on the host by default, which we can list with docker network ls, as shown below:

docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
8ad1446836a4        bridge              bridge              local
3be441aa5d9f        host                host                local
e0542a92df5c        none                null                local

The following sections introduce each of these three networks.

1. none

The none network provides only loopback: a container on this network has just the lo virtual NIC inside it and cannot communicate with the outside world. It is selected with --network=none (or the short form --net=none). Below we create a container on the none network and inspect its network devices.

[root@VM_0_12_centos ~]# docker run -dit --net=none --name=bbox3 busybox 
7c97d179f742bcd280d2f436427b18b2381c9588773ca612cd71bf840e0db2ea
[root@VM_0_12_centos ~]# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
7c97d179f742        busybox             "sh"                2 seconds ago       Up 2 seconds                            bbox3

Entering the container, only the lo NIC exists:

[root@VM_0_12_centos ~]# docker exec -it bbox3 sh
/ # ifconfig
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
Trying to access the external network fails:
/ # ping www.baidu.com
ping: bad address 'www.baidu.com'

The local loopback address can be pinged:
/ # ping 127.0.0.1
PING 127.0.0.1 (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: seq=0 ttl=64 time=0.068 ms
64 bytes from 127.0.0.1: seq=1 ttl=64 time=0.051 ms
64 bytes from 127.0.0.1: seq=2 ttl=64 time=0.067 ms
64 bytes from 127.0.0.1: seq=3 ttl=64 time=0.068 ms
64 bytes from 127.0.0.1: seq=4 ttl=64 time=0.071 ms
^C
--- 127.0.0.1 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0.051/0.065/0.071 ms

External addresses cannot be reached:
/ # ping 192.168.1.201
PING 192.168.1.201 (192.168.1.201): 56 data bytes
ping: sendto: Network is unreachable

The none network is rarely used. It mainly fits tasks with high security requirements that do not need to communicate with the outside world.
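As a sketch of such a task (the bind-mount path and file name here are hypothetical), a none-network container can still process local data passed in through a volume while staying completely unreachable:

# Checksum a bind-mounted file inside a container with no network at all.
# ./data/input.bin is a made-up example path; adjust to a real file.
docker run --rm --net=none -v "$PWD/data:/data:ro" busybox md5sum /data/input.bin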

2. host

A container using the host network shares the network stack with the host machine, including its IP address and ports. The advantage is performance, since there is no extra bridging or NAT layer; the drawback is port conflicts, because multiple containers cannot expose the same port to the outside.

The host network is selected with --net=host, as shown below.

Start a container in host network mode:
[root@VM_0_12_centos ~]# docker run -dit --net=host --name=nginx1 nginx
48e1df0c535193bfdef92f718de2c76427d88a371f14be1274022288cbe4ece6
The containerized application can now be reached via the host's IP and port:

[root@VM_0_12_centos ~]# telnet 172.26.0.12 80
Trying 172.26.0.12...
Connected to 172.26.0.12.
Escape character is '^]'.
Inside the container, all of the host's network devices are visible:

[root@VM_0_12_centos ~]# docker exec -it nginx1 ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
        inet 172.17.0.1 netmask 255.255.0.0 broadcast 0.0.0.0
        ether 02:42:29:18:39:27 txqueuelen 0 (Ethernet)
        RX packets 0 bytes 0 (0.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 0 bytes 0 (0.0 B)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 172.26.0.12 netmask 255.255.240.0 broadcast 172.26.15.255
        ether 52:54:00:5d:99:43 txqueuelen 1000 (Ethernet)
        RX packets 185960339 bytes 46982274554 (43.7 GiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 169979929 bytes 50689492409 (47.2 GiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
        inet 127.0.0.1 netmask 255.0.0.0
        loop txqueuelen 1 (Local Loopback)
        RX packets 86 bytes 4827 (4.7 KiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 86 bytes 4827 (4.7 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

Because a host-network container shares the host's network stack, host networking can even support cross-host container communication: traffic between such containers is simply traffic between different ports on different hosts. The major drawback is port conflicts: multiple containers on the same host cannot expose the same port. Furthermore, this mode largely gives up the isolation containers are meant to provide, since every container shares the network stack with the host. For large-scale deployments, the container network mode is therefore rarely set to host.
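The port-conflict limitation is easy to demonstrate. A sketch (nginx2 is a hypothetical second container, and the exact log text depends on the nginx version):

# A second host-network nginx tries to bind port 80, which nginx1 already
# holds, so this container exits almost immediately.
docker run -d --net=host --name=nginx2 nginx
# Its logs should show something like:
# nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
docker logs nginx2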

3. bridge

By default, when --net is not specified, containers are created with the bridge network mode.

In this mode, each container gets its own network namespace, and with it its own network devices and IP address.
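To see what a bare network namespace looks like independent of Docker, we can create one by hand with iproute2 (the namespace name demo-ns is arbitrary); like a none-network container, it starts with only a loopback device:

# Create a fresh network namespace and list its devices.
ip netns add demo-ns
ip netns exec demo-ns ip link show   # shows only lo, initially DOWN
# Clean up.
ip netns del demo-ns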

Since every container has its own IP, containers on the same host need a bridge device to reach one another. When Docker is installed, it creates a Linux bridge named docker0 by default, and every bridge-mode container we create is attached to docker0.
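Under the hood, this wiring can be reproduced by hand with iproute2. The sketch below (demo-br0, veth-host, and veth-cont are made-up names) mirrors what Docker does when it attaches a container to docker0:

# Create a bridge and a veth pair, then plug one end into the bridge.
ip link add name demo-br0 type bridge
ip link add veth-host type veth peer name veth-cont
ip link set veth-host master demo-br0
ip link set demo-br0 up
ip link set veth-host up
# Docker would move the other end (veth-cont) into the container's
# network namespace and rename it to eth0 there.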

Now create two containers on the bridge network:

[root@VM_0_12_centos ~]# docker run -dit  --name=bbox2 busybox          
7b4221300b296026126d3cf600db39bed68e4048729982d6e09100f59ec900b7

[root@VM_0_12_centos ~]# docker run -dit  --name=bbox1 busybox
91dd8c37571d69eacafe562f97a7c476f67a6189a0e58b4a95cc9a8f2ac013df
We can inspect the bridge network's configuration: the subnet, the gateway address (the docker0 bridge itself), and the IP and MAC addresses assigned to each container:
[root@VM_0_12_centos ~]# docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "5b6d64e4b4433bb639b26a3f4a0e828ecb1b2a54984cb01225dd682322fa61d4",
        "Created": "2019-10-15T15:52:52.560070504+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "7b4221300b296026126d3cf600db39bed68e4048729982d6e09100f59ec900b7": {
                "Name": "bbox2",
                "EndpointID": "b3745248db52c2ea8e6eb7d05a5e581113105c4979717759d2d49bfebda2be49",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            },
            "91dd8c37571d69eacafe562f97a7c476f67a6189a0e58b4a95cc9a8f2ac013df": {
                "Name": "bbox1",
                "EndpointID": "60b01cb330806b9fbbc9b212530b6561f7f56e697ac6ea1040f2e31419ed74d5",
                "MacAddress": "02:42:ac:11:00:04",
                "IPv4Address": "172.17.0.4/16",
                "IPv6Address": ""
            },
            "f2f176f13894b434b4e7f6f6bcc68af1782559295d7ca430561eaf679d288deb": {
                "Name": "nginx1",
                "EndpointID": "d0f39a169126cd8222514eddfbba008a8e7f6727df8d49c25e19e35f6e53e191",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
...
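If only a container's address is needed, the Go template support in docker inspect can extract it directly instead of scanning the full JSON; a small sketch:

# Print bbox1's IPv4 address on the default bridge network.
docker inspect -f '{{ .NetworkSettings.Networks.bridge.IPAddress }}' bbox1
# For the setup above this should print 172.17.0.4.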
Running ifconfig on the host shows that two new virtual NICs whose names begin with veth have been created:

[root@VM_0_12_centos ~]# ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 172.17.0.1 netmask 255.255.0.0 broadcast 0.0.0.0
        ether 02:42:29:18:39:27 txqueuelen 0 (Ethernet)
        RX packets 7 bytes 278 (278.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 6 bytes 452 (452.0 B)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 172.26.0.12 netmask 255.255.240.0 broadcast 172.26.15.255
        ether 52:54:00:5d:99:43 txqueuelen 1000 (Ethernet)
        RX packets 185985778 bytes 46985652515 (43.7 GiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 170003246 bytes 50693831930 (47.2 GiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
        inet 127.0.0.1 netmask 255.0.0.0
        loop txqueuelen 1 (Local Loopback)
        RX packets 86 bytes 4827 (4.7 KiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 86 bytes 4827 (4.7 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

veth7614922: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        ether be:a8:fb:ed:e5:e8 txqueuelen 0 (Ethernet)
        RX packets 5 bytes 378 (378.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 6 bytes 420 (420.0 B)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

vethe3d2ca0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        ether f2:ec:f2:ab:55:a1 txqueuelen 0 (Ethernet)
        RX packets 11 bytes 712 (712.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 11 bytes 830 (830.0 B)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
One end of each veth pair is attached to the docker0 bridge:

[root@VM_0_12_centos ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
docker0         8000.024229183927       no              veth7614922
                                                        vethe3d2ca0
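On hosts without the bridge-utils package, the same information is available through iproute2; a quick sketch using the interface names from the output above:

# List the interfaces enslaved to the docker0 bridge.
ip link show master docker0
# Alternatively, use the bridge(8) tool that ships with iproute2.
bridge link show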

The other end of each veth pair is placed inside a container, where it appears as the eth0 virtual NIC; each container's eth0 corresponds one-to-one with a veth NIC on the host.
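To work out which host-side veth belongs to which container, a common trick is to read the peer interface index from inside the container and match it on the host. A sketch (the index 9 below is only an example):

# iflink inside the container holds the interface index of the host-side veth peer.
docker exec bbox1 cat /sys/class/net/eth0/iflink
# Suppose it prints 9; then find the link with index 9 on the host.
ip -o link | grep '^9:'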

Enter each of the two containers and inspect its network devices:

[root@VM_0_12_centos ~]# docker exec -it bbox1 ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:04   ------------ (the other end of a veth pair)
          inet addr:172.17.0.4  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:11 errors:0 dropped:0 overruns:0 frame:0
          TX packets:11 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:830 (830.0 B)  TX bytes:712 (712.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

[root@VM_0_12_centos ~]# docker exec -it bbox2 ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:03   ------------ (the other end of a veth pair)
          inet addr:172.17.0.3  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:6 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:420 (420.0 B)  TX bytes:378 (378.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Each container thus has its own eth0 virtual NIC, with its own MAC and IP address. Now enter bbox1 and ping bbox2's IP to check connectivity:

[root@VM_0_12_centos ~]# docker exec -it bbox1 ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.087 ms
64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.081 ms
64 bytes from 172.17.0.3: seq=2 ttl=64 time=0.063 ms
64 bytes from 172.17.0.3: seq=3 ttl=64 time=0.086 ms
64 bytes from 172.17.0.3: seq=4 ttl=64 time=0.082 ms
64 bytes from 172.17.0.3: seq=5 ttl=64 time=0.095 ms
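
This covers container-to-container communication on one host. For the other question raised at the start, how the outside world reaches a bridge-mode container, the usual mechanism is port publishing with -p, which installs a DNAT rule mapping a host port to the container. A minimal sketch (the container name nginx-web and host port 8080 are arbitrary):

# Publish container port 80 on host port 8080.
docker run -d --name=nginx-web -p 8080:80 nginx
# External clients can now reach nginx through the host's address:
# curl http://<host-ip>:8080/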

Reposted from www.cnblogs.com/justinli/p/11679270.html