Docker's four network modes, advanced network configuration (user-defined networks), promiscuous mode with macvlan, communication between containers on two hosts, and data sharing between containers

[root@foundation60 kiosk]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
01435b253636        bridge              bridge              local
c72e381b2201        host                host                local
9fc7fdced619        none                null                local
The docker network ls command lists networks. It shows every network the engine daemon knows about, including networks that span multiple hosts in a swarm cluster.

Docker's four network modes
When Docker creates a container it can use one of four network modes. bridge is the default and needs no --net option; the other three must be selected at container creation with --net.

bridge mode: specified with --net=bridge (the default).
none mode: specified with --net=none.
host mode: specified with --net=host.
container mode: specified with --net=container:<container name or ID>.

I: bridge mode

bridge mode: Docker's network isolation is based on network namespaces. When a container is created on the physical host it gets its own network namespace, and the container's interface is bridged onto the host's virtual bridge (docker0).
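A quick way to see how this wiring looks from the host (a minimal sketch; it assumes the default bridge is named docker0 and that bridge-utils is installed):

docker network inspect bridge    # shows the 172.17.0.0/16 subnet and which containers are attached
brctl show docker0               # lists the veth interfaces paired with the containers' eth0
ip addr show docker0             # the bridge holds the containers' gateway address, 172.17.0.1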


1: Create a container
[root@foundation60 kiosk]# docker run -it --name vm1 ubuntu
root@1332768bb792:/# ip addr   ##check the network: the container's eth0 is bridged to docker0
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
12: eth0@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
Press Ctrl+P then Ctrl+Q to detach from the container without stopping it.
[root@foundation60 kiosk]# docker ps   ##two containers are now running
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                            NAMES
1332768bb792        ubuntu              "/bin/bash"              3 minutes ago       Up 3 minutes                                         vm1
cdcb915a6b08        registry            "/bin/registry /etc/…"   47 hours ago        Up 6 minutes        0.0.0.0:443->443/tcp, 5000/tcp   registry

[root@foundation60 kiosk]# brctl show   ##docker0 appears; since two containers are running, two veth interfaces are attached to it
bridge name	bridge id		STP enabled	interfaces
br0		8000.d481d7920d84	no		p9p1
docker0		8000.0242d6f8bb0c	no		veth76f2b3b
							vethedea09c
virbr0		8000.525400c1746a	yes		virbr0-nic
virbr1		8000.525400ae401a	yes		virbr1-nic

[root@foundation60 kiosk]# ip addr   ##viewed from the host, each running container has a corresponding veth interface
9: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:d6:f8:bb:0c brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:d6ff:fef8:bb0c/64 scope link 
       valid_lft forever preferred_lft forever
11: veth76f2b3b@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP 
    link/ether 0e:5e:8d:07:82:45 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::c5e:8dff:fe07:8245/64 scope link 
       valid_lft forever preferred_lft forever
13: vethedea09c@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP 
    link/ether 06:c0:14:36:8b:79 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::4c0:14ff:fe36:8b79/64 scope link 
       valid_lft forever preferred_lft forever

2: Find the container's PID
[root@foundation60 Desktop]# docker inspect vm1 | grep Pid
            "Pid": 3545,
            "PidMode": "",
            "PidsLimit": 0,
3: Check the process
[root@foundation60 Desktop]# ps ax
 3545 pts/0    Ss+    0:00 /bin/bash ##the corresponding process is running on the host
4: Inspect the files under that process in /proc
[root@foundation60 Desktop]# cd /proc/3
3/    305/  307/  31/   311/  3160/ 32/   335/  338/  35/   3545/ 398/  
30/   306/  308/  310/  3153/ 3178/ 33/   337/  34/   3527/ 3674/ 399/  
[root@foundation60 Desktop]# cd /proc/3545/
[root@foundation60 3545]# ls
attr             cpuset   limits      net            projid_map  stat
autogroup        cwd      loginuid    ns             root        statm
auxv             environ  map_files   numa_maps      sched       status
cgroup           exe      maps        oom_adj        schedstat   syscall
clear_refs       fd       mem         oom_score      sessionid   task
cmdline          fdinfo   mountinfo   oom_score_adj  setgroups   timers
comm             gid_map  mounts      pagemap        smaps       uid_map
coredump_filter  io       mountstats  personality    stack       wchan
[root@foundation60 3545]# cd ns/
[root@foundation60 ns]# ls
ipc  mnt  net  pid  user  uts
[root@foundation60 ns]# ll
total 0
lrwxrwxrwx 1 root root 0 3月  19 18:14 ipc -> ipc:[4026532507]
lrwxrwxrwx 1 root root 0 3月  19 18:14 mnt -> mnt:[4026532505]
lrwxrwxrwx 1 root root 0 3月  19 18:13 net -> net:[4026532510]
lrwxrwxrwx 1 root root 0 3月  19 18:14 pid -> pid:[4026532508]
lrwxrwxrwx 1 root root 0 3月  19 18:14 user -> user:[4026531837]
lrwxrwxrwx 1 root root 0 3月  19 18:14 uts -> uts:[4026532506]

II: host mode, specified with --net=host

host mode: a container created in this mode has no network namespace of its own; it shares the host's Network Namespace, including all of the host's ports and IP addresses. This mode is generally considered less secure.

[root@foundation60 ns]# docker run -it --name  vm2 --net host ubuntu ##with --net host the container shares the host's network
root@foundation60:/# ip addr  ##the output clearly differs from the earlier container: vm2 sees exactly the same interfaces as the host

6: Remove vm2, stop the host's httpd service, and use the nginx image
[root@foundation60 ns]# docker rm vm2 
vm2
[root@foundation60 ns]# systemctl stop httpd.service 
[root@foundation60 ns]# docker images nginx
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
nginx               latest              881bd08c0b08        2 weeks ago         109MB
Create a new vm2 container from the nginx image (again in host mode). Entering the host's IP in a browser on the physical machine now shows the nginx page, which confirms that vm2 shares the host's network.
[root@foundation60 ns]# docker run -d --name vm2 --net host nginx   
feb87d12cdec586cd9e2099bcab8c8ccfc53b24c5fb49174bfbb02ac8187c669
[root@foundation60 ns]# docker ps 
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                            NAMES
feb87d12cdec        nginx               "nginx -g 'daemon of…"   8 seconds ago       Up 7 seconds                                         vm2
1332768bb792        ubuntu              "/bin/bash"              About an hour ago   Up About an hour                                     vm1
cdcb915a6b08        registry            "/bin/registry /etc/…"   2 days ago          Up About an hour    0.0.0.0:443->443/tcp, 5000/tcp   registry
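To verify from the command line instead of a browser (a quick check, assuming curl is installed on the host and port 80 is free):

curl -I http://localhost    # nginx answers directly on the host's port 80, since vm2 uses the host's network stack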

III: none mode, specified with --net=none

none mode: a container created in this mode gets no network configuration at all: no NIC, no IP, no routes. Everything has to be configured by hand.

[root@foundation60 ns]# docker run -it --name vm3 --net none ubuntu
root@ac2a3955e30a:/# ip addr 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
root@ac2a3955e30a:/# 


Common network namespace commands
Add a namespace:
[root@foundation60 run]# ip netns add test
[root@foundation60 run]# ip netns list
test
[root@foundation60 run]# cd net
netns/     netreport/ 
[root@foundation60 run]# cd netns/
[root@foundation60 netns]# ls
test
Delete a namespace:
[root@foundation60 netns]# ip netns del test
[root@foundation60 netns]# ip netns list
[root@foundation60 netns]# pwd
/var/run/netns
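Besides add/del/list, the most useful subcommand is ip netns exec, which runs a command inside a namespace. A short sketch (the namespace name test is just an example):

ip netns add test
ip netns exec test ip link set lo up   # bring up loopback inside the namespace
ip netns exec test ip addr             # only lo is visible from inside
ip netns del test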

Manually assigning an IP to a container in none mode
(1) Create a veth pair (one end will be attached to the docker0 bridge, the other moved into the container's network namespace)
[root@foundation60 netns]#  ip link add veth0 type veth peer name veth1
(2) Check the state; both ends are still DOWN:
[root@foundation60 netns]# ip addr
14: veth1@veth0: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether d2:f4:a1:b2:71:07 brd ff:ff:ff:ff:ff:ff
15: veth0@veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN qlen 1000
(3) Bring both ends up
[root@foundation60 netns]# ip link set up veth0
[root@foundation60 netns]# ip link set up veth1
14: veth1@veth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether d2:f4:a1:b2:71:07 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::d0f4:a1ff:feb2:7107/64 scope link 
       valid_lft forever preferred_lft forever
15: veth0@veth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 7a:89:f8:81:82:85 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::7889:f8ff:fe81:8285/64 scope link 
       valid_lft forever preferred_lft forever
(4) List bridge interfaces with brctl
[root@foundation60 netns]# brctl show
bridge name	bridge id		STP enabled	interfaces
br0		8000.d481d7920d84	no		p9p1
docker0		8000.0242d6f8bb0c	no		veth76f2b3b
							vethedea09c
virbr0		8000.525400c1746a	yes		virbr0-nic
virbr1		8000.525400ae401a	yes		virbr1-nic
(5) Attach veth0 to the docker0 bridge
[root@foundation60 netns]# brctl addif docker0 veth0
[root@foundation60 netns]# brctl show
bridge name	bridge id		STP enabled	interfaces
br0		8000.d481d7920d84	no		p9p1
docker0		8000.0242d6f8bb0c	no		veth0
							veth76f2b3b
							vethedea09c
virbr0		8000.525400c1746a	yes		virbr0-nic
virbr1		8000.525400ae401a	yes		virbr1-nic
(6) Find vm3's PID
[root@foundation60 netns]# docker inspect vm3 | grep Pid
            "Pid": 8945,
            "PidMode": "",
            "PidsLimit": 0,
(7) Create a symlink so that ip netns can manage the container's network namespace
[root@foundation60 netns]# cd /proc/8945/
[root@foundation60 8945]# ls
attr             cpuset   limits      net            projid_map  stat
autogroup        cwd      loginuid    ns             root        statm
auxv             environ  map_files   numa_maps      sched       status
cgroup           exe      maps        oom_adj        schedstat   syscall
clear_refs       fd       mem         oom_score      sessionid   task
cmdline          fdinfo   mountinfo   oom_score_adj  setgroups   timers
comm             gid_map  mounts      pagemap        smaps       uid_map
coredump_filter  io       mountstats  personality    stack       wchan
[root@foundation60 8945]# cd ns/
[root@foundation60 ns]# ls
ipc  mnt  net  pid  user  uts
[root@foundation60 ns]# ll
total 0
lrwxrwxrwx 1 root root 0 3月  19 20:21 ipc -> ipc:[4026532624]
lrwxrwxrwx 1 root root 0 3月  19 20:21 mnt -> mnt:[4026532622]
lrwxrwxrwx 1 root root 0 3月  19 19:36 net -> net:[4026532627]
lrwxrwxrwx 1 root root 0 3月  19 20:21 pid -> pid:[4026532625]
lrwxrwxrwx 1 root root 0 3月  19 20:21 user -> user:[4026531837]
lrwxrwxrwx 1 root root 0 3月  19 20:21 uts -> uts:[4026532623]
[root@foundation60 ns]# ln -s /proc/8945/ns/net /var/run/netns/8945
[root@foundation60 ns]# ip netns list
8945

(8) Move veth1 into the container's network namespace
[root@foundation60 ns]# ip link set veth1 netns 8945
(9) Rename veth1 to eth0
[root@foundation60 ns]# ip netns exec 8945 ip link set veth1 name eth0
(10) Bring up the eth0 device
[root@foundation60 ns]# ip netns exec 8945 ip link set up dev eth0
(11) Assign an IP to the device inside the namespace
[root@foundation60 ns]# ip netns exec 8945 ip addr add 172.17.0.100/24 dev eth0

root@ac2a3955e30a:/# ip addr    ###inside the container, the IP is now set
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
14: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d2:f4:a1:b2:71:07 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.100/24 scope global eth0
       valid_lft forever preferred_lft forever
Test:
root@ac2a3955e30a:/# ping 172.25.0.2
connect: Network is unreachable
root@ac2a3955e30a:/# ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.076 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.051 ms
^C
--- 172.17.0.2 ping statistics ---

(12) Set a default gateway so that vm3 can reach the outside
Add the gateway:
[root@foundation60 ns]# ip netns exec 8945 ip route add default via 172.17.0.1
root@ac2a3955e30a:/# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.17.0.1      0.0.0.0         UG    0      0        0 eth0
172.17.0.0      0.0.0.0         255.255.255.0   U     0      0        0 eth0
root@ac2a3955e30a:/# ping www.baidu.com
PING www.a.shifen.com (61.135.169.125) 56(84) bytes of data.
64 bytes from 61.135.169.125: icmp_seq=1 ttl=50 time=22.5 ms
64 bytes from 61.135.169.125: icmp_seq=2 ttl=50 time=23.9 ms
64 bytes from 61.135.169.125: icmp_seq=3 ttl=50 time=27.5 ms
64 bytes from 61.135.169.125: icmp_seq=4 ttl=50 time=23.5 ms
64 bytes from 61.135.169.125: icmp_seq=5 ttl=50 time=23.1 ms
64 bytes from 61.135.169.125: icmp_seq=6 ttl=50 time=23.3 ms
64 bytes from 61.135.169.125: icmp_seq=7 ttl=50 time=23.3 ms
^C
--- www.a.shifen.com ping statistics ---
7 packets transmitted, 7 received, 0% packet loss, time 6008ms
rtt min/avg/max/mdev = 22.575/23.932/27.507/1.522 ms

The host and the container network are now connected: as long as the host can reach the Internet, the container can too.
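This outbound access works because the Docker daemon enables IP forwarding and adds a MASQUERADE (source NAT) rule for the bridge subnet. A way to check on the host (a sketch; the exact rule can vary between Docker versions):

sysctl net.ipv4.ip_forward                     # should print 1
iptables -t nat -S POSTROUTING | grep 172.17   # typically: -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE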

IV: container network mode

(1) Find the network namespace of the "other container" (the container whose network environment will be shared);
(2) Instead of creating its own namespace, the newly created container uses the other container's namespace.
The container network mode is mainly used to improve communication between containers. Containers running in this mode can reach each other through localhost, which is very efficient. Although several containers share one network environment, as a group they are still network-isolated from the host and from other containers, and this mode also saves some network resources. Note, however, that it does not change how the containers communicate with the world outside the host.
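A quick way to see the localhost sharing in action (a minimal sketch, assuming the nginx and busybox images are available; the names here are only examples):

docker run -d --name web nginx
docker run --rm --net container:web busybox wget -qO- http://localhost   # fetches the nginx page over the shared loopback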


(1) Create vm1
[root@foundation60 ~]# docker run -it --name vm1 ubuntu
root@ee68cf3c6dec:/# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
16: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
(2) Create vm2 in container network mode
[root@foundation60 ns]# docker run -it --name vm2 --net container:vm1 ubuntu
root@ee68cf3c6dec:/# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
16: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
vm1 and vm2 now have identical network configuration.
This mode is very similar to host mode, except that the new container shares the IP and ports of another container instead of the physical host. The container does not configure its own network or ports; inside it you see the IP of the container you specified, and the ports are shared too, while everything else (processes and so on) stays isolated.

Advanced network configuration

Custom networks: Docker provides three drivers for user-defined networks:
bridge
overlay
macvlan

The bridge driver works like the default bridge network mode but adds new features.
overlay and macvlan are used to create cross-host networks.
Using custom networks to control which containers can talk to each other is recommended; they also provide automatic DNS resolution of container names to IP addresses.
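For example, name-based resolution on a user-defined network can be checked like this (a sketch; demo_net and web are example names, and the busybox image is assumed to be available):

docker network create demo_net
docker run -d --name web --net demo_net nginx
docker run --rm --net demo_net busybox ping -c 2 web   # the name web resolves to the container's IP
docker rm -f web && docker network rm demo_net         # clean up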
 

1: Create a custom bridge network my_net1
[root@foundation60 ~]# docker network create --driver bridge my_net1

2: Create container vm1 on my_net1
[root@foundation60 ~]# docker run -it --name vm1 --net my_net1 ubuntu

3: Check the IP; it is on the 172.18.0.0/16 subnet
root@720c14dfd688:/#  ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
14: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.2/16 brd 172.18.255.255 scope global eth0
       valid_lft forever preferred_lft forever

4: Without custom settings the next network would simply get the next subnet (my_net2 would land on 172.19.0.0/16). Here we want the 172.20.0.0/24 subnet instead, so we define it explicitly:
[root@foundation60 ~]# docker network create --driver bridge --subnet 172.20.0.0/24 --gateway 172.20.0.1 my_net2
[root@foundation60 ~]# docker network inspect my_net2 
	    "Subnet": "172.20.0.0/24",
            "Gateway": "172.20.0.1"


5: Give vm2 a fixed IP (172.20.0.10). Without --ip, Docker would hand out the next free address automatically (172.20.0.2, since 172.20.0.1 is the gateway).
[root@foundation60 ~]# docker run -it --name vm2 --net my_net2 --ip 172.20.0.10 ubuntu
root@674244f1166e:/# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
16: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:14:00:0a brd ff:ff:ff:ff:ff:ff
    inet 172.20.0.10/24 brd 172.20.0.255 scope global eth0
       valid_lft forever preferred_lft forever
6: Test: pinging its own gateway works
root@674244f1166e:/# ping 172.20.0.1
PING 172.20.0.1 (172.20.0.1) 56(84) bytes of data.
64 bytes from 172.20.0.1: icmp_seq=1 ttl=64 time=0.063 ms
64 bytes from 172.20.0.1: icmp_seq=2 ttl=64 time=0.068 ms
^C
--- 172.20.0.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.063/0.065/0.068/0.008 ms


Pinging the gateway of my_net1 (172.18.0.1) also works, because that address sits on the host itself:
root@674244f1166e:/# ping 172.18.0.1
PING 172.18.0.1 (172.18.0.1) 56(84) bytes of data.
64 bytes from 172.18.0.1: icmp_seq=1 ttl=64 time=0.049 ms
64 bytes from 172.18.0.1: icmp_seq=2 ttl=64 time=0.045 ms
^C
--- 172.18.0.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.045/0.047/0.049/0.002 ms




But pinging vm1 (172.18.0.2) fails, because vm1 and vm2 are on different bridge networks:
root@674244f1166e:/# ping 172.18.0.2
PING 172.18.0.2 (172.18.0.2) 56(84) bytes of data.
^C
--- 172.18.0.2 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

7: Attach vm2 to my_net1 as well
[root@foundation60 kiosk]# docker network connect my_net1 vm2

8: vm2 now has an extra interface on the 172.18.0.0/16 subnet, so vm1 and vm2 can communicate:
root@674244f1166e:/# ip  addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
16: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:14:00:0a brd ff:ff:ff:ff:ff:ff
    inet 172.20.0.10/24 brd 172.20.0.255 scope global eth0
       valid_lft forever preferred_lft forever
18: eth1@if19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:12:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.3/16 brd 172.18.255.255 scope global eth1
       valid_lft forever preferred_lft forever
root@674244f1166e:/# ping 172.18.0.2
PING 172.18.0.2 (172.18.0.2) 56(84) bytes of data.
64 bytes from 172.18.0.2: icmp_seq=1 ttl=64 time=0.155 ms
64 bytes from 172.18.0.2: icmp_seq=2 ttl=64 time=0.053 ms
^C
--- 172.18.0.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.053/0.104/0.155/0.051 ms


[root@foundation60 Desktop]# docker network ls   ##both custom networks are listed
NETWORK ID          NAME                DRIVER              SCOPE
83b7b410b788        bridge              bridge              local
c72e381b2201        host                host                local
65a3298b0397        my_net1             bridge              local
a225b419c576        my_net2             bridge              local
9fc7fdced619        none                null                local
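The reverse operations exist as well, if the extra attachment or the custom networks need to be removed later (a sketch; a network can only be removed once no container is using it):

docker network disconnect my_net1 vm2
docker network rm my_net2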

macvlan:
By putting the host NIC into promiscuous mode, macvlan lets containers on different hosts communicate directly.
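Note: in this kind of virtualized lab the parent NIC has to accept frames for the containers' extra MAC addresses, which is why promiscuous mode is enabled for eth1 later on. For eth0 the command would be the same (shown here only as a sketch):

ip link set eth0 promisc on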

On host server1:
1: Create a network with the macvlan driver
[root@server1 ~]# docker network create -d macvlan --subnet 172.25.1.0/24 --gateway 172.25.1.1 -o parent=eth0 mac_net1
0e90fe9753523cd1318df506b4917a130716b52dd54cd2b62c48c87e4a0515fa
[root@server1 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
707231be2dcc        bridge              bridge              local
21aeb8ed91db        host                host                local
0e90fe975352        mac_net1            macvlan             local
e2363758dfbb        none                null 

2: Create container vm1
[root@server1 ~]# docker run -it --name vm1 --net mac_net1 --ip 172.25.1.10 ubuntu


3: Check the IP (172.25.1.10)
root@2bf3080aaef1:/# ip  addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
4: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default 
    link/ether 02:42:ac:19:01:0a brd ff:ff:ff:ff:ff:ff
    inet 172.25.1.10/24 brd 172.25.1.255 scope global eth0
       valid_lft forever preferred_lft forever



On host server2 (its vm1 gets the IP 172.25.1.11):
[root@server2 ~]# docker network create -d macvlan --subnet 172.25.1.0/24 --gateway 172.25.1.1 -o parent=eth0 mac_net1
18a5df06f46aae68285fb250634c5d0cc9f56f6d978f44c4ac09c32af02494b0
[root@server2 ~]# docker run -it --name vm1 --net mac_net1 --ip 172.25.1.11 ubuntu
root@0f1bd4ce2371:/# ip  addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
5: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default 
    link/ether 02:42:ac:19:01:0b brd ff:ff:ff:ff:ff:ff
    inet 172.25.1.11/24 brd 172.25.1.255 scope global eth0
       valid_lft forever preferred_lft forever

5: Test: from vm1 on server2, ping vm1 on server1
root@0f1bd4ce2371:/# ping 172.25.1.10
PING 172.25.1.10 (172.25.1.10) 56(84) bytes of data.
64 bytes from 172.25.1.10: icmp_seq=1 ttl=64 time=0.611 ms
64 bytes from 172.25.1.10: icmp_seq=2 ttl=64 time=0.440 ms
^C
--- 172.25.1.10 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.440/0.525/0.611/0.088 ms
Containers on different hosts can communicate with each other.

 

Adding a second NIC for container communication

Add a NIC eth1 to both server1 and server2.
1: Bring eth1 up
[root@server1 ~]# ip link set up eth1
5: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:1e:af:a5 brd ff:ff:ff:ff:ff:ff

2: Enable promiscuous mode on eth1
[root@server1 ~]# ip link set eth1 promisc on

3: Create a macvlan network with eth1 as the parent

[root@server1 ~]# docker network create -d macvlan --subnet 172.25.2.0/24 --gateway 172.25.2.1 -o parent=eth1 mac_net2
06e3ac27f2bc2af700add02d84f9e822bdf9a027bc3bfd455782336b56fa3c3a

4: Create a container
[root@server1 ~]# docker run -it --name vm2 --net mac_net2 --ip 172.25.2.10 ubuntu
root@75ae6ab7f844:/# 

5: Check the IP
root@75ae6ab7f844:/# ip addr 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
6: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default 
    link/ether 02:42:ac:19:02:0a brd ff:ff:ff:ff:ff:ff
    inet 172.25.2.10/24 brd 172.25.2.255 scope global eth0
       valid_lft forever preferred_lft forever

On host server2:
6: Enable promiscuous mode on eth1
[root@server2 ~]# ip link set eth1 promisc on

7: Create the macvlan network
[root@server2 ~]# docker network create -d macvlan --subnet 172.25.2.0/24 --gateway 172.25.2.1 -o parent=eth1 mac_net2
d65a050ea83a7933d4fa3fe732dbb96c1421c204fbe4c20cbe5ba0e815cccee2

8: Create a container and check its IP
[root@server2 ~]# docker run -it --name vm2 --net mac_net2 --ip 172.25.2.11 ubuntu
root@b8c2cad11811:/# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
7: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default 
    link/ether 02:42:ac:19:02:0b brd ff:ff:ff:ff:ff:ff
    inet 172.25.2.11/24 brd 172.25.2.255 scope global eth0
       valid_lft forever preferred_lft forever

Test: it can communicate with vm2 (172.25.2.10) on server1
root@b8c2cad11811:/# ping 172.25.2.10
PING 172.25.2.10 (172.25.2.10) 56(84) bytes of data.
64 bytes from 172.25.2.10: icmp_seq=1 ttl=64 time=1.36 ms
^C
--- 172.25.2.10 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.368/1.368/1.368/0.000 ms

Putting multiple networks on one NIC for container communication

A macvlan network takes exclusive use of the host NIC, but VLAN sub-interfaces allow several macvlan networks to share one NIC.
VLANs split a physical layer-2 network into up to 4094 logical networks that are isolated from each other; valid VLAN IDs range from 1 to 4094.
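To see what a parent like eth1.1 actually is, a VLAN sub-interface can also be created by hand (a sketch using VLAN ID 10 as an example; recent Docker versions create the sub-interface automatically when a dotted parent name is given):

ip link add link eth1 name eth1.10 type vlan id 10   # sub-interface carrying VLAN tag 10
ip link set eth1.10 up
ip -d link show eth1.10                              # shows: vlan protocol 802.1Q id 10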

On server1:   ####eth1.1 is a VLAN sub-interface of eth1
1: Create a macvlan network on the sub-interface
[root@server1 ~]# docker network create -d macvlan --subnet 172.25.3.0/24 --gateway 172.25.3.1 -o parent=eth1.1 mac_net3
d16cd2cc5f88362de1e18e098efdc9931de481afa20adee6df345025c761e236
2: Create a container
[root@server1 ~]# docker run -it --name vm3 --net mac_net3 --ip 172.25.3.10 ubuntu
root@0df3b1239b13:/# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
6: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default 
    link/ether 02:42:ac:19:03:0a brd ff:ff:ff:ff:ff:ff
    inet 172.25.3.10/24 brd 172.25.3.255 scope global eth0
       valid_lft forever preferred_lft forever

On server2:
1: Create the same macvlan network on eth1.1
[root@server2 ~]# docker network create -d macvlan --subnet 172.25.3.0/24 --gateway 172.25.3.1 -o parent=eth1.1 mac_net3
a2182c6165fb6f156a9a51aadbf6ee119a828cf90a9bff9ca142fb3a183beb80

2: Create a container
[root@server2 ~]# docker run -it --name vm3 --net mac_net3 --ip 172.25.3.11 ubuntu
root@20de3a292c2e:/# ip addr 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
6: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default 
    link/ether 02:42:ac:19:03:0b brd ff:ff:ff:ff:ff:ff
    inet 172.25.3.11/24 brd 172.25.3.255 scope global eth0
       valid_lft forever preferred_lft forever

Test:
root@0df3b1239b13:/# ping 172.25.3.11
PING 172.25.3.11 (172.25.3.11) 56(84) bytes of data.
64 bytes from 172.25.3.11: icmp_seq=1 ttl=64 time=0.672 ms
^C
--- 172.25.3.11 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms
The two containers can communicate.

Data sharing between containers

Mounting data volumes

1: Install nfs-utils
[root@server1 ~]# yum install nfs-utils -y
2: Start the NFS service
[root@server1 ~]# systemctl start nfs

3: Start rpcbind
[root@server1 ~]# systemctl start rpcbind
[root@server1 ~]# systemctl status rpcbind
● rpcbind.service - RPC bind service
   Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; indirect; vendor preset: enabled)
   Active: active (running) since 四 2019-03-21 18:00:44 CST; 5s ago
  Process: 2772 ExecStart=/sbin/rpcbind -w $RPCBIND_ARGS (code=exited, status=0/SUCCESS)
 Main PID: 2773 (rpcbind)
   Memory: 688.0K
   CGroup: /system.slice/rpcbind.service
           └─2773 /sbin/rpcbind -w

4: Create the export directory
[root@server1 ~]# mkdir /mnt/nfs
5: Edit the exports file
[root@server1 ~]# vim /etc/exports
/mnt/nfs	*(rw,no_root_squash)

6: Reload the exports
[root@server1 ~]# exportfs -rv
exporting *:/mnt/nfs
[root@server1 ~]# showmount -e 172.25.60.1
Export list for 172.25.60.1:
/mnt/nfs *


On server2:
7: Install nfs-utils
[root@server2 ~]# yum install nfs-utils -y
8: Start the NFS service
[root@server2 ~]# systemctl start nfs
9: Start rpcbind
[root@server2 ~]# systemctl start rpcbind
[root@server2 ~]# systemctl status rpcbind
● rpcbind.service - RPC bind service
   Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; indirect; vendor preset: enabled)
   Active: active (running) since 四 2019-03-21 18:00:11 CST; 5s ago
  Process: 2726 ExecStart=/sbin/rpcbind -w $RPCBIND_ARGS (code=exited, status=0/SUCCESS)
 Main PID: 2727 (rpcbind)
   Memory: 596.0K
   CGroup: /system.slice/rpcbind.service
           └─2727 /sbin/rpcbind -w

10: Create the mount point
[root@server2 ~]# mkdir /mnt/nfs
11: Mount the NFS export
[root@server2 ~]# mount 172.25.60.1:/mnt/nfs/ /mnt/nfs/
[root@server2 ~]# df
Filesystem            1K-blocks    Used Available Use% Mounted on
/dev/mapper/rhel-root  17811456 3619392  14192064  21% /
devtmpfs                 239252       0    239252   0% /dev
tmpfs                    250224       0    250224   0% /dev/shm
tmpfs                    250224    4540    245684   2% /run
tmpfs                    250224       0    250224   0% /sys/fs/cgroup
/dev/sda1               1038336  142884    895452  14% /boot
tmpfs                     50048       0     50048   0% /run/user/0
172.25.60.1:/mnt/nfs   17811456 2524032  15287424  15% /mnt/nfs

12: Test
Create a file on server1:
[root@server1 nfs]# touch file
It is visible on server2:
[root@server2 nfs]# ls
file
Create a file on server2:
[root@server2 nfs]# touch file1
It can also be seen on server1.

The NFS shared filesystem is now in place.

13: Extract the convoy tarball
[root@server1 ~]# tar zxf convoy.tar.gz 

14: Copy the convoy binaries to /usr/local/bin/
[root@server1 ~]# cd convoy 
[root@server1 convoy]# ls
convoy  convoy-pdata_tools  SHA1SUMS
[root@server1 convoy]# cp convoy* /usr/local/bin/
[root@server1 convoy]# cd /usr/local/bin/
[root@server1 bin]# ls
convoy  convoy-pdata_tools

15: In /etc/docker/, create a plugins directory
[root@server1 bin]# cd /etc/docker/
[root@server1 docker]# ls
certs.d  key.json
[root@server1 docker]# mkdir plugins

16: Start the convoy daemon with the vfs driver, backed by the NFS share
[root@server1 docker]# convoy daemon --drivers vfs --driver-opts vfs.path=/mnt/nfs/ &> /etc/null &
[1] 3100
[root@server1 docker]# ps ax
 3100 pts/0    Sl     0:00 convoy daemon --drivers vfs --driver-opts vfs.path=/m


17: Check the shared directory
[root@server1 docker]# cd /mnt/nfs/
[root@server1 nfs]# ls
config  file  file1

[root@server1 nfs]# cd /var/run/
[root@server1 run]# ls
auditd.pid       faillock       lvmetad.pid     rpcbind        tmpfiles.d
console          gssproxy.pid   mariadb         rpcbind.lock   tuned
convoy           gssproxy.sock  mdadm           rpcbind.sock   udev
crond.pid        httpd          mount           rpc.statd.pid  user
cron.reboot      initramfs      netreport       sepermit       utmp
dbus             ksmtune.pid    NetworkManager  setrans        xtables.lock
dmeventd-client  libvirt        plymouth        sm-notify.pid  zabbix
dmeventd-server  libvirtd.pid   ppp             sshd.pid
docker           lock           radvd           sysconfig
docker.pid       log            rhnsd.pid       syslogd.pid
docker.sock      lvm            rhsm            systemd

18: Register the convoy plugin with Docker
[root@server1 run]# cd convoy/
[root@server1 convoy]# ls
convoy.sock
[root@server1 convoy]# pwd
/var/run/convoy
[root@server1 convoy]# vim convoy.sock 
[root@server1 convoy]# cd /etc/docker/plugins/
[root@server1 plugins]#  echo "unix:///var/run/convoy/convoy.sock" > /etc/docker/plugins/convoy.spec
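With the plugin spec in place, convoy volumes can be created and handed to Docker. The volume voll that appears in the listing below would have been created with something along these lines (a sketch based on convoy's vfs driver; --volume-driver tells Docker to go through the convoy plugin):

convoy create voll                                            # creates /mnt/nfs/voll through the vfs driver
docker run -it --volume-driver=convoy -v voll:/data ubuntu    # mount the convoy volume into a container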

19: List the convoy volumes
[root@server1 plugins]# convoy list
{
	"voll": {
		"Name": "voll",
		"Driver": "vfs",
		"MountPoint": "",
		"CreatedTime": "Thu Mar 21 18:30:21 +0800 2019",
		"DriverInfo": {
			"Driver": "vfs",
			"MountPoint": "",
			"Path": "/mnt/nfs/voll",
			"PrepareForVM": "false",
			"Size": "0",
			"VolumeCreatedAt": "Thu Mar 21 18:30:21 +0800 2019",
			"VolumeName": "voll"
		},
		"Snapshots": {}

	}
}


20: On server2
[root@server2 ~]# tar zxf convoy.tar.gz 
[root@server2 convoy]# cp convoy* /usr/local/bin/
[root@server2 convoy]# ls
convoy  convoy-pdata_tools  SHA1SUMS
[root@server2 convoy]# mkdir /etc/docker/plugins
[root@server2 convoy]# convoy daemon --drivers vfs --driver-opts vfs.path=/mnt/nfs/ &> /etc/null &
[1] 4175
[root@server2 convoy]# cd /etc/docker/plugins/
[root@server2 plugins]# echo "unix:///var/run/convoy/convoy.sock" > /etc/docker/plugins/convoy.spec
[root@server2 plugins]# ls
convoy.spec
[root@server2 plugins]# cat convoy.spec 
unix:///var/run/convoy/convoy.sock
[root@server2 plugins]# ll /var/run/convoy/convoy.sock 
srwxr-xr-x 1 root root 0 3月  21 18:28 /var/run/convoy/convoy.sock
[root@server2 plugins]# convoy list
{
	"voll": {
		"Name": "voll",
		"Driver": "vfs",
		"MountPoint": "",
		"CreatedTime": "Thu Mar 21 18:30:21 +0800 2019",
		"DriverInfo": {
			"Driver": "vfs",
			"MountPoint": "",
			"Path": "/mnt/nfs/voll",
			"PrepareForVM": "false",
			"Size": "0",
			"VolumeCreatedAt": "Thu Mar 21 18:30:21 +0800 2019",
			"VolumeName": "voll"
		},
		"Snapshots": {}
	}
}



21: Back on server1
[root@server1 plugins]# cd /mnt/nfs/   ##the shared directory now contains voll
[root@server1 nfs]# ls
config  file  file1  voll
22: Create a container that mounts the voll volume
[root@server1 nfs]# docker run -it --name vm1 -v voll:/data ubuntu
root@1aef210e6100:/# cd data/
root@1aef210e6100:/data# ls
root@1aef210e6100:/data# touch file{1..10}  ##create some files
root@1aef210e6100:/data# ls
file1  file10  file2  file3  file4  file5  file6  file7  file8  file9

23: The files appear in the shared directory on server2
[root@server2 plugins]# cd /mnt/nfs/voll/
[root@server2 voll]# ls
file1  file10  file2  file3  file4  file5  file6  file7  file8  file9


24: A container that mounts the same volume can also perform writes (here, deleting all the files):
[root@server1 nfs]# docker run -it --name vm1 -v voll:/data ubuntu
root@fb8f5641482c:/# cd data/
root@fb8f5641482c:/data# ls
file1  file10  file2  file3  file4  file5  file6  file7  file8  file9
root@fb8f5641482c:/data# rm -rf *
root@fb8f5641482c:/data# ls
root@fb8f5641482c:/data# 
Reposted from blog.csdn.net/yinzhen_boke_0321/article/details/88750105