Docker Learning, Part 5: Docker Container Networking

Copyright notice: if you repost, please credit the author and source: https://blog.csdn.net/qq_33317586/article/details/85416882

Managing network namespaces with ip netns

View the help:

[root@docker2 ~]#  ip netns help
Usage: ip netns list
       ip netns add NAME
       ip netns set NAME NETNSID
       ip [-all] netns delete [NAME]
       ip netns identify [PID]
       ip netns pids NAME
       ip [-all] netns exec [NAME] cmd ...
       ip netns monitor
       ip netns list-id

Create two namespaces:

[root@docker2 ~]# ip netns add r1
[root@docker2 ~]# ip netns add r2
[root@docker2 ~]# ip netns list
r2
r1

Inspect the interfaces in these two namespaces (an interface that is not up only shows with the -a flag):

[root@docker2 ~]# ip netns exec r1 ifconfig
[root@docker2 ~]# ip netns exec r1 ifconfig -a
lo: flags=8<LOOPBACK>  mtu 65536
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@docker2 ~]# ip netns exec r2 ifconfig -a
lo: flags=8<LOOPBACK>  mtu 65536
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Creating a veth pair

View the help:

[root@docker2 ~]# ip link help
Usage: ip link add [link DEV] [ name ] NAME
                   [ txqueuelen PACKETS ]
                   [ address LLADDR ]
                   [ broadcast LLADDR ]
                   [ mtu MTU ] [index IDX ]
                   [ numtxqueues QUEUE_COUNT ]
                   [ numrxqueues QUEUE_COUNT ]
                   type TYPE [ ARGS ]

       ip link delete { DEVICE | dev DEVICE | group DEVGROUP } type TYPE [ ARGS ]

       ip link set { DEVICE | dev DEVICE | group DEVGROUP }
	                  [ { up | down } ]
	                  [ type TYPE ARGS ]
	                  [ arp { on | off } ]
	                  [ dynamic { on | off } ]
	                  [ multicast { on | off } ]
	                  [ allmulticast { on | off } ]
	                  [ promisc { on | off } ]
	                  [ trailers { on | off } ]
	                  [ carrier { on | off } ]
	                  [ txqueuelen PACKETS ]
	                  [ name NEWNAME ]
	                  [ address LLADDR ]
	                  [ broadcast LLADDR ]
	                  [ mtu MTU ]
	                  [ netns { PID | NAME } ]
	                  [ link-netnsid ID ]
			  [ alias NAME ]
	                  [ vf NUM [ mac LLADDR ]
				   [ vlan VLANID [ qos VLAN-QOS ] [ proto VLAN-PROTO ] ]
				   [ rate TXRATE ]
				   [ max_tx_rate TXRATE ]
				   [ min_tx_rate TXRATE ]
				   [ spoofchk { on | off} ]
				   [ query_rss { on | off} ]
				   [ state { auto | enable | disable} ] ]
				   [ trust { on | off} ] ]
				   [ node_guid { eui64 } ]
				   [ port_guid { eui64 } ]
			  [ xdp { off |
				  object FILE [ section NAME ] [ verbose ] |
				  pinned FILE } ]
			  [ master DEVICE ][ vrf NAME ]
			  [ nomaster ]
			  [ addrgenmode { eui64 | none | stable_secret | random } ]
	                  [ protodown { on | off } ]

       ip link show [ DEVICE | group GROUP ] [up] [master DEV] [vrf NAME] [type TYPE]

       ip link xstats type TYPE [ ARGS ]

       ip link afstats [ dev DEVICE ]

       ip link help [ TYPE ]

TYPE := { vlan | veth | vcan | dummy | ifb | macvlan | macvtap |
          bridge | bond | team | ipoib | ip6tnl | ipip | sit | vxlan |
          gre | gretap | ip6gre | ip6gretap | vti | nlmon | team_slave |
          bond_slave | ipvlan | geneve | bridge_slave | vrf | macsec }

Create a veth pair: one end named veth1.1, type veth, the other end named veth1.2:

They show up as entries 5 and 6 below:

[root@docker1 ~]# ip link add name veth1.1 type veth peer name veth1.2
[root@docker1 ~]# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:be:8f:21 brd ff:ff:ff:ff:ff:ff
3: ens36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:be:8f:2b brd ff:ff:ff:ff:ff:ff
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default 
    link/ether 02:42:3a:b1:0d:27 brd ff:ff:ff:ff:ff:ff
5: veth1.2@veth1.1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:f9:d9:a2:a8:2b brd ff:ff:ff:ff:ff:ff
6: veth1.1@veth1.2: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 8a:9c:39:29:1c:6c brd ff:ff:ff:ff:ff:ff

Move veth1.2 into the r1 network namespace:

[root@docker1 ~]# ip link  set dev veth1.2 netns r1

Run ip link show again; veth1.2 no longer appears on the host:

[root@docker1 ~]# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:be:8f:21 brd ff:ff:ff:ff:ff:ff
3: ens36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:be:8f:2b brd ff:ff:ff:ff:ff:ff
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default 
    link/ether 02:42:3a:b1:0d:27 brd ff:ff:ff:ff:ff:ff
6: veth1.1@if5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 8a:9c:39:29:1c:6c brd ff:ff:ff:ff:ff:ff link-netnsid 0

Check inside r1 (again, -a is needed because the interface is down):

[root@docker1 ~]# ip netns exec r1 ifconfig -a
lo: flags=8<LOOPBACK>  mtu 65536
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth1.2: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether 52:f9:d9:a2:a8:2b  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Rename veth1.2 to eth0:

[root@docker1 ~]# ip netns exec r1 ip link set dev veth1.2 name eth0
[root@docker1 ~]# ip netns exec r1 ifconfig -a
eth0: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether 52:f9:d9:a2:a8:2b  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=8<LOOPBACK>  mtu 65536
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Assign an IP to veth1.1 and bring it up:

[root@docker1 ~]# ifconfig veth1.1 10.1.0.1/24 up
[root@docker1 ~]# ifconfig veth1.1
veth1.1: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 10.1.0.1  netmask 255.255.255.0  broadcast 10.1.0.255
        ether 8a:9c:39:29:1c:6c  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Assign an IP to eth0 in r1 and bring it up:

[root@docker1 ~]# ip netns exec r1 ifconfig eth0 10.1.0.2/24 up
[root@docker1 ~]# ip netns exec r1 ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.1.0.2  netmask 255.255.255.0  broadcast 10.1.0.255
        inet6 fe80::50f9:d9ff:fea2:a82b  prefixlen 64  scopeid 0x20<link>
        ether 52:f9:d9:a2:a8:2b  txqueuelen 1000  (Ethernet)
        RX packets 8  bytes 648 (648.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8  bytes 648 (648.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

From the host, ping r1's IP:

[root@docker1 ~]#  ping 10.1.0.2 -c3
PING 10.1.0.2 (10.1.0.2) 56(84) bytes of data.
64 bytes from 10.1.0.2: icmp_seq=1 ttl=64 time=0.093 ms
64 bytes from 10.1.0.2: icmp_seq=2 ttl=64 time=0.043 ms
64 bytes from 10.1.0.2: icmp_seq=3 ttl=64 time=0.049 ms

--- 10.1.0.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.043/0.061/0.093/0.024 ms

Move veth1.1 into r2:

[root@docker1 ~]# ip link set dev veth1.1 netns r2
[root@docker1 ~]# ip netns exec r2 ifconfig 
[root@docker1 ~]# ip netns exec r2 ifconfig -a
lo: flags=8<LOOPBACK>  mtu 65536
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth1.1: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether 8a:9c:39:29:1c:6c  txqueuelen 1000  (Ethernet)
        RX packets 13  bytes 1026 (1.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 13  bytes 1026 (1.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Assign an IP to veth1.1:

[root@docker1 ~]#  ip netns exec r2 ifconfig veth1.1 10.1.0.3/24 up
[root@docker1 ~]# ip netns exec r2 ifconfig 
veth1.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.1.0.3  netmask 255.255.255.0  broadcast 10.1.0.255
        inet6 fe80::889c:39ff:fe29:1c6c  prefixlen 64  scopeid 0x20<link>
        ether 8a:9c:39:29:1c:6c  txqueuelen 1000  (Ethernet)
        RX packets 13  bytes 1026 (1.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 19  bytes 1534 (1.4 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

From r2, ping r1:


[root@docker1 ~]# ip netns exec r2 ping 10.1.0.2 -c2
PING 10.1.0.2 (10.1.0.2) 56(84) bytes of data.
64 bytes from 10.1.0.2: icmp_seq=1 ttl=64 time=0.144 ms
64 bytes from 10.1.0.2: icmp_seq=2 ttl=64 time=0.040 ms

--- 10.1.0.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.040/0.092/0.144/0.052 ms
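The whole walkthrough above can be condensed into one script. This is a sketch, not a drop-in tool: the commands require root, so by default the script only echoes them; run it as root with APPLY=1 to actually execute them.

```shell
#!/bin/sh
# Recap of the walkthrough: two namespaces joined by a veth pair.
# Sketch only; by default each command is echoed instead of executed.
run() { if [ "$APPLY" = 1 ]; then "$@"; else echo "$@"; fi; }

run ip netns add r1
run ip netns add r2
run ip link add name veth1.1 type veth peer name veth1.2
run ip link set dev veth1.2 netns r1               # one end into r1
run ip netns exec r1 ip link set dev veth1.2 name eth0
run ip netns exec r1 ifconfig eth0 10.1.0.2/24 up
run ip link set dev veth1.1 netns r2               # other end into r2
run ip netns exec r2 ifconfig veth1.1 10.1.0.3/24 up
run ip netns exec r2 ping -c2 10.1.0.2             # r2 -> r1 over the pair
```

The same pattern (veth pair, one end in a namespace, one end on a bridge) is exactly what Docker's bridge network does for each container.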

Creating a closed container: only the lo interface

Pass none to the --network option:

[root@docker2 ~]# docker container run --name t1 --rm -it --network none busybox
/ # ifconfig -a
lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Creating a bridged container:

Pass bridge to the --network option.

-h sets the container's hostname:

[root@docker2 ~]# docker container run --name t1 --rm -it -h t1.uscwifi.cn --network bridge busybox
/ # hostname
t1.uscwifi.cn
/ # ifconfig -a
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:04  
          inet addr:172.17.0.4  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:648 (648.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Specifying DNS when creating a container

--name sets the container's name

--rm removes the container as soon as it exits

-it runs the container interactively with a terminal attached

-h sets the container's hostname

--network selects the container's network

--dns sets the container's DNS server

--dns-search sets the container's DNS search domain

[root@docker2 ~]# docker container run --name t1 --rm -it -h t1.uscwifi.cn --network bridge --dns 114.114.114.114 --dns-search uscwifi.cn busybox
/ # hostname
t1.uscwifi.cn
/ # cat /etc/resolv.conf 
search uscwifi.cn
nameserver 114.114.114.114
/ # 

Adding a hosts entry when creating a container:

Use --add-host:

[root@docker2 ~]# docker container run --name t1 --rm -it -h t1.uscwifi.cn --network bridge --dns 114.114.114.114 --dns-search uscwifi.cn --add-host uscwifi.cn:1.1.1.1 busybox
/ # cat /etc/hosts
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
1.1.1.1	uscwifi.cn
172.17.0.4	t1.uscwifi.cn t1

Publishing container ports at creation time:

  • -p <containerPort>
    • Maps the container port to a dynamic port on all host addresses
  • -p <hostPort>:<containerPort>
    • Maps container port <containerPort> to the given host port <hostPort> on all host addresses
  • -p <hostIP>::<containerPort>
    • Maps the container port to a dynamic port on the given host address <hostIP>
  • -p <hostIP>:<hostPort>:<containerPort>
    • Maps container port <containerPort> to host port <hostPort> on the given host address <hostIP>
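To make the four forms concrete, here is a tiny helper that just prints the -p flag each combination produces. The name publish_flag is made up for illustration; it builds the flag string and never runs docker.

```shell
# Print the -p flag each publish form produces (illustration only).
publish_flag() {
  case $# in
    1) echo "-p $1" ;;         # <containerPort>: dynamic port, all host addrs
    2) echo "-p $1:$2" ;;      # <hostPort>:<containerPort>
    3) echo "-p $1:$2:$3" ;;   # <hostIP>:<hostPort>:<containerPort>; $2 may be empty
  esac
}

publish_flag 80                     # -> -p 80
publish_flag 8080 80                # -> -p 8080:80
publish_flag 192.168.2.167 "" 80    # -> -p 192.168.2.167::80
publish_flag 192.168.2.167 80 80    # -> -p 192.168.2.167:80:80
```

Note that the hostIP-with-dynamic-port form is just the three-part form with an empty middle field, which is why it is written with two colons.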

Dynamic (random) publishing:

[root@docker2 ~]# docker container run --name t1 --rm -p 80 uscwifi/httpd:v0.2
...
In a new shell:
[root@docker2 ~]# docker inspect t1 | grep ipaddress
[root@docker2 ~]# docker inspect t1 | grep -i  ipaddress
            "SecondaryIPAddresses": null,
            "IPAddress": "172.17.0.4",
                    "IPAddress": "172.17.0.4",
[root@docker2 ~]# curl 172.17.0.4
<h1>Welcome to busybox!<h1>

The published host port can be checked with iptables -t nat -vnL, docker port, or docker ps:

[root@docker2 ~]# docker port t1
80/tcp -> 0.0.0.0:32769
[root@docker2 ~]# docker ps
CONTAINER ID        IMAGE                COMMAND                  CREATED             STATUS              PORTS                   NAMES
cbb51165051f        uscwifi/httpd:v0.2   "/bin/httpd -f -h /d…"   5 minutes ago       Up 5 minutes        0.0.0.0:32768->80/tcp   t1
[root@docker2 ~]# curl localhost:32768
<h1>Welcome to busybox!<h1>

Publish a container port on a dynamic port of a specific host address:

[root@docker2 ~]# docker container run --name t1 --rm -p 192.168.2.167::80 uscwifi/httpd:v0.2

In a new shell, test:

[root@docker2 ~]# curl localhost:32768
curl: (7) Failed connect to localhost:32768; Connection refused
[root@docker2 ~]# curl 192.168.2.167:32768
<h1>Welcome to busybox!<h1>

Publish the container's port 80 on port 80 of all host addresses:

[root@docker2 ~]# docker container run --name t1 --rm -p 80:80 uscwifi/httpd:v0.2

In a new shell, test:

[root@docker2 ~]# ss -ltunp | grep :80
tcp    LISTEN     0      128      :::80                   :::*                   users:(("docker-proxy",pid=18460,fd=4))
[root@docker2 ~]# docker port t1
80/tcp -> 0.0.0.0:80
[root@docker2 ~]# curl localhost:80
<h1>Welcome to busybox!<h1>

Likewise, publish the container's port 80 on port 80 of a specific host address:

[root@docker2 ~]# docker container run --name t1 --rm -p 192.168.2.167:80:80 uscwifi/httpd:v0.2

Joined (container-mode) networking:

Create the t1 container:

[root@docker2 ~]# docker container run --name t1 --rm  uscwifi/httpd:v0.2

In a new shell, create the t2 container, joining it to t1's network:

[root@docker2 ~]# docker container run --name t2 --network container:t1 --rm -it  busybox

The two containers see the same interface and IP:

[root@docker2 ~]# docker container run --name t2  --network container:t1 --rm -it  busybox
/ # ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:04  
          inet addr:172.17.0.4  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:648 (648.0 B)  TX bytes:0 (0.0 B)
[root@docker2 ~]# docker container run --name t1 -h t1 --rm  -it busybox
/ # hostname
t1
/ # ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:04  
          inet addr:172.17.0.4  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:648 (648.0 B)  TX bytes:0 (0.0 B)
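What "joined" means at the kernel level is that both container processes point at the same network namespace. On a Linux host this is visible under /proc: two processes share a network namespace exactly when their ns/net links resolve to the same inode. A read-only sketch (the docker inspect lines are shown as comments because they need a running daemon):

```shell
# A process's network namespace is the inode behind /proc/<pid>/ns/net.
# Inspect the current shell's own namespace as a runnable example:
if [ -e /proc/$$/ns/net ]; then
  readlink /proc/$$/ns/net        # prints something like net:[4026531992]
fi

# For the joined containers t1 and t2 one would compare (root + docker):
#   readlink /proc/$(docker inspect -f '{{.State.Pid}}' t1)/ns/net
#   readlink /proc/$(docker inspect -f '{{.State.Pid}}' t2)/ns/net
# Identical output means they share one network stack, hence one IP.
```

This is also why t1 and t2 can reach each other over 127.0.0.1: loopback belongs to the shared namespace, not to either container individually.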

Creating an open (host-network) container: the container shares the host's network stack

[root@docker2 ~]# docker container run --name t2  --network host  --rm -it  busybox
/ # ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:2B:B1:12:FC  
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          inet6 addr: fe80::42:2bff:feb1:12fc/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:67 errors:0 dropped:0 overruns:0 frame:0
          TX packets:80 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:7029 (6.8 KiB)  TX bytes:6200 (6.0 KiB)

ens33     Link encap:Ethernet  HWaddr 00:0C:29:AB:C9:4B  
          inet addr:192.168.183.167  Bcast:192.168.183.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:feab:c94b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:283 errors:0 dropped:0 overruns:0 frame:0
          TX packets:236 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:38585 (37.6 KiB)  TX bytes:33422 (32.6 KiB)
......

At this point, running nginx in such a container makes it reachable directly at the host's IP.


Customizing the docker0 bridge's network properties

[root@docker1 ~]# vim /etc/docker/daemon.json 
[root@docker1 ~]# cat /etc/docker/daemon.json 
{
  "registry-mirrors": ["https://7f28zkr3.mirror.aliyuncs.com"],
  "bip":"10.0.0.1/16"
}
[root@docker1 ~]# systemctl restart docker.service 
[root@docker1 ~]# ifconfig docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 10.0.0.1  netmask 255.255.0.0  broadcast 10.0.255.255
        ether 02:42:3a:b1:0d:27  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
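Setting bip to 10.0.0.1/16 is what produces the netmask 255.255.0.0 and broadcast 10.0.255.255 that ifconfig reports above. As a sanity check, the mask/broadcast arithmetic can be done in plain shell (IPv4 only, no external tools; cidr_info is a made-up helper name):

```shell
# Derive netmask and broadcast from a CIDR value like the bip above.
cidr_info() {
  ip=${1%/*}; prefix=${1#*/}
  oldifs=$IFS; IFS=.; set -- $ip; IFS=$oldifs       # split into 4 octets
  addr=$(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
  mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
  bcast=$(( addr | (~mask & 0xFFFFFFFF) ))          # host bits all set
  dotted() { echo "$(( $1 >> 24 & 255 )).$(( $1 >> 16 & 255 )).$(( $1 >> 8 & 255 )).$(( $1 & 255 ))"; }
  echo "netmask $(dotted "$mask")  broadcast $(dotted "$bcast")"
}

cidr_info 10.0.0.1/16    # -> netmask 255.255.0.0  broadcast 10.0.255.255
```

The same function confirms the mybr0 values used later in this post: cidr_info 172.26.0.1/24 gives netmask 255.255.255.0 and broadcast 172.26.0.255.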

dockerd is the server side of Docker's client/server model; by default it listens only on the Unix socket /var/run/docker.sock. To also listen on a TCP socket:

On node1, edit the daemon.json file:

[root@docker1 run]# vim /etc/docker/daemon.json 
[root@docker1 run]# cat /etc/docker/daemon.json 
{
  "registry-mirrors": ["https://7f28zkr3.mirror.aliyuncs.com"],
  "bip":"10.0.0.1/16",
  "hosts":["tcp://0.0.0.0:2375","unix:///var/run/docker.sock"]
}
[root@docker1 run]# systemctl restart docker.service 
[root@docker1 run]# ss -ltunp | grep :2375
tcp    LISTEN     0      128      :::2375                 :::*                   users:(("dockerd",pid=14715,fd=5))

Connect from node2:

[root@docker2 ~]# docker -H 192.168.2.163:2375 ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[root@docker2 ~]# docker -H 192.168.2.163:2375 images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
uscwifi/httpd       v0.2                a83a2c1ac8b3        7 hours ago         1.15MB
uscwifi/httpd       v0.1-1              71e8e2f3a3a5        8 hours ago         1.15MB
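Instead of passing -H on every command, the client can read the endpoint from the DOCKER_HOST environment variable (a standard docker CLI variable). A sketch, using node1's address from this walkthrough; the docker calls are left as comments since they need the remote daemon reachable:

```shell
# Point the docker CLI at the remote daemon once, instead of -H each time.
export DOCKER_HOST=tcp://192.168.2.163:2375

# Subsequent client commands now go to the remote dockerd, e.g.:
#   docker ps
#   docker images
echo "client will use: $DOCKER_HOST"
```

Be aware that tcp://0.0.0.0:2375 is unauthenticated and unencrypted; expose it only on a trusted network (the usual hardening is TLS on port 2376).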

Operating on Docker networks

Help:

[root@docker2 ~]# docker network --help

Usage:	docker network COMMAND

Manage networks

Commands:
  connect     Connect a container to a network
  create      Create a network
  disconnect  Disconnect a container from a network
  inspect     Display detailed information on one or more networks
  ls          List networks
  prune       Remove all unused networks
  rm          Remove one or more networks

Run 'docker network COMMAND --help' for more information on a command.

Network drivers listed by docker info:

[root@docker2 ~]# docker info | grep -i network
 Network: bridge host macvlan null overlay

Creating a custom bridge:

[root@docker1 ~]# docker network create --help

Usage:	docker network create [OPTIONS] NETWORK

Create a network

Options:
      --attachable           Enable manual container attachment
      --aux-address map      Auxiliary IPv4 or IPv6 addresses used by Network driver (default map[])
      --config-from string   The network from which copying the configuration
      --config-only          Create a configuration only network
  -d, --driver string        Driver to manage the Network (default "bridge")
      --gateway strings      IPv4 or IPv6 Gateway for the master subnet
      --ingress              Create swarm routing-mesh network
      --internal             Restrict external access to the network
      --ip-range strings     Allocate container ip from a sub-range
      --ipam-driver string   IP Address Management Driver (default "default")
      --ipam-opt map         Set IPAM driver specific options (default map[])
      --ipv6                 Enable IPv6 networking
      --label list           Set metadata on a network
  -o, --opt map              Set driver specific options (default map[])
      --scope string         Control the network's scope
      --subnet strings       Subnet in CIDR format that represents a network segment

Create a custom Docker bridge named mybr0, with subnet 172.26.0.0/24 and gateway 172.26.0.1:

[root@docker1 ~]# docker network create --subnet 172.26.0.0/24 --gateway 172.26.0.1 mybr0
a11c8e6fbeacc9d3260e84eca3408f6ae43e60c5130a21fe92fbebd0e4b5d587
[root@docker1 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
d0cc6c81036a        bridge              bridge              local
3dcb447c7eaf        host                host                local
a11c8e6fbeac        mybr0               bridge              local
70355c5e8a7a        none                null                local

Create a container on mybr0:

[root@docker1 ~]# docker container run -it --rm --network mybr0 busybox
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
b4a6e23922dd: Pull complete 
Digest: sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796
Status: Downloaded newer image for busybox:latest
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:1A:00:02  
          inet addr:172.26.0.2  Bcast:172.26.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:13 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:1086 (1.0 KiB)  TX bytes:0 (0.0 B)

Create a second container on the default bridge:

[root@docker1 ~]# docker container run -it --rm --name t2 --network bridge busybox
/ # ifconfig 
eth0      Link encap:Ethernet  HWaddr 02:42:0A:00:00:02  
          inet addr:10.0.0.2  Bcast:10.0.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:12 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:1016 (1016.0 B)  TX bytes:0 (0.0 B)

For t1 (on mybr0) to communicate with t2 (on the default docker0 bridge), the host must route between the two bridges: enable kernel IP forwarding, and make sure the iptables rules Docker installs to isolate its bridges allow the traffic.

Reference: https://cloud.tencent.com/developer/article/1139755
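Whether forwarding is already on can at least be checked read-only from /proc; actually enabling it needs root, so that part is shown as comments:

```shell
# Check (read-only) whether the kernel forwards IPv4 packets between
# interfaces; required for traffic to cross from mybr0 to docker0.
f=/proc/sys/net/ipv4/ip_forward
if [ -r "$f" ]; then
  echo "ip_forward=$(cat "$f")"    # 0 = off, 1 = on
fi

# Enabling it requires root:
#   sysctl -w net.ipv4.ip_forward=1
# or persistently: net.ipv4.ip_forward = 1 in /etc/sysctl.conf, then sysctl -p
```

Forwarding alone is not always enough: Docker also maintains iptables chains that drop traffic between its bridges, and those rules must permit the flow as well.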
