Docker Network (how containers communicate)

Environment: two CentOS 7 virtual hosts, docker1 and docker2, each with Docker installed.

First make sure the two hosts can reach each other and the outside network. A static IP can be configured by editing /etc/sysconfig/network-scripts/ifcfg-ens33 (on CentOS 7, restart the network service afterwards with systemctl restart network):

[miller@docker4 network-scripts]$ cat ifcfg-ens33 
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="b673928c-4590-4dd6-acc3-577f8312ef46"
DEVICE="ens33"
ONBOOT="yes"
IPV6_PRIVACY="no"
IPADDR=192.168.42.22
NETMASK=255.255.255.0
GATEWAY=192.168.42.2
DNS1=8.8.8.8
DNS2=114.114.114.114

Use ping to check that an IP address is reachable.

Use telnet to check that a service port is open. Note that telnet takes the host and port as two separate arguments, not host:port:

[miller@docker4 network-scripts]$ telnet 192.168.42.23:5000
telnet: 192.168.42.23:5000: Name or service not known
192.168.42.23:5000: Unknown host
[miller@docker4 network-scripts]$ telnet 192.168.42.23 5000
Trying 192.168.42.23...
Connected to 192.168.42.23.
Escape character is '^]'. 
# The output above shows that port 5000 on 192.168.42.23 is reachable: the flask web app
# running inside the Docker container is mapped to port 5000 on that host.
[miller@docker4 network-scripts]$ telnet 192.168.42.23 500
Trying 192.168.42.23...
telnet: connect to address 192.168.42.23: Connection refused
# This fails: port 500 is not open.
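The same reachability check can be scripted without telnet. A minimal sketch using bash's built-in /dev/tcp pseudo-device; the check_port helper name is my own, and the host/port are the example values from above:

```shell
# Test whether a TCP port is reachable using bash's /dev/tcp redirection.
# Opens fd 3 to host:port; success means something is listening there.
check_port() {
  local host=$1 port=$2
  if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} reachable"
  else
    echo "${host}:${port} unreachable"
  fi
}

check_port 192.168.42.23 5000   # substitute your own host and port
```

Unlike telnet, this leaves no interactive session behind, so it is handy in scripts.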

 

############################## Linux network namespaces ##############################

ip netns list          # list the network namespaces on the current host

ip netns add test1     # add a network namespace named test1

ip netns delete test1  # delete the network namespace test1

[miller@docker4 python-flask]$ sudo ip netns add test1  # add a test1 network namespace

[miller@docker4 python-flask]$ sudo ip netns exec test1 ip addr  # run the ip addr command inside the test1 namespace via exec
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000  # the loopback interface is DOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00


[miller@docker4 python-flask]$ sudo ip netns exec test1 ip link set dev lo up  # bring the loopback interface up
[miller@docker4 python-flask]$ sudo ip netns exec test1 ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
# The state shows UNKNOWN rather than UP: a link only comes fully up when both ends are connected.
# Next, connect the two network namespaces test1 and test2.

 

######################################################################################

[miller@docker4 python-flask]$ sudo ip netns list
test2
test1

[miller@docker4 python-flask]$ sudo ip netns exec test1 ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00


[miller@docker4 python-flask]$ sudo ip link add veth-test1 type veth peer name veth-test2  # add a veth pair on the host
[miller@docker4 python-flask]$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether 00:0c:29:00:d0:42 brd ff:ff:ff:ff:ff:ff
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
link/ether 02:42:2d:ec:8c:ee brd ff:ff:ff:ff:ff:ff
9: veth0d99baa@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
link/ether ee:00:f8:b4:a5:d5 brd ff:ff:ff:ff:ff:ff link-netnsid 0

# the two links just added:
10: veth-test2@veth-test1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 2e:d5:94:91:fd:97 brd ff:ff:ff:ff:ff:ff
11: veth-test1@veth-test2: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 7e:fb:1b:d6:d9:6f brd ff:ff:ff:ff:ff:ff

 

# Move each end of the veth pair into its namespace: veth-test1 into test1, veth-test2 into test2

[miller@docker4 python-flask]$ sudo ip link set veth-test1 netns test1
[miller@docker4 python-flask]$ sudo ip link set veth-test2 netns test2

[miller@docker4 python-flask]$ sudo ip netns exec test1 ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
11: veth-test1@if10: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 7e:fb:1b:d6:d9:6f brd ff:ff:ff:ff:ff:ff link-netnsid 1
[miller@docker4 python-flask]$ sudo ip netns exec test2 ip link
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
10: veth-test2@if11: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 2e:d5:94:91:fd:97 brd ff:ff:ff:ff:ff:ff link-netnsid 0

 

# Assign the IP address 192.168.1.1/24 to the veth-test1 interface in test1
# Assign the IP address 192.168.1.2/24 to the veth-test2 interface in test2

[miller@docker4 python-flask]$ sudo ip netns exec test1 ip addr add 192.168.1.1/24 dev veth-test1
[miller@docker4 python-flask]$ sudo ip netns exec test2 ip addr add 192.168.1.2/24 dev veth-test2

 

# Start both interfaces of the two namespaces

[miller@docker4 python-flask]$ sudo ip netns exec test1 ip link set dev veth-test1 up
[miller@docker4 python-flask]$ sudo ip netns exec test2 ip link set dev veth-test2 up

 

# Both network namespaces have IP addresses

[miller@docker4 python-flask]$ sudo ip netns exec test2 ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
10: veth-test2@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 2e:d5:94:91:fd:97 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.1.2/24 scope global veth-test2
valid_lft forever preferred_lft forever
inet6 fe80::2cd5:94ff:fe91:fd97/64 scope link
valid_lft forever preferred_lft forever
[miller@docker4 python-flask]$ sudo ip netns exec test1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
11: veth-test1@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 7e:fb:1b:d6:d9:6f brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet 192.168.1.1/24 scope global veth-test1
valid_lft forever preferred_lft forever
inet6 fe80::7cfb:1bff:fed6:d96f/64 scope link
valid_lft forever preferred_lft forever

# Ping from each side; both directions work.
[miller@docker4 python-flask]$ sudo ip netns exec test1 ping 192.168.1.2
PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
64 bytes from 192.168.1.2: icmp_seq=1 ttl=64 time=0.079 ms
64 bytes from 192.168.1.2: icmp_seq=2 ttl=64 time=0.072 ms
64 bytes from 192.168.1.2: icmp_seq=3 ttl=64 time=0.078 ms
^C
--- 192.168.1.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.072/0.076/0.079/0.007 ms
[miller@docker4 python-flask]$ sudo ip netns exec test2 ping 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.068 ms
64 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=0.076 ms
^C
--- 192.168.1.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.068/0.072/0.076/0.004 ms
[miller@docker4 python-flask]$

# This is essentially how containers created by Docker communicate with each other.

Each container has its own network namespace.

Each namespace brings up an interface and is assigned an IP address; the interfaces of the two containers are then connected so the containers can communicate.
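The whole walkthrough above can be condensed into one script. This is a sketch assuming iproute2 is available and the script runs as root; the names demo1, demo2, v-demo1, v-demo2 and the 192.168.100.0/24 subnet are my own, chosen so as not to clash with the test1/test2 pair above:

```shell
# Create two namespaces, join them with a veth pair, and ping across.
netns_demo() {
  ip netns add demo1 || return 1
  ip netns add demo2
  ip link add v-demo1 type veth peer name v-demo2   # the virtual "cable"
  ip link set v-demo1 netns demo1                   # one end into demo1
  ip link set v-demo2 netns demo2                   # the other end into demo2
  ip netns exec demo1 ip addr add 192.168.100.1/24 dev v-demo1
  ip netns exec demo2 ip addr add 192.168.100.2/24 dev v-demo2
  ip netns exec demo1 ip link set dev v-demo1 up
  ip netns exec demo2 ip link set dev v-demo2 up
  ip netns exec demo1 ping -c 2 192.168.100.2       # cross-namespace ping
  local rc=$?
  ip netns delete demo1 2>/dev/null                 # clean up
  ip netns delete demo2 2>/dev/null
  return $rc
}

if [ "$(id -u)" -eq 0 ] && command -v ip >/dev/null; then
  netns_demo || echo "demo could not run in this environment"
else
  echo "requires root and the iproute2 tools"
fi
```

Deleting a namespace automatically removes the veth end inside it, so the cleanup at the bottom tears everything down.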

 

Docker network types:

    single-host: - Bridge network
                 - Host network
                 - None network

    multi-host:  - Overlay network

docker network ls # List the networks that docker has on this machine

[miller@docker4 python-flask]$ docker ps   # there is currently one container, named test1, running
CONTAINER ID   IMAGE                             COMMAND            CREATED          STATUS          PORTS        NAMES
3c1f23e3530e   caijiwandocker/flask-hello-world  "python3 app.py"   10 minutes ago   Up 10 minutes   5000/tcp     test1

 

[miller@docker4 python-flask]$ docker network ls
NETWORK ID     NAME     DRIVER   SCOPE
d6e9516055df   bridge   bridge   local
2d4c88aac3be   host     host     local
8f43716b9791   none     null     local

 


[miller@docker4 python-flask]$ docker network inspect d6e9516055df
[
    {
        "Name": "bridge",
        "Id": "d6e9516055dfad8303ab9e5d10f42ea01f358cc6c691db04fee61d93e33bbecb",
        "Created": "2020-04-11T11:21:18.757306845+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "3c1f23e3530eadb54eb2fc61e7e33b36627428ba25c3b037c70c8d9a5fd9954a": {
                "Name": "test1",   # the name of the container attached to this network on this host
                "EndpointID": "555d09be7c38f0107bfbd74ce1161836dc5a6bc043fbe0e61d5bd31f3b70e90a",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

From this you can see that the container test1 is attached to Docker's bridge network.

So what exactly is the bridge network?

# Below is the host's interface list. The first two interfaces are not relevant here; the last two are what matter.

[miller@docker4 python-flask]$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    ...
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    ...
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:2d:ec:8c:ee brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:2dff:feec:8cee/64 scope link 
       valid_lft forever preferred_lft forever
13: vethddd5c78@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether ca:7b:a3:98:58:2a brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::c87b:a3ff:fe98:582a/64 scope link 
       valid_lft forever preferred_lft forever

Just as two computers need a network cable to communicate, vethddd5c78@if12 acts as that cable: one end plugs into the docker0 bridge on the host, and the other end becomes an interface inside the container test1.
[miller@docker4 python-flask]$ docker exec test1 ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
12: eth0@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
[miller@docker4 python-flask]$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:00:d0:42 brd ff:ff:ff:ff:ff:ff
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default 
    link/ether 02:42:2d:ec:8c:ee brd ff:ff:ff:ff:ff:ff
13: vethddd5c78@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether ca:7b:a3:98:58:2a brd ff:ff:ff:ff:ff:ff link-netnsid 0

Like a network cable, the veth pair links docker0 on the host with the container that Docker started (note the pairing in the interface names: eth0@if13 inside the container matches vethddd5c78@if12 on the host).

So two containers can communicate even though there is no direct veth link between them: each is plugged into docker0, and the bridge forwards traffic between them.
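You can see these "cables" from the host side. A sketch, assuming iproute2 is available and the Docker daemon is running (it degrades to a message otherwise), listing every host-side veth attached to docker0:

```shell
# List the host-side veth interfaces enslaved to the docker0 bridge.
# Each entry corresponds to one running container's "network cable".
if command -v ip >/dev/null && ip link show docker0 >/dev/null 2>&1; then
  ip link show master docker0
  echo "(interfaces above are attached to docker0)"
else
  echo "docker0 bridge not present on this host"
fi
```

Starting a second container adds a second veth entry here, which is exactly how both containers end up on the same bridge.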

 

As for how docker0 reaches the external network through routing and address translation, I have not yet dug into the details.
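One pointer for that open question: the translation is done by an iptables MASQUERADE rule that Docker installs in the nat table's POSTROUTING chain for the bridge subnet (172.17.0.0/16 here), rewriting the source address of outbound container packets to the host's address. A sketch to inspect it, assuming root and iptables are available:

```shell
# Show the NAT rules Docker installs for outbound container traffic.
# Look for a line like: MASQUERADE  all  --  172.17.0.0/16  0.0.0.0/0
if [ "$(id -u)" -eq 0 ] && command -v iptables >/dev/null; then
  iptables -t nat -nL POSTROUTING 2>/dev/null || echo "nat table unavailable"
else
  echo "run as root with iptables installed to view the NAT rules"
fi
```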

 

Origin www.cnblogs.com/chengege/p/12679261.html