Detailed explanation of Docker container network configuration
docker container networking
Docker automatically provides three networks after installation, which you can view with the docker network ls command:
```
[root@localhost ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
cd97bb997b84   bridge    bridge    local
0a04824fc9b6   host      host      local
4dcb8fbdb599   none      null      local
```
Docker uses a Linux bridge to create a virtual bridge named docker0 on the host machine. When Docker starts a container, it assigns the container an IP address from the bridge's subnet, called the Container-IP, and the docker0 bridge acts as the default gateway for every container. Because all containers on the same host are attached to the same bridge, containers can communicate with each other directly via their Container-IPs.
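For example, here is a minimal sketch for checking the bridge and a container's Container-IP (the container name web below is only a placeholder):

```bash
# The docker0 bridge and its subnet on the host (typically 172.17.0.1/16)
ip addr show docker0

# Interfaces currently attached to the docker0 bridge
brctl show docker0

# Container-IP and default gateway of a running container named "web"
docker inspect -f '{{.NetworkSettings.IPAddress}} {{.NetworkSettings.Gateway}}' web
```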
4 network modes of docker
| network mode | configuration | description |
|---|---|---|
| host | --network host | The container shares the Network namespace with the host |
| container | --network container:NAME_OR_ID | The container shares the Network namespace with another container |
| none | --network none | The container has an independent Network namespace, but no network settings are made: no veth pair or bridge connection, no IP configuration, etc. |
| bridge | --network bridge | The default mode |
bridge mode
When the Docker daemon starts, it creates a virtual bridge named docker0 on the host, and Docker containers started on this host are connected to this virtual bridge. The bridge works much like a physical switch, so all containers on the host are joined to a layer-2 network through it.
Docker assigns the container an IP from the docker0 subnet and sets the IP address of docker0 as the container's default gateway. It creates a pair of virtual network interfaces (a veth pair) on the host, places one end inside the newly created container and names it eth0 (the container's network card), and leaves the other end on the host with a name like vethxxx, attaching it to the docker0 bridge. The bridge membership can be viewed with the brctl show command.
Bridge mode is Docker's default network mode; if the --network option is omitted, bridge mode is used. When you use docker run -p, Docker actually creates DNAT rules in iptables to implement port forwarding, which you can view with iptables -t nat -vnL.
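For example (a sketch; the image, container name and ports are only placeholders), publishing a port and then listing the nat table's DOCKER chain should reveal the corresponding DNAT rule:

```bash
# Publish container port 80 on host port 8080 (httpd is just an example image)
docker run -d --name web-dnat -p 8080:80 httpd

# The DOCKER chain of the nat table now holds a DNAT rule that rewrites
# traffic arriving on host port 8080 to <Container-IP>:80
iptables -t nat -vnL DOCKER
```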
The bridge mode is shown in the figure below:
Assuming that an nginx is running in docker2 in the above figure, let’s think about a few questions:
- Is it possible for two containers on the same host to communicate directly? For example, can docker1 directly access the nginx site in docker2?
- Can the host machine directly access the nginx site in docker2?
- How can this nginx site be accessed from node1 on another host? By exposing it via DNAT?
The Docker bridge is virtualized by the host and is not a real network device, so it cannot be addressed from the external network; this also means that external hosts cannot reach a container directly through its Container-IP. For a container to be accessible from outside, its port must be mapped to the host (port mapping), which is enabled with the -p or -P option when the container is created with docker run; the container is then accessed via [host IP]:[mapped host port].
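As a minimal sketch (httpd is just an example image that exposes port 80): publish with -P, find the mapped host port, then access the container through the host address:

```bash
# -P publishes every EXPOSEd port of the image to a random high port on the host
docker run -d --name web0 -P httpd

# Find out which host port container port 80 was mapped to
docker port web0 80

# Access the container through [host IP]:[mapped host port], for example:
# curl http://<host-ip>:<mapped-port>/
```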
container mode
This mode specifies that a newly created container shares a Network Namespace with an existing container instead of with the host. The new container does not create its own network card or configure its own IP; it shares the IP, port range, and so on with the specified container. Apart from the network, everything else, such as the file system and the process list, remains isolated between the two containers. Processes in the two containers can communicate through the lo interface. The container mode is shown in the figure below:
host mode
If the host mode is used when starting the container, the container will not obtain an independent Network Namespace, but will share a Network Namespace with the host. The container will not virtualize its own network card, configure its own IP, etc., but use the host's IP and port. However, other aspects of the container, such as the file system, process list, etc., are still isolated from the host.
A container using host mode can communicate with the outside world directly using the host's IP address, and service ports inside the container can use the host's ports without NAT. The biggest advantage of host mode is better network performance, but ports already in use on the Docker host cannot be used again, and network isolation is poor.
The Host mode is shown in the figure below:
none mode
Using none mode, the Docker container has its own Network Namespace, but no network configuration is performed for it. That is to say, the container has no network card, IP, routing or other information; we have to add a network card, configure an IP, and so on for the container ourselves.
In this network mode, the container has only the lo loopback interface and no other network cards. none mode can be specified with --network none when the container is created. This type of network cannot connect to the Internet; such a closed network is a good way to guarantee the container's security.
Application scenario:
- Starting a container to process data, such as converting data formats
- Some background computing and processing tasks
The none mode is shown in the figure below:
```
docker network inspect bridge    # view the detailed configuration of the bridge network
```
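If you only need specific fields, docker network inspect also accepts a Go-template filter via -f; for example (a sketch):

```bash
# Just the subnet and gateway configuration of the default bridge network
docker network inspect bridge -f '{{json .IPAM.Config}}'

# The containers currently attached to it
docker network inspect bridge -f '{{json .Containers}}'
```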
docker container network configuration
The Linux kernel implements the creation of namespaces
ip netns command
You can use the ip netns command to complete various operations on the Network Namespace. The ip netns command comes from the iproute installation package. Generally, the system will install it by default. If not, please install it yourself.
Note: The ip netns command requires sudo privileges to modify the network configuration.
The help information for the command can be viewed with ip netns help:
```
[root@localhost ~]# ip netns help
Usage: ip netns list
       ip netns add NAME
       ip netns set NAME NETNSID
       ip [-all] netns delete [NAME]
       ip netns identify [PID]
       ip netns pids NAME
       ip [-all] netns exec [NAME] cmd ...
       ip netns monitor
       ip netns list-id
```
By default, there is no Network Namespace in the Linux system, so the ip netns list command will not return any information.
Create a Network Namespace
Create a namespace named ns0 with the command:
```
[root@localhost ~]# ip netns list
[root@localhost ~]# ip netns add ns0
[root@localhost ~]# ip netns list
ns0
```
The newly created Network Namespace will appear in the /var/run/netns/ directory. If a namespace with the same name already exists, the command will report the error Cannot create namespace file "/var/run/netns/ns0": File exists.
```
[root@localhost ~]# ls /var/run/netns/
ns0
[root@localhost ~]# ip netns add ns0
Cannot create namespace file "/var/run/netns/ns0": File exists
```
Each Network Namespace has its own independent network interfaces, routing table, ARP table, iptables rules and other network-related resources.
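This isolation is easy to observe with the ns0 namespace created above (a quick sketch):

```bash
# ns0 has its own, initially empty routing table, independent of the host's
ip netns exec ns0 ip route

# ...and its own iptables rule set (requires iptables to be installed)
ip netns exec ns0 iptables -nvL
```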
Operating Network Namespace
The ip command provides the ip netns exec subcommand to execute commands in the corresponding Network Namespace.
View the network card information of the newly created Network Namespace
```
[root@localhost ~]# ip netns exec ns0 ip addr
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
```
As you can see, a lo loopback interface is created by default in the new Network Namespace, but it is down at this point. If you try to ping 127.0.0.1 (the address of the lo loopback interface), it reports Network is unreachable:
```
[root@localhost ~]# ip netns exec ns0 ping 127.0.0.1
connect: Network is unreachable
```
Enable the lo loopback NIC with the following command:
```
[root@localhost ~]# ip netns exec ns0 ip link set lo up
[root@localhost ~]# ip netns exec ns0 ping 127.0.0.1
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.029 ms
^C
--- 127.0.0.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1036ms
rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms
```
transfer device
We can transfer devices (such as a veth) between different Network Namespaces. Since a device can belong to only one Network Namespace at a time, it is no longer visible in the original Network Namespace after the transfer.
Among them, the veth device is a transferable device, while many other devices (such as lo, vxlan, ppp, bridge, etc.) cannot be transferred.
veth pair
The full name of veth pair is Virtual Ethernet Pair, which is a paired port. All data packets entering from one end of the pair of ports will come out from the other end, and vice versa. The purpose of introducing veth pair is to communicate directly in different Network Namespaces, and use it to directly connect two Network Namespaces.
create veth pair
```
[root@localhost ~]# ip link add type veth
[root@localhost ~]# ip a
4: veth0@veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0a:f4:e2:2d:37:fb brd ff:ff:ff:ff:ff:ff
5: veth1@veth0: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 5e:7e:f6:59:f0:4f brd ff:ff:ff:ff:ff:ff
```
As you can see, a veth pair has been added to the system, connecting the two virtual interfaces veth0 and veth1. At this point the veth pair is still down.
Implement communication between Network Namespaces
Next, we use the veth pair to implement communication between two different Network Namespaces. We have already created a Network Namespace named ns0; now create another one named ns1:
```
[root@localhost ~]# ip netns add ns1
[root@localhost ~]# ip netns list
ns1
ns0
```
Then we add veth0 to ns0 and veth1 to ns1
```
[root@localhost ~]# ip link set veth0 netns ns0
[root@localhost ~]# ip link set veth1 netns ns1
```
Then we configure the ip addresses for the veth pair respectively and enable them
```
[root@localhost ~]# ip netns exec ns0 ip link set veth0 up
[root@localhost ~]# ip netns exec ns0 ip addr add 192.0.0.1/24 dev veth0
[root@localhost ~]# ip netns exec ns1 ip link set veth1 up
[root@localhost ~]# ip netns exec ns1 ip addr add 192.0.0.2/24 dev veth1
```
View the status of the veth pair
```
[root@localhost ~]# ip netns exec ns0 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
4: veth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0a:f4:e2:2d:37:fb brd ff:ff:ff:ff:ff:ff link-netns ns1
    inet 192.0.0.1/24 scope global veth0
       valid_lft forever preferred_lft forever
    inet6 fe80::8f4:e2ff:fe2d:37fb/64 scope link
       valid_lft forever preferred_lft forever
```
```
[root@localhost ~]# ip netns exec ns1 ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
5: veth1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 5e:7e:f6:59:f0:4f brd ff:ff:ff:ff:ff:ff link-netns ns0
    inet 192.0.0.2/24 scope global veth1
       valid_lft forever preferred_lft forever
    inet6 fe80::5c7e:f6ff:fe59:f04f/64 scope link
       valid_lft forever preferred_lft forever
```
As can be seen above, we have successfully enabled the veth pair and assigned an IP address to each veth device. Now let's try to access the address in ns0 from ns1, and vice versa:
```
[root@localhost ~]# ip netns exec ns1 ping 192.0.0.1
PING 192.0.0.1 (192.0.0.1) 56(84) bytes of data.
64 bytes from 192.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms
64 bytes from 192.0.0.1: icmp_seq=2 ttl=64 time=0.041 ms
^C
--- 192.0.0.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.033/0.037/0.041/0.004 ms
[root@localhost ~]# ip netns exec ns0 ping 192.0.0.2
PING 192.0.0.2 (192.0.0.2) 56(84) bytes of data.
64 bytes from 192.0.0.2: icmp_seq=1 ttl=64 time=0.025 ms
64 bytes from 192.0.0.2: icmp_seq=2 ttl=64 time=0.025 ms
^C
--- 192.0.0.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1038ms
rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms
```
It can be seen that the veth pair successfully realizes the network interaction between two different Network Namespaces.
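When you are done experimenting, the namespaces can be cleaned up; deleting a namespace also removes the veth end that was moved into it (and, with it, its peer), as in this sketch:

```bash
# Tear down the experiment: removing the namespaces destroys the veth pair with them
ip netns del ns0
ip netns del ns1
ip netns list
```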
Four network mode configurations
bridge mode configuration
```
[root@localhost ~]# docker run -it --name ti --rm busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02
          inet addr:172.17.0.2  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:12 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1032 (1.0 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
```
Adding --network bridge when creating a container gives the same result as not specifying the --network option at all:
```
[root@localhost ~]# docker run -it --name t1 --network bridge --rm busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02
          inet addr:172.17.0.2  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:696 (696.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
```
none mode configuration
```
[root@localhost ~]# docker run -it --name t1 --network none --rm busybox
/ # ifconfig -a
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
```
container mode configuration
Start the first container
```
[root@localhost ~]# docker run -dit --name b3 busybox
af5ba32f990ebf5a46d7ecaf1eec67f1712bbef6ad7df37d52b7a8a498a592a0
[root@localhost ~]# docker exec -it b3 /bin/sh
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02
          inet addr:172.17.0.2  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:11 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:906 (906.0 B)  TX bytes:0 (0.0 B)
```
Start the second container
```
[root@localhost ~]# docker run -it --name b2 --rm busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:03
          inet addr:172.17.0.3  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:6 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:516 (516.0 B)  TX bytes:0 (0.0 B)
```
You can see that the IP address of the container named b2 is 172.17.0.3, which differs from the IP address of the first container; this means the two containers do not share a network. If we change the way the second container is started, we can make the IP of container b2 identical to the IP of container b3, that is, they share the IP but not the file system.
```
[root@localhost ~]# docker run -it --name b2 --rm --network container:b3 busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02
          inet addr:172.17.0.2  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:14 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1116 (1.0 KiB)  TX bytes:0 (0.0 B)
```
At this point we create a directory on the b3 container:
```
/ # mkdir /tmp/data
/ # ls /tmp
data
```
Check the /tmp directory on the b2 container and you will find no such directory, because the file systems remain isolated; only the network is shared.
Deploy a site on the b2 container
```
/ # echo 'hello world' > /tmp/index.html
/ # ls /tmp
index.html
/ # httpd -h /tmp
/ # netstat -antl
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 :::80                   :::*                    LISTEN
```
Use the local address on the b3 container to access this site:
```
/ # wget -O - -q 172.17.0.2:80
hello world
```
host mode configuration
Directly specify the mode as host when starting the container
```
[root@localhost ~]# docker run -it --name b2 --rm --network host busybox
/ # ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:B8:7F:8E:2C
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          inet6 addr: fe80::42:b8ff:fe7f:8e2c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3 errors:0 dropped:0 overruns:0 frame:0
          TX packets:20 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:116 (116.0 B)  TX bytes:1664 (1.6 KiB)

ens33     Link encap:Ethernet  HWaddr 00:0C:29:95:19:47
          inet addr:192.168.203.138  Bcast:192.168.203.255  Mask:255.255.255.0
          inet6 addr: fe80::2e61:1ea3:c05a:3d9b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:9626 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3950 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3779562 (3.6 MiB)  TX bytes:362386 (353.8 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

veth09ee47e Link encap:Ethernet  HWaddr B2:10:53:7B:66:AE
          inet6 addr: fe80::b010:53ff:fe7b:66ae/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3 errors:0 dropped:0 overruns:0 frame:0
          TX packets:19 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:158 (158.0 B)  TX bytes:1394 (1.3 KiB)
```
At this point, if we start an HTTP site in this container, we can access it in a browser directly via the host's IP, without any port mapping.
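As a short sketch (the image and container name are only examples; httpd listens on port 80), running httpd in host mode and probing it from the host shows that no -p mapping is needed:

```bash
# In host mode, httpd listens directly on the host's port 80 (port 80 must be free)
docker run -d --name web-host --network host httpd

# No NAT involved; the host address serves the container's site
curl http://192.168.203.138/
```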
Common Operations for Containers
View the hostname of the container
```
[root@localhost ~]# docker run -it --name t1 --network bridge --rm busybox
/ # hostname
48cb45a0b2e7
```
Inject the hostname at container startup
```
[root@localhost ~]# docker run -it --name t1 --network bridge --hostname ljl --rm busybox
/ # hostname
ljl
/ # cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.3      ljl
/ # cat /etc/resolv.conf
# Generated by NetworkManager
search localdomain
nameserver 192.168.203.2
/ # ping www.baidu.com
PING www.baidu.com (182.61.200.7): 56 data bytes
64 bytes from 182.61.200.7: seq=0 ttl=127 time=31.929 ms
64 bytes from 182.61.200.7: seq=1 ttl=127 time=41.062 ms
64 bytes from 182.61.200.7: seq=2 ttl=127 time=31.540 ms
^C
--- www.baidu.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 31.540/34.843/41.062 ms
```
Manually specify the DNS to use for the container
```
[root@localhost ~]# docker run -it --name t1 --network bridge --hostname ljl --dns 114.114.114.114 --rm busybox
/ # cat /etc/resolv.conf
search localdomain
nameserver 114.114.114.114
/ # nslookup -type=a www.baidu.com
Server:         114.114.114.114
Address:        114.114.114.114:53

Non-authoritative answer:
www.baidu.com   canonical name = www.a.shifen.com
Name:   www.a.shifen.com
Address: 182.61.200.6
Name:   www.a.shifen.com
Address: 182.61.200.7
```
Manually inject the hostname-to-IP address mapping into the /etc/hosts file
```
[root@localhost ~]# docker run -it --name t1 --network bridge --hostname ljl --add-host www.a.com:1.1.1.1 --rm busybox
/ # cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
1.1.1.1 www.a.com
172.17.0.3      ljl
```
open container port
When executing docker run, there is a -p option, which can map the application port in the container to the host, so that the external host can access the application in the container by accessing a certain port of the host.
The -p option can be used multiple times, and the port it can expose must be the port that the container is actually listening on.
The usage format of the -p option:
- -p containerPort
  Maps the specified container port to a dynamic port on all addresses of the host
- -p hostPort:containerPort
  Maps the container port containerPort to the specified host port hostPort
- -p ip::containerPort
  Maps the container port containerPort to a dynamic port on the specified host IP ip
- -p ip:hostPort:containerPort
  Maps the container port containerPort to the port hostPort on the specified host IP ip
A dynamic port refers to a random port, and the specific mapping result can be viewed with the docker port command.
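Here is a compact sketch of the four forms (the host IP, the extra ports and the httpd image are only examples; httpd is convenient because it listens on port 80):

```bash
# 1. -p containerPort: container port 80 -> a random port on all host addresses
docker run -d --name w1 -p 80 httpd

# 2. -p hostPort:containerPort: container port 80 -> host port 8080 on all addresses
docker run -d --name w2 -p 8080:80 httpd

# 3. -p ip::containerPort: container port 80 -> a random port on one host address
docker run -d --name w3 -p 192.168.203.138::80 httpd

# 4. -p ip:hostPort:containerPort: container port 80 -> port 8081 on one host address
docker run -d --name w4 -p 192.168.203.138:8081:80 httpd

# Check which host ports were actually chosen for the dynamic mappings
docker port w1
docker port w3
```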
```
[root@localhost ~]# docker run -dit --name web1 -p 192.168.203.138::80 httpd
e97bc1774e40132659990090f0e98a308a7f83986610ca89037713e9af8a6b9f
[root@localhost ~]# docker ps
CONTAINER ID   IMAGE     COMMAND              CREATED          STATUS          PORTS                           NAMES
e97bc1774e40   httpd     "httpd-foreground"   6 seconds ago    Up 5 seconds    192.168.203.138:49153->80/tcp   web1
af5ba32f990e   busybox   "sh"                 48 minutes ago   Up 48 minutes                                   b3
[root@localhost ~]# ss -antl
State   Recv-Q   Send-Q       Local Address:Port      Peer Address:Port   Process
LISTEN  0        128        192.168.203.138:49153          0.0.0.0:*
LISTEN  0        128                0.0.0.0:22             0.0.0.0:*
LISTEN  0        128                   [::]:22                [::]:*
```
Let's open another terminal connection and see which host port the container's port 80 is mapped to.
```
[root@localhost ~]# docker port web1
80/tcp -> 192.168.203.138:49153
```
It can be seen that port 80 of the container is exposed to port 49153 of the host. At this time, we visit this port on the host to see if we can access the site in the container.
```
[root@localhost ~]# curl http://192.168.203.138:49153
<html><body><h1>It works!</h1></body></html>
```
The iptables firewall rules will be automatically generated when the container is created, and the rules will be automatically deleted when the container is deleted.
```
[root@localhost ~]# iptables -t nat -nvL
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target      prot opt in       out      source               destination
    3   164 DOCKER      all  --  *        *        0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target      prot opt in       out      source               destination

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target      prot opt in       out      source               destination
    4   261 MASQUERADE  all  --  *        !docker0 172.17.0.0/16        0.0.0.0/0
    0     0 MASQUERADE  tcp  --  *        *        172.17.0.3           172.17.0.3           tcp dpt:80

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target      prot opt in       out      source               destination
    2   120 DOCKER      all  --  *        *        0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain DOCKER (2 references)
 pkts bytes target      prot opt in       out      source               destination
    1    60 RETURN      all  --  docker0  *        0.0.0.0/0            0.0.0.0/0
    1    60 DNAT        tcp  --  !docker0 *        0.0.0.0/0            192.168.203.138      tcp dpt:49153 to:172.17.0.3:80
```
Map the container port to a random port of the specified IP
```
[root@localhost ~]# docker run -dit --name web1 -p 192.168.203.138::80 httpd
```
View port mapping on another terminal
```
[root@localhost ~]# docker port web1
80/tcp -> 192.168.203.138:49153
```
Customize the network attribute information of the docker0 bridge
To customize the network attribute information of the docker0 bridge, you need to modify the /etc/docker/daemon.json configuration file
```
[root@localhost ~]# cd /etc/docker/
[root@localhost docker]# vim daemon.json
{
    "registry-mirrors": ["https://4hygggbu.mirror.aliyuncs.com/"],
    "bip": "192.168.1.5/24"
}
[root@localhost docker]# systemctl daemon-reload
[root@localhost docker]# systemctl restart docker
```
To allow the Docker daemon to be controlled remotely, also add a TCP listener to the dockerd ExecStart line in /lib/systemd/system/docker.service and restart the service:
```
[root@localhost ~]# vim /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
[root@localhost ~]# systemctl daemon-reload
[root@localhost ~]# systemctl restart docker
```
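After the restarts, both changes can be verified from the host (a quick sketch):

```bash
# The bip change: docker0 should now carry the address configured in daemon.json
ip addr show docker0

# The remote socket: dockerd should now also be listening on TCP port 2375
ss -antl | grep 2375
```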
On the client side, pass the -H/--host option to the docker command to specify which host's Docker daemon to control:
```
[root@localhost ~]# docker -H 192.168.203.138:2375 ps
CONTAINER ID   IMAGE     COMMAND              CREATED             STATUS          PORTS                           NAMES
e97bc1774e40   httpd     "httpd-foreground"   30 minutes ago      Up 11 seconds   192.168.203.138:49153->80/tcp   web1
af5ba32f990e   busybox   "sh"                 About an hour ago   Up 14 seconds                                   b3
```
create new network
```
[root@localhost ~]# docker network create ljl -d bridge
883eda50812bb214c04986ca110dbbcb7600eba8b033f2084cd4d750b0436e12
[root@localhost ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
0c5f4f114c27   bridge    bridge    local
8c2d14f1fb82   host      host      local
883eda50812b   ljl       bridge    local
85ed12d38815   none      null      local
```
Create an additional custom bridge, different from docker0
```
[root@localhost ~]# docker network create -d bridge --subnet "192.168.2.0/24" --gateway "192.168.2.1" br0
af9ba80deb619de3167939ec5b6d6136a45dce90907695a5bc5ed4608d188b99
[root@localhost ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
af9ba80deb61   br0       bridge    local
0c5f4f114c27   bridge    bridge    local
8c2d14f1fb82   host      host      local
883eda50812b   ljl       bridge    local
85ed12d38815   none      null      local
```
Use the newly created custom bridge to create the container:
```
[root@localhost ~]# docker run -it --name b1 --network br0 busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:C0:A8:02:02
          inet addr:192.168.2.2  Bcast:192.168.2.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:11 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:962 (962.0 B)  TX bytes:0 (0.0 B)
```
Create another container, using the default bridge bridge:
```
[root@localhost ~]# docker run --name b2 -it busybox
/ # ls
bin   dev   etc   home  proc  root  sys   tmp   usr   var
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:C0:A8:01:03
          inet addr:192.168.1.3  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:6 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:516 (516.0 B)  TX bytes:0 (0.0 B)
```
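b1 (on br0, 192.168.2.0/24) and b2 (on the default bridge, here 192.168.1.0/24) sit on different bridges and cannot reach each other directly. One way to connect them, sketched below, is to attach one of the containers to the other network with docker network connect:

```bash
# Attach b2 to the br0 network as well; it gets a second interface with a 192.168.2.x address
docker network connect br0 b2

# From another terminal: b2 can now reach b1 over the br0 subnet (b1's address is 192.168.2.2 above)
docker exec b2 ping -c 2 192.168.2.2
```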