After installation, Docker automatically provides three networks, which can be listed with the docker network ls command:
[root@localhost ~]# docker network ls
NETWORK ID     NAME     DRIVER   SCOPE
cd97bb997b84 bridge bridge local
0a04824fc9b6 host host local
4dcb8fbdb599 none null local
Docker uses a Linux bridge to virtualize a Docker container bridge (docker0) on the host machine. When Docker starts a container, it assigns the container an IP address from the Docker bridge's network segment, called the Container-IP. The Docker bridge is also the default gateway for every container. Because containers on the same host attach to the same bridge, they can communicate with each other directly through their Container-IPs.
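The subnet and gateway of the default bridge can be confirmed with docker network inspect. A quick check (the addresses shown are the usual defaults and may differ on your host):
[root@localhost ~]# docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}' bridge
172.17.0.0/16 172.17.0.1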
2. Docker network mode
① bridge mode
When the Docker process starts, a virtual network bridge named docker0 is created on the host, and Docker containers started on this host connect to this virtual bridge. A virtual bridge works much like a physical switch, so all containers on the host are joined to a layer-2 network through it.
Docker assigns the container an IP from the docker0 subnet and sets the IP address of docker0 as the container's default gateway. It also creates a pair of virtual network interfaces, a veth pair, on the host: one end is placed inside the newly created container and named eth0 (the container's network interface), while the other end stays on the host with a name like vethxxx and is attached to the docker0 bridge. The attachment can be viewed with the brctl show command.
Bridge is Docker's default network mode; if you do not pass the --network option, bridge mode is used. When you use docker run -p, Docker actually creates DNAT rules in iptables to implement port forwarding, which can be viewed with iptables -t nat -vnL.
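These pieces can be verified on the host. A minimal sketch (the image, container name, and port are examples; brctl requires the bridge-utils package, and the veth name, bridge id, and counters will differ):
[root@localhost ~]# docker run -d --name demo -p 8080:80 nginx
[root@localhost ~]# brctl show docker0          # the container's vethxxx end is attached to docker0
bridge name     bridge id               STP enabled     interfaces
docker0         8000.02420a1b2c3d       no              veth1a2b3c4
[root@localhost ~]# iptables -t nat -vnL DOCKER # shows the DNAT rule created for -p 8080:80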
The bridge mode is shown in the figure below:
Assuming that an nginx is running in docker2 in the figure above, consider the following questions:
Can two containers on the same host communicate directly? For example, can docker1 directly access the nginx site in docker2?
Can the host machine directly access the nginx site in docker2?
How can node1 on another host access this nginx site? Through a DNAT rule?
The Docker bridge is virtualized by the host and is not a real network device, so it cannot be addressed from the external network, which also means the external network cannot reach a container directly through its Container-IP. If a container needs to be reachable from outside, you can map the container port to the host (port mapping), that is, enable it with the -p or -P option when creating the container with docker run, and then access the container via [host IP]:[container port].
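For example, a sketch using -P, which publishes every port the image EXPOSEs to a random host port (the image name is an example, and the mapped port shown is illustrative and will differ):
[root@localhost ~]# docker run -d --name web0 -P nginx
[root@localhost ~]# docker port web0
80/tcp -> 0.0.0.0:49154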
② container mode
This mode makes a newly created container share a Network Namespace with an existing container, rather than with the host. The new container does not create its own network interface or configure its own IP; instead it shares the IP, port range, and so on with the specified container. Apart from the network, everything else, such as the file system and process list, remains isolated between the two containers, and their processes can communicate through the lo loopback interface.
The container mode is shown in the figure below:
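A minimal sketch of container mode (names and images are examples): the second container sees exactly the same interfaces as the first:
[root@localhost ~]# docker run -d --name app1 busybox sleep 3600
[root@localhost ~]# docker run --rm --network container:app1 busybox ip addr
The second command prints app1's lo and eth0, with the same IP address that ip addr would show inside app1 itself, confirming the shared namespace.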
③ host mode
If the host mode is used when starting the container, the container will not obtain an independent Network Namespace, but will share a Network Namespace with the host. The container will not virtualize its own network card, configure its own IP, etc., but use the host's IP and port. However, other aspects of the container, such as the file system, process list, etc., are still isolated from the host.
A container using host mode can communicate with the outside world directly using the host's IP address, and services inside the container can use the host's ports without NAT. The biggest advantage of host mode is better network performance, but ports already in use on the Docker host can no longer be used, and network isolation is poor.
The Host mode is shown in the figure below:
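A sketch of host mode (busybox is just an example image):
[root@localhost ~]# docker run -it --rm --network host busybox ifconfig
ifconfig inside the container prints the host's own interfaces, including docker0 and the physical NIC.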
④ none mode
In none mode, the Docker container has its own Network Namespace, but no network configuration is performed for it. That is, the container has no network interface, IP, routing, or other such information. We must add a network interface and configure an IP for it ourselves.
In this network mode, the container has only the lo loopback interface and no other network interfaces. The none mode can be specified with --network none when the container is created. This type of network cannot connect to the Internet; such a closed network can well guarantee the security of the container.
Application scenarios:
Start a container to process data, such as converting data formats;
Some background computing and processing tasks.
The none mode is shown in the figure below:
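A sketch of none mode (busybox as the example image; output abridged and illustrative): only lo is present:
[root@localhost ~]# docker run --rm --network none busybox ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever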
3. Network configuration of Docker container
① Namespace creation in the Linux kernel
Various operations on a Network Namespace can be performed with the ip netns command, which comes from the iproute package and is usually installed by default; if not, please install it yourself. (Note: the ip netns command needs root/sudo permission to modify the network configuration.)
The help information of the command can be viewed with ip netns help:
[root@localhost ~]# ip netns help
Usage: ip netns list
ip netns add NAME
ip netns set NAME NETNSID
ip [-all] netns delete [NAME]
ip netns identify [PID]
ip netns pids NAME
ip [-all] netns exec [NAME] cmd ...
ip netns monitor
ip netns list-id
By default, there is no Network Namespace in the Linux system, so the ip netns list command will not return any information.
② Create Network Namespace
Create a namespace named ns0 with the command:
[root@localhost ~]# ip netns list
[root@localhost ~]# ip netns add ns0
[root@localhost ~]# ip netns list
ns0
The newly created Network Namespace will appear in the /var/run/netns/ directory. If a namespace with the same name already exists, the command reports the error Cannot create namespace file "/var/run/netns/ns0": File exists.
[root@localhost ~]# ls /var/run/netns/
ns0
[root@localhost ~]# ip netns add ns0
Cannot create namespace file "/var/run/netns/ns0": File exists
Each Network Namespace has its own independent network interfaces, routing table, ARP table, iptables rules, and other network-related resources.
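For instance, the new namespace's routing and ARP tables start out empty, independent of the host's (a quick check; both commands print nothing at this point):
[root@localhost ~]# ip netns exec ns0 ip route    # empty: ns0 has no routes yet
[root@localhost ~]# ip netns exec ns0 ip neigh    # empty: the ARP table is separate too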
③ Operate Network Namespace
The ip command provides the ip netns exec subcommand to execute commands in the corresponding Network Namespace.
View the network card information of the newly created Network Namespace:
[root@localhost ~]# ip netns exec ns0 ip addr
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
It can be seen that a lo loopback interface is created by default in a new Network Namespace, and that it is down at this point. Trying to ping the lo loopback address therefore prompts Network is unreachable:
[root@localhost ~]# ip netns exec ns0 ping 127.0.0.1
connect: Network is unreachable
(127.0.0.1 is the default loopback address.)
Enable the lo loopback NIC with the following command:
[root@localhost ~]# ip netns exec ns0 ip link set lo up
[root@localhost ~]# ip netns exec ns0 ping 127.0.0.1
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.029 ms
^C
--- 127.0.0.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1036ms
rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms
4. Moving devices between Network Namespaces
Devices (such as veth) can be moved between different Network Namespaces. Since a device can only belong to one Network Namespace at a time, it is no longer visible in the original Network Namespace after the move.
The veth device is movable in this way, while many other devices (such as lo, vxlan, ppp, bridge, etc.) are not.
① veth pair
The full name of veth pair is Virtual Ethernet Pair, which is a paired port. All data packets entering from one end of the pair of ports will come out from the other end, and vice versa.
veth pairs were introduced to allow direct communication between different Network Namespaces; a veth pair can be used to connect two Network Namespaces directly.
② Create veth pair
[root@localhost ~]# ip link add type veth
[root@localhost ~]# ip a
4: veth0@veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0a:f4:e2:2d:37:fb brd ff:ff:ff:ff:ff:ff
5: veth1@veth0: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 5e:7e:f6:59:f0:4f brd ff:ff:ff:ff:ff:ff
It can be seen that a veth pair has been added to the system, connecting the two virtual interfaces veth0 and veth1. At this point the veth pair is still in the DOWN state.
③ Realize communication between Network Namespaces
Use a veth pair to realize communication between two different Network Namespaces. A Network Namespace named ns0 was created above; next, create another Network Namespace named ns1:
[root@localhost ~]# ip netns add ns1
[root@localhost ~]# ip netns list
ns1
ns0
Then move veth0 into ns0 and veth1 into ns1:
[root@localhost ~]# ip link set veth0 netns ns0
[root@localhost ~]# ip link set veth1 netns ns1
Then enable each end of the veth pair and configure an IP address on it:
[root@localhost ~]# ip netns exec ns0 ip link set veth0 up
[root@localhost ~]# ip netns exec ns0 ip addr add 192.0.0.1/24 dev veth0
[root@localhost ~]# ip netns exec ns1 ip link set veth1 up
[root@localhost ~]# ip netns exec ns1 ip addr add 192.0.0.2/24 dev veth1
View the status of the veth pair:
[root@localhost ~]# ip netns exec ns0 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
4: veth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 0a:f4:e2:2d:37:fb brd ff:ff:ff:ff:ff:ff link-netns ns1
inet 192.0.0.1/24 scope global veth0
valid_lft forever preferred_lft forever
inet6 fe80::8f4:e2ff:fe2d:37fb/64 scope link
valid_lft forever preferred_lft forever
[root@localhost ~]# ip netns exec ns1 ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
5: veth1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 5e:7e:f6:59:f0:4f brd ff:ff:ff:ff:ff:ff link-netns ns0
inet 192.0.0.2/24 scope global veth1
valid_lft forever preferred_lft forever
inet6 fe80::5c7e:f6ff:fe59:f04f/64 scope link
valid_lft forever preferred_lft forever
As can be seen above, the veth pair has been successfully enabled and each veth device has been assigned a corresponding IP address. Try to access the IP address in ns0 from ns1:
[root@localhost ~]# ip netns exec ns1 ping 192.0.0.1
PING 192.0.0.1 (192.0.0.1) 56(84) bytes of data.
64 bytes from 192.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms
64 bytes from 192.0.0.1: icmp_seq=2 ttl=64 time=0.041 ms
^C
--- 192.0.0.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.033/0.037/0.041/0.004 ms
[root@localhost ~]# ip netns exec ns0 ping 192.0.0.2
PING 192.0.0.2 (192.0.0.2) 56(84) bytes of data.
64 bytes from 192.0.0.2: icmp_seq=1 ttl=64 time=0.025 ms
64 bytes from 192.0.0.2: icmp_seq=2 ttl=64 time=0.025 ms
^C
--- 192.0.0.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1038ms
rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms
It can be seen that the veth pair successfully enables network communication between two different Network Namespaces.
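When the experiment is done, the namespaces can be removed with ip netns delete (a cleanup sketch; deleting a namespace also destroys the veth end inside it, which removes its peer as well):
[root@localhost ~]# ip netns delete ns0
[root@localhost ~]# ip netns delete ns1
[root@localhost ~]# ip netns list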
5. Configuration of the four network modes
① bridge mode configuration
[root@localhost ~]# docker run -it --name ti --rm busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02
          inet addr:172.17.0.2  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:12 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1032 (1.0 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
Adding --network bridge when creating a container has the same effect as omitting the --network option. A minimal sketch (the container name t1 is an example):
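[root@localhost ~]# docker run -it --name t1 --network bridge --rm busybox
/ # ifconfig eth0        # shows an address from the docker0 subnet, the same as with no --network option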
In container mode, by contrast, two containers can share one network. You can see that the IP address of the container named b2 is 10.0.0.3, which is different from the IP address of the first container, meaning their networks are not shared. By starting b2 with --network container:b3 instead, b2 gets the same IP as the b3 container, that is, the two share the IP but not the file system.
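A sketch of that invocation (assuming a running container named b3, as above):
[root@localhost ~]# docker run -it --name b2 --rm --network container:b3 busybox
/ # ifconfig eth0        # prints the same IP address that b3 holds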
At this point create a directory on the b1 container:
/ # mkdir /tmp/data
/ # ls /tmp
data
Check the /tmp directory on the b2 container, and you will find that there is no such directory, because the file system is isolated; only the network is shared. Next, deploy a site on the b2 container:
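A minimal sketch using busybox's built-in httpd (the page content is an example):
/ # echo 'hello world' > /tmp/index.html
/ # httpd -h /tmp/
Because the network is shared, the container sharing this Network Namespace can reach the site through the loopback address:
/ # wget -O - -q http://127.0.0.1:80
hello world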
For host mode, start the container with --network host; at this time, if you start an http site in the container, you can directly use the host machine's IP in a browser to access the site in the container.
When executing docker run, there is a -p option, which can map the application port in the container to the host, so that the external host can access the application in the container by accessing a certain port of the host.
The -p option can be used multiple times, and the port it exposes must be a port the container actually listens on. The -p option supports the following formats:
-p <containerPort>: map the specified container port to a dynamic port on all addresses of the host;
-p <hostPort>:<containerPort>: map the container port containerPort to the specified host port hostPort;
-p <ip>::<containerPort>: map the container port containerPort to a dynamic port on the host address ip;
-p <ip>:<hostPort>:<containerPort>: map the container port containerPort to the port hostPort on the host address ip;
A dynamic port refers to a random port, and the specific mapping result can be viewed using the docker port command:
[root@localhost ~]# docker run -dit --name web1 -p 192.168.203.138::80 httpd
e97bc1774e40132659990090f0e98a308a7f83986610ca89037713e9af8a6b9f
[root@localhost ~]# docker ps
CONTAINER ID   IMAGE     COMMAND              CREATED          STATUS          PORTS                           NAMES
e97bc1774e40   httpd     "httpd-foreground"   6 seconds ago    Up 5 seconds    192.168.203.138:49153->80/tcp   web1
af5ba32f990e   busybox   "sh"                 48 minutes ago   Up 48 minutes                                   b3
[root@localhost ~]# ss -antl
State    Recv-Q   Send-Q     Local Address:Port      Peer Address:Port   Process
LISTEN   0        128      192.168.203.138:49153          0.0.0.0:*
LISTEN   0        128              0.0.0.0:22             0.0.0.0:*
LISTEN   0        128                 [::]:22                [::]:*
If the container had been started in the foreground (without -d), the terminal would remain occupied; in that case, open a new terminal connection to see which host port the container's port 80 is mapped to:
[root@localhost ~]# docker port web1
80/tcp ->192.168.203.138:49153
It can be seen that port 80 of the container is exposed on port 49153 of the host. Now visit this port on the host to see whether the site in the container can be accessed:
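For example with curl (the httpd image serves its default test page):
[root@localhost ~]# curl http://192.168.203.138:49153
<html><body><h1>It works!</h1></body></html>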
The iptables firewall rules are generated automatically when the container is created and are deleted automatically when the container is removed:
[root@localhost ~]# iptables -t nat -nvL
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in       out      source           destination
    3   164 DOCKER     all  --  *        *        0.0.0.0/0        0.0.0.0/0        ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in       out      source           destination

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in       out      source           destination
    4   261 MASQUERADE all  --  *        !docker0 172.17.0.0/16    0.0.0.0/0
    0     0 MASQUERADE tcp  --  *        *        172.17.0.3       172.17.0.3       tcp dpt:80

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in       out      source           destination
    2   120 DOCKER     all  --  *        *        0.0.0.0/0        !127.0.0.0/8     ADDRTYPE match dst-type LOCAL

Chain DOCKER (2 references)
 pkts bytes target     prot opt in       out      source           destination
    1    60 RETURN     all  --  docker0  *        0.0.0.0/0        0.0.0.0/0
    1    60 DNAT       tcp  --  !docker0 *        0.0.0.0/0        192.168.203.138  tcp dpt:49153 to:172.17.0.3:80
⑥ Customize the network attribute information of the docker0 bridge
To customize the network attribute information of the docker0 bridge, you need to modify the /etc/docker/daemon.json configuration file:
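A sketch of such a file (all values are examples; the core option is bip, which sets the IP address and netmask of the docker0 bridge itself, from which container addresses are then allocated):
{
    "bip": "192.168.1.1/24",
    "fixed-cidr": "192.168.1.0/25",
    "mtu": 1500,
    "dns": ["114.114.114.114", "8.8.8.8"]
}
Docker must be restarted for the change to take effect, e.g. on a systemd host:
[root@localhost ~]# systemctl restart docker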