Detailed explanation of Docker network mode

Table of contents

Docker network mode

1. Host mode

2. Container mode

3. None mode

4. Bridge mode

5. Overlay mode


Docker network mode

        Three networks are automatically created when you install Docker, and you can use the docker network ls command to list these networks.

[root@docker ~]# docker network ls
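On a fresh installation the list looks like the following (network IDs omitted, since they vary per machine):

```shell
docker network ls
# Typical output on a fresh install (IDs vary):
# NETWORK ID   NAME     DRIVER   SCOPE
# <id>         bridge   bridge   local
# <id>         host     host     local
# <id>         none     null     local
```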

        When we use docker run to create a container, we can use the --net option to specify the network mode of the container.

Docker has the following four network modes:

Host mode: specified with --net=host.

Container mode: specified with --net=container:NAME_or_ID.

None mode: specified with --net=none.

Bridge mode: specified with --net=bridge (the default setting).

1. Host mode

        Under the hood, Docker uses Linux Namespaces to isolate resources: a PID Namespace isolates processes, a Mount Namespace isolates the file system, a Network Namespace isolates the network, and so on.

        A Network Namespace provides an independent network environment, including network cards, routes, iptables rules, etc., isolated from other Network Namespaces.

        A Docker container is normally assigned its own independent Network Namespace. If the container is started in host mode, however, it does not get an independent Network Namespace; instead it shares the host's Network Namespace. The container does not virtualize its own network card or configure its own IP, but uses the host's IP and ports directly. For security reasons this network mode is not recommended.

        Suppose we start a container running a web application in host mode on the machine 192.168.100.131/24, listening on TCP port 80. Running ifconfig inside the container shows only the host's network information.

        To access the application in the container from the outside world, you can use 192.168.100.131:80 directly, without any NAT translation, just as if the application were running on the host itself.

        However, other aspects of the container, such as the file system, process list, etc., are still isolated from the host.

[root@docker ~]# docker run -itd --net=host --name=host busybox

[root@docker ~]# docker exec -it host ifconfig

[root@docker ~]# ifconfig

Note: the two outputs are exactly the same — the container shares the host's network stack.
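As a sketch of the scenario above — assuming an image named my-web-image that serves HTTP on port 80 (the image name is hypothetical) — the application is reachable at the host's address with no port mapping:

```shell
# Hypothetical image name; assumes it serves HTTP on port 80.
docker run -d --net=host --name=web my-web-image
# No -p flag or NAT needed: the server binds directly on the host's interfaces.
curl http://192.168.100.131:80/
```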

2. Container mode

        In this mode, a newly created container shares a Network Namespace with an existing container, rather than with the host.

        The new container does not create its own network card or configure its own IP; instead it shares the IP address, port range, and so on with the specified container.

        As before, everything other than the network — the file system, process list, etc. — remains isolated between the two containers. Processes in the two containers can communicate through the lo loopback device.

        Specify with --net=container:NAME_or_ID; all containers sharing the same network namespace see the same IP.

[root@docker ~]# docker run -itd --name=con1 busybox

[root@docker ~]# docker exec -it con1 ifconfig

[root@docker ~]# docker run -itd --net=container:con1 --name=con2 busybox

[root@docker ~]# docker exec -it con2 ifconfig

Note: like twins, the two containers show the same network card information.
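To see the shared namespace in action, a sketch assuming con1 and con2 from above are still running (busybox ships a small nc; the -z flag depends on the busybox build):

```shell
# con1 listens on port 5000; con2 reaches it over the shared lo device.
docker exec -d con1 nc -l -p 5000
docker exec con2 nc -z 127.0.0.1 5000 && echo "reachable over lo"
```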

3. None mode

        In this mode, the Docker container has its own Network Namespace, but Docker performs no network configuration for it.

        In other words, the container has no network card, IP address, routing, or other network information; we have to add a network card, configure an IP, and so on ourselves.

Specify with --net=none; no network is configured in this mode.

[root@docker ~]# docker run -itd --name=none --net=none busybox

[root@docker ~]# docker exec -it none ifconfig
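Wiring such a container by hand follows the same veth-pair idea used in bridge mode. A sketch (run as root on the host; the interface names and the 172.18.0.2/24 address are illustrative):

```shell
# Expose the container's network namespace to the ip command.
pid=$(docker inspect -f '{{.State.Pid}}' none)
mkdir -p /var/run/netns
ln -sf /proc/$pid/ns/net /var/run/netns/none
# Create a veth pair and move one end into the container's namespace.
ip link add veth-host type veth peer name veth-cont
ip link set veth-cont netns none
# Configure and bring up both ends (address is illustrative).
ip netns exec none ip addr add 172.18.0.2/24 dev veth-cont
ip netns exec none ip link set veth-cont up
ip link set veth-host up
```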

4. Bridge mode

        Bridge mode is Docker's default network setting and is a NAT network model. When the Docker daemon starts, it creates a docker0 bridge (a different bridge can be specified with the daemon's -b parameter). Whenever a container starts in bridge mode, Docker creates a pair of virtual network interfaces (a veth pair) for it: one end is placed in the container's Network Namespace, the other end is attached to docker0, which enables communication between the container and the host.

        In bridge mode, communication between a Docker container and the external network is controlled by iptables rules, which is also an important reason for the relatively low performance of Docker networking.

        Use iptables -vnL -t nat to view the NAT table; the container bridging rules appear in the DOCKER chain.

[root@docker ~]# iptables -vnL -t nat
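To watch Docker add such rules, publish a port from a bridge-mode container (a sketch; assumes the nginx image is available locally):

```shell
# Publish container port 80 on host port 8080 (bridge mode is the default).
docker run -d -p 8080:80 --name web nginx
# Docker inserts a DNAT rule for 8080->80 into the DOCKER chain:
iptables -t nat -nL DOCKER
```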

5. Overlay mode

        This is Docker's native cross-host, multi-subnet network model. When a new overlay network is created, Docker creates a Network Namespace on the host; inside it is a bridge, and on the bridge a vxlan interface. Each network occupies one vxlan ID. When a container joins the network, Docker allocates a veth pair, similar to bridge mode: one end sits in the container, the other end in that local Network Namespace.

        Suppose containers A, B, and C run on host A, while containers D and E run on host B. With the overlay network model, containers A, B, and D can be placed in one subnet, while containers C and E are in another subnet.

        Each overlay network has a vxlan ID, with values ranging from 256 to 1000. The vxlan tunnel connects all network sandboxes with the same ID into one subnet.
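Creating an overlay network requires swarm mode (or an external key-value store); a minimal sketch, with an illustrative subnet and network name:

```shell
docker swarm init
# --attachable lets plain "docker run" containers join the network.
docker network create -d overlay --attachable --subnet 10.0.9.0/24 my-overlay
docker run -itd --net=my-overlay --name=ov1 busybox
```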

Origin blog.csdn.net/2302_77582029/article/details/132106721