Detailed explanation of the overlay network in Docker

The following content is translated from the official Docker documentation.

The overlay network driver creates a distributed network spanning the hosts of multiple Docker daemons. This network sits on top of (overlays) the host-specific networks, allowing containers connected to it (including swarm service containers) to communicate securely. Docker transparently handles the routing of each packet to and from the correct Docker daemon host and the correct destination container.

When you initialize a cluster (swarm) or add a docker host to an existing cluster, two new networks are created on the host:

An overlay network called ingress, which handles control and data traffic related to swarm services. When you create a swarm service and do not connect it to a user-defined overlay network, it connects to the ingress network by default.

A bridge network called docker_gwbridge, which connects the individual Docker daemon to the other daemons participating in the swarm.

You can create user-defined overlay networks with the docker network create command, in the same way that you can create user-defined bridge networks. Services and containers can be connected to more than one network at a time, but they can only communicate with the other members of the networks they are connected to.

Although both swarm services and standalone containers can connect to an overlay network, their default behaviors and configurations differ. The rest of this topic is therefore divided into three parts: operations that apply to all overlay networks, operations for overlay networks used by swarm services, and operations for overlay networks used by standalone containers.

Operations applicable to all overlay networks

Create an overlay network

✅Prerequisites

Firewall rules required for the docker daemon using the overlay network

To enable docker hosts in an overlay network to communicate with each other, you need to open the following ports:

  1. TCP port 2377, used for communication related to cluster management

  2. TCP and UDP port 7946 for communication between nodes

  3. UDP port 4789, used for data transmission on the overlay network
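As an illustration, on a Linux host managed with ufw, opening these ports might look like the following sketch (ufw is an assumption here; adapt the commands to whatever firewall tooling your hosts actually use):

```shell
# Swarm cluster-management traffic (manager nodes)
sudo ufw allow 2377/tcp
# Node-to-node discovery and gossip traffic
sudo ufw allow 7946/tcp
sudo ufw allow 7946/udp
# VXLAN overlay data traffic
sudo ufw allow 4789/udp
```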

Before you can create an overlay network, you must either initialize your Docker daemon as a swarm manager with docker swarm init, or join it to an existing swarm with docker swarm join.
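For example, a sketch of initializing a single-node swarm on a manager host (the address 192.168.99.121 is a placeholder for your manager's own address):

```shell
# On the manager host: initialize the swarm, advertising this host's address
docker swarm init --advertise-addr 192.168.99.121

# docker swarm init prints a "docker swarm join --token ..." command;
# run that printed command on each worker host to add it to the swarm.
```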

Either way, this implicitly creates the default overlay network called ingress. You need to do this even if you never plan to use swarm services.

Afterward, you can create additional user-defined overlay networks.

To create an overlay network for use in cluster services, use the command shown below:

docker network create -d overlay my-overlay

To create an overlay network that swarm services can use, and that also allows standalone containers to communicate with standalone containers running on other Docker daemons, add the --attachable flag:

docker network create -d overlay --attachable my-attachable-overlay

You can specify IP address ranges, subnets, gateways and other options. See docker network create --help for details.
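As a sketch, a network with an explicit subnet and gateway might be created like this (the addresses below are illustrative, not recommendations):

```shell
# Create an overlay network with a custom subnet and gateway
docker network create -d overlay \
  --subnet 10.0.9.0/24 \
  --gateway 10.0.9.99 \
  my-custom-overlay
```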

Encrypted transmission over the overlay network

All swarm service management traffic is encrypted by default, using the AES algorithm in GCM mode. Manager nodes in the swarm rotate the encryption key every 12 hours.

To encrypt application data as well, add --opt encrypted when creating the network. This enables IPSEC encryption at the VXLAN level. This encryption imposes a non-negligible performance penalty, so you should test it before using it in production.

When you enable overlay encryption, Docker creates IPSEC tunnels between all the nodes where tasks are scheduled for services attached to the network. These tunnels also use the AES algorithm in GCM mode, and the encryption keys are rotated automatically every 12 hours.

❌Do not attach Windows nodes to an encrypted overlay network.

Overlay network encryption is not supported on Windows. If a Windows node attempts to connect to an encrypted overlay network, no error is reported, but the node cannot communicate with the other nodes.

Swarm-mode overlay networks and standalone containers

You can combine these overlay network features by using --opt encrypted together with --attachable, allowing unmanaged (standalone) containers to attach to the encrypted network:

docker network create --opt encrypted --driver overlay --attachable my-attachable-multi-host-network

Customize the default ingress network

Most users never need to configure the ingress network, but Docker 17.05 and higher allow you to do so. This can be useful if the automatically chosen subnet conflicts with a subnet that already exists on your network, or if you need to customize other low-level network settings such as the MTU.

Customizing the ingress network involves removing and re-creating it, so it should be done before you create any services in the swarm. If there are existing services which publish ports, they must be removed before you can remove the ingress network.

During the time that no ingress network exists, existing services which do not publish ports continue to function, but are not load-balanced. Services which publish ports, such as a WordPress service which publishes port 80, are affected.

1. Inspect the ingress network with docker network inspect ingress, and remove any services whose containers are connected to it. These are services that publish ports, such as a WordPress service publishing port 80. If all such services are not stopped, the next step fails.
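A sketch of this step, assuming a port-publishing service named my-wordpress is attached to the ingress network (the service name is illustrative):

```shell
# List the containers and services attached to the ingress network
docker network inspect ingress

# Remove each service that publishes ports
docker service rm my-wordpress
```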

2. Remove the existing ingress network:

docker network rm ingress

WARNING! Before removing the routing-mesh network, make sure all the nodes in your swarm run the same docker engine version. Otherwise, removal may not be effective and functionality of newly created ingress networks will be impaired.

Are you sure you want to continue? [y/N]

3. Create a new overlay network using the --ingress flag, along with the custom options you want to set. The following example sets the MTU to 1200, the subnet to 10.11.0.0/16, and the gateway to 10.11.0.2.

docker network create \
 --driver overlay \
 --ingress \
 --subnet=10.11.0.0/16 \
 --gateway=10.11.0.2 \
 --opt com.docker.network.mtu=1200 \
 my-ingress

Note: You can name your ingress network something other than ingress, but you can only have one. An attempt to create a second one fails.

4. Restart the services you stopped in the first step.

Customize the docker_gwbridge interface

The docker_gwbridge is a virtual bridge that connects the overlay networks (including the ingress network) to an individual Docker daemon's physical network. Docker creates it automatically when you initialize a swarm or join a Docker host to a swarm, but it is not a Docker device; it exists in the kernel of the Docker host. If you need to customize its settings, you must do so before joining the Docker host to the swarm, or after temporarily removing the host from the swarm.

1. Stop Docker.

2. Delete the existing docker_gwbridge interface:

sudo ip link set docker_gwbridge down
sudo ip link del name docker_gwbridge

3. Start Docker, but do not join or initialize the swarm yet.

4. Create or re-create the docker_gwbridge bridge manually with the docker network create command, applying your custom settings. The following example uses the subnet 10.11.0.0/16.

docker network create \
 --subnet 10.11.0.0/16 \
 --opt com.docker.network.bridge.name=docker_gwbridge \
 --opt com.docker.network.bridge.enable_icc=false \
 --opt com.docker.network.bridge.enable_ip_masquerade=true \
 docker_gwbridge

5. Initialize or join the swarm. Since the bridge already exists, Docker does not create it with automatic default settings.

Operations for swarm services

Publish ports on an overlay network

Swarm services connected to the same overlay network effectively expose all ports to each other. For a port to be accessible from outside the service, it must be published with the -p or --publish flag on docker service create or docker service update.

Both the legacy colon-separated syntax and the newer comma-separated value syntax are supported. The longer syntax is preferred because it is somewhat self-documenting.

Flag value  Description
-p 8080:80 or -p published=8080,target=80  Map TCP port 80 on the service to port 8080 on the routing mesh.
-p 8080:80/udp or -p published=8080,target=80,protocol=udp  Map UDP port 80 on the service to port 8080 on the routing mesh.
-p 8080:80/tcp -p 8080:80/udp or -p published=8080,target=80,protocol=tcp -p published=8080,target=80,protocol=udp  Map TCP port 80 on the service to TCP port 8080 on the routing mesh, and map UDP port 80 on the service to UDP port 8080 on the routing mesh.
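For illustration, publishing a service's port 80 on port 8080 with the long syntax might look like this (the service name my-web and the nginx image are illustrative choices, not part of the original text):

```shell
docker service create \
  --name my-web \
  --replicas 2 \
  --publish published=8080,target=80 \
  nginx
```

After this, connecting to port 8080 on any swarm node reaches the service through the routing mesh.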

Bypass the routing mesh for a swarm service

By default, swarm services which publish ports do so using the routing mesh. When you connect to a published port on any swarm node (whether or not it is running a given service), you are redirected transparently to a node which is running that service. Effectively, Docker acts as a load balancer for your swarm services. Services using the routing mesh run in virtual IP (VIP) mode. Even a service running on every node (by means of the --mode global flag) uses the routing mesh. When using the routing mesh, there is no guarantee about which node handles a client request.

To bypass the routing mesh, you can start a service in DNS Round Robin (DNSRR) mode by setting the --endpoint-mode flag to dnsrr. You must run your own load balancer in front of the service. A DNS query for the service name on the Docker host returns the list of IP addresses of the nodes running the service. Configure your load balancer to consume this list and balance traffic across the nodes.
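A minimal sketch of creating a service in DNSRR mode (service name and image are illustrative):

```shell
# DNSRR mode bypasses the routing mesh; an external load balancer
# must route traffic to the task IP addresses directly.
docker service create \
  --name my-dnsrr-service \
  --endpoint-mode dnsrr \
  --replicas 3 \
  nginx
```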

Separate control traffic and data traffic

By default, control traffic relating to swarm management and traffic to and from your applications run over the same network, though the swarm control traffic is encrypted. You can configure Docker to use separate network interfaces for these two different kinds of traffic. When you initialize or join the swarm, specify --advertise-addr and --datapath-addr separately. You must do this for each node joining the swarm.
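For example, a sketch of initializing a swarm with control traffic on one interface and overlay data traffic on another (both addresses are placeholders for the node's actual interface addresses):

```shell
# Control-plane (swarm management) traffic on 192.168.1.10,
# overlay data-plane traffic on 10.20.0.10
docker swarm init \
  --advertise-addr 192.168.1.10 \
  --datapath-addr 10.20.0.10
```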

Operations for standalone containers on overlay networks

Attach a standalone container to an overlay network

The ingress network is created without the --attachable flag, which means that only swarm services can use it, not standalone containers. You can, however, attach standalone containers to user-defined overlay networks created with the --attachable flag. This gives standalone containers running on different Docker daemons the ability to communicate without the need to set up routing on the individual Docker daemon hosts.
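A sketch of attaching a standalone container, assuming the my-attachable-overlay network created earlier in this topic (the alpine image is an illustrative choice):

```shell
# On any swarm node, run a standalone container attached to the
# attachable overlay network
docker run -it --rm \
  --network my-attachable-overlay \
  alpine sh
```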

Publish ports

Flag value  Description
-p 8080:80  Map TCP port 80 in the container to port 8080 on the Docker host.
-p 8080:80/udp  Map UDP port 80 in the container to port 8080 on the Docker host.
-p 8080:80/tcp -p 8080:80/udp  Map TCP port 80 in the container to TCP port 8080 on the Docker host, and map UDP port 80 in the container to UDP port 8080 on the Docker host.


Origin blog.csdn.net/baidu_38956956/article/details/128318514