Docker networking in detail, with hands-on practice

1. Docker default network modes

View all Docker networks with the following command:

docker network ls

insert image description here

Docker provides four network modes by default:

  1. bridge: The default network mode for containers (networks you create yourself also use the bridge driver by default, and containers start in bridge mode unless told otherwise). This mode assigns each container its own IP, connects the container to a virtual bridge named docker0, and communicates with the host through the docker0 bridge and iptables NAT rules.

  2. none: No network is configured. The container has an independent Network Namespace, but no network setup is performed in it: no veth pair or bridge connection, no IP configuration, and so on.

  3. host: The container shares the Network Namespace with the host. The container does not virtualize its own network card or configure its own IP; it uses the host's IP and ports directly.

  4. container: The newly created container does not create its own network card or configure its own IP; instead it shares a Network Namespace (same IP, same port range) with another, specified container.

Containers use the bridge network mode by default. The --net (or --network) option of docker run specifies the network a container uses:

  • host mode: specified with --net=host.
  • none mode: specified with --net=none.
  • bridge mode: specified with --net=bridge, the default setting.
  • container mode: specified with --net=container:NAME_or_ID.
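As a quick comparison, the bridge and host modes can be observed directly from the interfaces a container sees. A minimal sketch, assuming the alpine image is available and a Docker daemon is running:

```shell
# bridge (the default): the container sees lo plus its own eth0
docker run --rm alpine ip addr
# host: the container sees all of the host's interfaces
docker run --rm --net=host alpine ip addr
```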

1.1 host mode

A brief note on Namespaces: Docker uses Linux Namespaces to isolate resources. The PID Namespace isolates processes, the Mount Namespace isolates the file system, the Network Namespace isolates the network, and so on.

A Network Namespace provides an independent network environment, including network cards, routes, iptables rules, etc., isolated from other Network Namespaces. A Docker container is normally allocated its own Network Namespace.

If the host mode is used when starting the container, the container will not obtain an independent Network Namespace, but will share a Network Namespace with the host. The container will not virtualize its own network card, configure its own IP, etc., but use the host's IP and port. However, other aspects of the container, such as the file system, process list, etc., are still isolated from the host.

Containers using host mode can communicate with the outside world using the host's IP address directly, and services inside the container can use the host's ports without NAT. The biggest advantage of host mode is better network performance, but any port already taken on the Docker host can no longer be used, and network isolation is poor. The model diagram of host mode is shown in the following figure:
insert image description here
Note: eth0 is the host's network interface, and 10.126.130.4 is the host's internal IP address.

1.2 container mode

This mode makes a newly created container share a Network Namespace with an existing container, rather than with the host. The new container does not create its own network card or configure its own IP; it shares the IP, port range, and so on with the specified container. Apart from the network, the two containers remain isolated in other respects such as file systems and process lists. Processes in the two containers can communicate through the lo loopback device. The schematic diagram of container mode is as follows:

insert image description here
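Container mode can be tried out with two throwaway containers; the second should print exactly the same interfaces and IP as the first (a sketch, assuming the alpine image):

```shell
docker run -d --name peer alpine sleep 300          # first container, on the default bridge
docker exec peer ip addr                            # note its eth0 address
docker run --rm --net=container:peer alpine ip addr # shares peer's Network Namespace
docker rm -f peer                                   # clean up
```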

1.3 none mode

With none mode, the Docker container has its own Network Namespace, but no network configuration is done for it. In other words, the container has no network card, IP, routing, or similar information; we must add network cards and configure IPs ourselves.

In this mode the container has only the lo loopback interface and no other network card. None mode is specified with --network=none when the container is created. Such a container cannot reach the outside network; this closed network can improve container security.
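A quick check that none mode really leaves only the loopback interface (sketch, assuming the alpine image):

```shell
# only "lo" should be listed; there is no eth0 and no IP to reach the outside
docker run --rm --net=none alpine ip addr
```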

Schematic diagram of None mode:

insert image description here

1.4 bridge mode

When the Docker process starts, a virtual bridge named docker0 is created on the host, and the Docker container started on this host is connected to this virtual bridge. A virtual bridge works like a physical switch, so that all containers on the host are connected to a layer 2 network through the switch.

Docker allocates an IP from the docker0 subnet for the container and sets docker0's IP address as the container's default gateway. It creates a pair of virtual network devices, a veth pair, on the host: one end is placed inside the newly created container and named eth0 (the container's network card), while the other end stays on the host with a name like vethxxx and is added to the docker0 bridge. This can be viewed with the brctl show command.

Bridge mode is Docker's default network mode; if you do not pass the --net parameter, bridge mode is used. When you use docker run -p, Docker actually creates DNAT rules in iptables to implement port forwarding; they can be viewed with iptables -t nat -vnL. Bridge mode is shown in the following figure:

insert image description here
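The DNAT rule behind -p can be observed like this (a sketch assuming the nginx image; the DOCKER chain is where Docker installs its NAT rules):

```shell
docker run -d -p 8080:80 --name web nginx
sudo iptables -t nat -vnL DOCKER   # shows a DNAT rule mapping dpt:8080 to 172.17.x.x:80
docker rm -f web                   # clean up
```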

When the Docker server starts, a virtual bridge named docker0 is created on the host, and containers started on this host connect to it. docker0 relies on the veth pair technique. In the default bridge mode, every time a container starts, Docker configures an IP for it.

The Docker container completes the bridge network configuration process as follows:

  1. Create a pair of virtual NIC veth pair devices on the host. Veth devices always come in pairs, they form a data channel, and data enters from one device and comes out from the other device. Therefore, veth devices are often used to connect two network devices.
  2. Docker places one end of the veth pair device in the newly created container and names it eth0. The other end is placed in the host, named something like veth65f9, and this network device is added to the docker0 bridge.
  3. Allocate an IP from the docker0 subnet for the container to use, and set the IP address of docker0 as the container's default gateway.
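The pairing in step 2 can be verified by matching interface indices: the iflink of the container's eth0 equals the ifindex of its host-side veth peer (a sketch, assuming the alpine image):

```shell
docker run -d --name demo alpine sleep 300
IDX=$(docker exec demo cat /sys/class/net/eth0/iflink)  # ifindex of the host-side peer
ip -o link | awk -v i="${IDX}:" '$1 == i'               # prints the matching vethxxx device
docker rm -f demo
```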

Executing the ip addr command shows that, after installing Docker, the host has three addresses by default.

insert image description here

How does Docker handle container network access? Start a container, then run ip addr inside it to view its network settings:

docker run -d -P --name tomcat01 tomcat:8.0

insert image description here

You can see that after the container starts, Docker assigns it an IP on an interface named like eth0@if262. Running the ip addr command on the host shows the change:
insert image description here

You can see that a new network card is created both inside the container and on the Linux host, and the two cards are paired. The technique used is the veth pair: a pair of virtual device interfaces that always appear in pairs, forming a channel where data entering one end comes out the other. The veth pair acts as a bridge connecting the various virtual network devices.

==The Linux host can reach Docker containers over the network directly, and containers can reach each other directly.== The following experiments verify this.

docker exec -it tomcat01 ip addr

insert image description here

Next, start another Tomcat container and check whether the network connection between the containers succeeds.

insert image description here
insert image description here

Ping the tomcat02 container from inside the tomcat01 container; the two containers can reach each other. The network interaction model of the two Tomcat containers looks like this:

insert image description here
Explanation:

Both tomcat01 and tomcat02 use docker0 as their common router. When no network is specified, all containers are routed through docker0, and Docker assigns each container an available IP address by default.

The general model of container network interconnection, as shown in the following figure:

insert image description here

All network interfaces in Docker are virtual. When a container is deleted, its corresponding virtual interface pair is deleted as well.

2. Container interconnection

In a microservice deployment scenario, the registry identifies each microservice by its service name, and the IP address behind a microservice may change across deployments. We therefore want to configure network connections between containers by container name. This can be done with --link.

First, without any link configured, containers cannot reach each other by container name.
insert image description here

Now start a Tomcat container, tomcat03, and use --link to connect it to the already running tomcat02 container. Container tomcat03 can then reach container tomcat02 by the container name tomcat02.
insert image description here
But the reverse does not hold: container tomcat02 cannot ping container tomcat03 by the container name tomcat03.
insert image description here
This is because the principle behind --link is to add a mapping between the container name and its IP address to the /etc/hosts file of the container being started, as follows:

insert image description here
The tomcat02 container cannot reach tomcat03 by name because no mapping from the name tomcat03 to its IP address was added to tomcat02.

insert image description here
The --link way of interconnecting containers is now deprecated. Since docker0 does not support access by container name, custom networks are generally preferred instead.

3. Custom Network

Because of docker0's limitations, containers are not reachable by name by default, and setting up connections with --link is cumbersome. The recommended approach is a custom network: all containers join the custom network and can then reach each other by container name.

See below for network related commands

docker network --help

insert image description here
View the details of the default network bridge
insert image description here

View the relevant parameters of the network create command
insert image description here

Next, create a custom network:

docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet

insert image description here
Parameter Description:

 --driver bridge   # use the bridge driver to manage the network
--subnet 192.168.0.0/16 # the subnet for the network, in CIDR format
--gateway 192.168.0.1 	# the IPv4 or IPv6 gateway for the master subnet

After the network mynet is successfully created, check the network information:

docker network inspect mynet

insert image description here

Next, start two containers, specify the custom network mynet, and test whether the container under the custom network can directly access the network through the container name.

docker run -d -P --name tomcat-net-01 --net mynet tomcat 
docker run -d -P --name tomcat-net-02 --net mynet tomcat 
docker network inspect mynet

insert image description here

Let's use the container name to test whether the network communication between the container tomcat-net-01 and the container tomcat-net-02 is normal.
insert image description here

Under our custom network, containers can communicate both by container name and by IP address. A custom network maintains container name resolution for us by default, and this is the recommended way to achieve network interconnection between containers.
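Name resolution on a user-defined network is provided by Docker's embedded DNS server, which listens inside each container at 127.0.0.11. This can be confirmed with the containers above (sketch):

```shell
docker exec tomcat-net-01 cat /etc/resolv.conf    # nameserver 127.0.0.11
docker exec tomcat-net-01 ping -c 2 tomcat-net-02 # resolved via the embedded DNS
```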

4. Interconnection between Docker networks

Without extra configuration, containers on different networks cannot reach each other. The figure shows the network model of two different networks, docker0 and the custom network mynet:

insert image description here

Start the container tomcat-01 under the default network bridge, and try to connect to the tomcat-net-01 container under the mynet network.

insert image description here
As you can see, there is no connectivity. If containers on different Docker networks need to communicate, the calling container must be registered with an IP on the network where the callee lives. This is what the docker network connect command does.

The following connects the container tomcat-01 to the mynet network. Checking the network details of mynet afterwards, you can see that an IP address has been assigned to tomcat-01.

docker network connect mynet tomcat-01

insert image description here

After the setup is complete, we can implement container interconnection between different networks.

insert image description here
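The connection can be undone with the symmetric disconnect command (sketch):

```shell
docker network disconnect mynet tomcat-01
docker network inspect mynet   # tomcat-01 no longer appears under "Containers"
```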

5. Docker network practical exercise

5.1 Redis cluster deployment

The following deploys a Redis cluster with three masters and three slaves, as shown in the figure:
insert image description here

First remove all existing containers:

docker rm -f $(docker ps -aq)

insert image description here

Create a custom network named redis:

docker network create redis --subnet 172.38.0.0/16

insert image description here
Create six Redis configuration files with the following script:

for port in $(seq 1 6); \
do \
mkdir -p /mydata/redis/node-${port}/conf
touch /mydata/redis/node-${port}/conf/redis.conf
cat << EOF >/mydata/redis/node-${port}/conf/redis.conf
port 6379 
bind 0.0.0.0
cluster-enabled yes 
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.38.0.1${port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
done

insert image description here
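The same loop can be dry-run without root by writing under a scratch directory (here ./redis-demo, a hypothetical path standing in for /mydata/redis) to confirm what each node's config looks like:

```shell
base=./redis-demo
for port in $(seq 1 6); do
  mkdir -p ${base}/node-${port}/conf
  cat > ${base}/node-${port}/conf/redis.conf <<EOF
port 6379
bind 0.0.0.0
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.38.0.1${port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
done
# each node announces a distinct IP on the 172.38.0.0/16 network:
grep cluster-announce-ip ${base}/node-3/conf/redis.conf   # cluster-announce-ip 172.38.0.13
```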
Next, start six Redis containers and set the corresponding container data volume mounts.

# 1st Redis container
docker run -p 6371:6379 -p 16371:16379 --name redis-1 \
    -v /mydata/redis/node-1/data:/data \
    -v /mydata/redis/node-1/conf/redis.conf:/etc/redis/redis.conf \
    -d --net redis --ip 172.38.0.11 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
# 2nd Redis container
docker run -p 6372:6379 -p 16372:16379 --name redis-2 \
    -v /mydata/redis/node-2/data:/data \
    -v /mydata/redis/node-2/conf/redis.conf:/etc/redis/redis.conf \
    -d --net redis --ip 172.38.0.12 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
# 3rd Redis container
docker run -p 6373:6379 -p 16373:16379 --name redis-3 \
    -v /mydata/redis/node-3/data:/data \
    -v /mydata/redis/node-3/conf/redis.conf:/etc/redis/redis.conf \
    -d --net redis --ip 172.38.0.13 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
# 4th Redis container
docker run -p 6374:6379 -p 16374:16379 --name redis-4 \
    -v /mydata/redis/node-4/data:/data \
    -v /mydata/redis/node-4/conf/redis.conf:/etc/redis/redis.conf \
    -d --net redis --ip 172.38.0.14 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
# 5th Redis container
docker run -p 6375:6379 -p 16375:16379 --name redis-5 \
    -v /mydata/redis/node-5/data:/data \
    -v /mydata/redis/node-5/conf/redis.conf:/etc/redis/redis.conf \
    -d --net redis --ip 172.38.0.15 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
# 6th Redis container
docker run -p 6376:6379 -p 16376:16379 --name redis-6 \
    -v /mydata/redis/node-6/data:/data \
    -v /mydata/redis/node-6/conf/redis.conf:/etc/redis/redis.conf \
    -d --net redis --ip 172.38.0.16 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

Or start 6 Redis containers at once via script:

for port in $(seq 1 6); \
do
docker run -p 637${port}:6379 -p 1637${port}:16379 --name redis-${port} \
-v /mydata/redis/node-${port}/data:/data \
-v /mydata/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.1${port} redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf; \
done
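Before running the loop for real, its expansion can be previewed; this prints the name, host ports, and IP each iteration would use (pure shell, no Docker needed):

```shell
for port in $(seq 1 6); do
  echo "redis-${port}: host ports 637${port} and 1637${port}, ip 172.38.0.1${port}"
done
```

This makes it easy to check that, for example, redis-4 maps host port 6374 to container port 6379 and gets IP 172.38.0.14.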

Execute the above script; the results are as follows:
insert image description here
Next, enter the redis-1 container to create the cluster:

redis-cli --cluster create 172.38.0.11:6379 172.38.0.12:6379 172.38.0.13:6379 172.38.0.14:6379 172.38.0.15:6379 172.38.0.16:6379 --cluster-replicas 1

View cluster information

redis-cli -c
cluster info

insert image description here
View the node information with cluster nodes; the master-slave relationships of the Redis nodes are clearly visible.
insert image description here

Test whether master-slave replication takes effect: set a key. You can see the request is redirected to the redis-3 node, which handles the operation.
insert image description here

Open a new session and stop the redis-3 container, then reconnect with the redis-cli client and get k1 again; the request is redirected to redis-4, the slave of the redis-3 node.

insert image description here

5.2 Packaging a Spring Boot project as an image

Packaging a Spring Boot project as a Docker image takes the following five steps.
(1) Build the Spring Boot project
Build the simplest possible Spring Boot project in IDEA, write an interface, and start the project to check that it can be called normally.


/**
 * @author ethan
 * @since 2020-04-22 20:26:31
 */
@RestController
public class HelloController {

    @GetMapping("/hello")
    public String hello() {
        return "Hello World!";
    }
}

(2) Package the application
Package the Spring Boot project with Maven's package goal to generate a jar:
insert image description here

(3) Write Dockerfile
Write Dockerfile, the content is as follows

FROM java:8
COPY *.jar /app.jar
CMD ["--server.port=8080"]
EXPOSE 8080
ENTRYPOINT ["java","-jar","/app.jar"]

(4) Build the image
Send the packaged jar and the Dockerfile to the server.
insert image description here
Then use the build command to build the image:

 docker build -t demo .

insert image description here

(5) Release and run
After the image is built, run it and test whether the /hello interface can be accessed normally:

 docker run -d -p 8080:8080 demo
 curl localhost:8080/hello

insert image description here

References:
1. https://www.bilibili.com/video/BV1og4y1q7M4
2. Docker network explained: principles
3. Docker's four network modes

Origin blog.csdn.net/huangjhai/article/details/120425457