In-depth understanding of Docker networking: multi-host communication and hands-on Docker Swarm

Networking is the soul of cloud native. This article takes a deep look at how Docker containers communicate across multiple machines

Table of contents

Introduction

1. How the Docker overlay network works

(1) Internal architecture of the Docker overlay network

(2) Docker overlay network transmission process

2. Introduction to Docker Swarm

(1) Architecture diagram

(2) Core concepts

Node

Service

Task

Node, Service and Task relationship diagram

3. Docker Swarm in practice

(1) Environment preparation

(2) Build a Swarm cluster

(3) Practical example

Without Docker Stack

Using Docker Stack

References


Introduction

The previous article introduced how Docker handles single-host communication: link

To ensure high availability, it is common to deploy containers across multiple machines. This article continues with the principles of Docker multi-host communication and a hands-on walkthrough of Docker Swarm

There are several ways to implement Docker multi-host communication, such as direct routing, bridging, and overlay tunneling. The current version of Docker (20.10.21) offers multiple solutions, including Macvlan networks and overlay networks. The overlay network is by far the most mainstream solution for cross-node container data transmission and routing

Since version 1.12, Docker has officially shipped the Docker Swarm container orchestration tool, built into the Docker Engine. Docker Swarm is built on top of the Docker overlay network, so this article focuses on how the overlay network is implemented

1. How the Docker overlay network works

The Docker overlay network is a distributed network based on VXLAN, and it is what enables Docker multi-host communication

VXLAN applies tunneling technology on top of the underlying physical network (the underlay). The logical overlay network is carried over UDP, so it has little impact on the existing network architecture: a new virtual Layer 2 network can be erected without changing the original network at all

The Linux kernel has supported VXLAN since version 3.7, and Docker uses this native kernel feature to implement the overlay network
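To see the kernel feature Docker builds on, a VXLAN interface can be created by hand with iproute2. This is only an illustrative sketch (the device name, VNI, multicast group, and underlay interface eth0 are all made up for the example); Docker creates and manages the equivalent devices automatically

# Create a VXLAN device with VNI 42 on top of eth0, using the standard VXLAN UDP port 4789
ip link add vxlan42 type vxlan id 42 group 239.1.1.1 dev eth0 dstport 4789
# Give it an address in the overlay subnet and bring it up
ip addr add 10.0.0.1/24 dev vxlan42
ip link set vxlan42 up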

(1) Internal architecture of the Docker overlay network

The two machines can communicate with each other through the physical network, as shown in the following figure

On this basis, VXLAN tunneling is used to create a virtual Layer 2 overlay network. Each end of a VXLAN tunnel is a VXLAN Tunnel Endpoint (VTEP), whose main job is encapsulating and decapsulating VXLAN packets

Usually the host needs a default bridge so that the overlay network can reach the physical network. After Docker Swarm is initialized, the engine automatically creates a bridge-type network named docker_gwbridge; each host can have at most one docker_gwbridge, as shown in the figure below

When an overlay network is created, the Docker engine creates the required network infrastructure on each host: every overlay network gets a Linux bridge associated with a VXLAN interface, and the location and state of the VTEPs are stored in the distributed overlay control plane. However, the engine only instantiates the overlay network on a host once a container on that host actually connects to it, which avoids creating useless network instances on hosts that run no relevant containers

Once a container is attached and the network instantiated, the Docker overlay driver assigns each container a cluster-wide unique IP address, such as eth1: 10.0.0.3 and eth1: 10.0.0.6

During communication, the Docker overlay driver wraps the original container-to-container packet, which carries the information needed for delivery, including the source and destination container IPs, in a VXLAN header

(2) Docker overlay network transmission process

As shown in the figure, suppose two containers on two CentOS hosts are attached to the ov-net network: the web-app container on CentOS-A wants to send a Redis command to the redis-demo service on CentOS-B to query the site's current visit count

The transmission process is as follows

  • web-app performs a DNS lookup for redis-demo. Because the two containers are on the same overlay network, the embedded DNS server in the Docker engine on CentOS-A resolves redis-demo to its overlay IP address, 10.0.0.6
  • web-app generates an L2 packet addressed to redis-demo's MAC address
  • The overlay driver encapsulates the packet with a VXLAN header and wraps it in an underlay IP header carrying the physical IP addresses of the current host and the target host
  • Once encapsulated, the packet is sent out, and the physical network routes or bridges it to the correct host
  • The packet arrives at the eth0 interface of CentOS-B, where the overlay network driver decapsulates it, recovers the original L2 packet, and delivers it to redis-demo's eth1 interface and from there to the service inside the container
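Step 1 and the overall connectivity can be observed directly. A minimal check, assuming containers named web-app and redis-demo as in the figure and an image that ships the ping utility

# On CentOS-A: overlay DNS resolves the container name, and ICMP travels through the VXLAN tunnel
docker exec web-app ping -c 2 redis-demo

# Inspect the overlay network to see the attached containers and their overlay IPs
docker network inspect ov-net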

2. Introduction to Docker Swarm

(1) Architecture diagram

Docker Swarm is the container orchestration tool officially provided by Docker; it makes it easy to build cluster services across multiple servers or hosts. Under the hood, Docker Swarm uses the Docker overlay network described above, and its cluster-management and orchestration features come from swarmkit, another independent Docker project

A swarm cluster consists of one or more manager nodes and any number of worker nodes. The cluster architecture diagram is as follows

(2) Core concepts

Node

A Node is an instance of the Docker engine participating in a Swarm cluster; one or more Nodes can be deployed on a single physical machine or cloud server. By role, Nodes are divided into manager nodes and worker nodes

The responsibilities of a manager node are as follows

  • Cluster management: maintaining the Swarm state and electing a Leader node via the Raft protocol to perform task scheduling
  • By default, it can also act as a worker node

An agent runs on each worker node; it receives and executes the tasks assigned by the manager nodes and reports task execution status back to them
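Roles are not fixed: a node can be promoted, demoted, or drained at any time. A few common node-management commands, using the worker-01 hostname from the cluster built later in this article (all run on a manager)

docker node ls                                      # list nodes and their roles
docker node promote worker-01                       # turn a worker into a manager
docker node demote worker-01                        # turn it back into a worker
docker node update --availability drain worker-01   # stop scheduling new tasks on it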

Service

A Service is the definition of a task (Task) and the core structure of the Swarm system. When creating a service, you specify the container image and the command to be executed inside the container

Services come in two modes: replicated and global. For a replicated service, the manager nodes schedule the configured number of task replicas onto nodes; for a global service, every node runs exactly one task
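The two modes map directly onto docker service create flags. A sketch using nginx as a stand-in image

# Replicated service: the scheduler places exactly 3 tasks somewhere in the cluster
docker service create --name web --replicas 3 nginx

# Global service: exactly one task per node (typical for agents and log collectors)
docker service create --name node-agent --mode global nginx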

Task

A Task is the smallest scheduling unit in Swarm; it consists of a container and the command to be executed inside that container

Note that a Task's life cycle is one-directional: its state can go from NEW to RUNNING to COMPLETE, but never backwards. In other words, once a task has ended, running the work again means creating and executing a brand-new task
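The per-task states are visible with docker service ps. After a failure, the old task stays in a terminal state and a new task with a fresh ID replaces it; for the hypothetical web service from the sketch above

# Show every task of the service, including failed ones, with untruncated error messages
docker service ps --no-trunc web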

Node, Service and Task relationship diagram

A Service is like a construction blueprint: Swarm assigns tasks to different Nodes according to the details of that blueprint

You can refer to the figure below for understanding 

3. Docker Swarm in practice

With the concepts above in place, the next step is to put Docker Swarm to work

(1) Environment preparation

First, prepare 3 cloud servers. Their IP addresses and planned node roles are listed below; the goal is a Swarm cluster with 1 manager and 2 workers

121.37.169.103 # Manager node; set hostname to manager
1.116.156.102  # Worker node; set hostname to worker-01
139.196.219.92 # Worker node; set hostname to worker-02

Configure security groups for the 3 cloud servers and add ports 2377, 6379, 8080, and 80 to the inbound rules. Note that Swarm overlay networking additionally needs port 7946 (TCP/UDP, node discovery) and port 4789 (UDP, VXLAN traffic) open between the nodes; a firewall sketch follows below
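If firewalld is enabled on the hosts (common on CentOS), the ports can be opened roughly like this; adjust to your own environment

firewall-cmd --permanent --add-port=2377/tcp                        # cluster management
firewall-cmd --permanent --add-port=7946/tcp --add-port=7946/udp    # node discovery
firewall-cmd --permanent --add-port=4789/udp                        # VXLAN overlay traffic
firewall-cmd --permanent --add-port=80/tcp --add-port=8080/tcp --add-port=6379/tcp
firewall-cmd --reload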

(2) Build a Swarm cluster

Execute the following command on the manager node

# Syntax: docker swarm init --advertise-addr <MANAGER-IP>
docker swarm init --advertise-addr 121.37.169.103

Execute the following command on the two worker hosts. The token is generated when the manager node is initialized; use exactly the value printed by docker swarm init

docker swarm join --token SWMTKN-1-3w41cnhrqfu8tddsmfwzr05wgt7yupyfmwkavck6cdftfv47r9-19tgpf75tb3hqtaob5wz8dmtu 121.37.169.103:2377

# The output is as follows
This node joined a swarm as a worker.
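If the join command is misplaced, the manager can reprint it at any time

# Run on the manager: prints the full "docker swarm join" command for workers
docker swarm join-token worker
# The equivalent for adding more managers
docker swarm join-token manager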

Log in to the manager node and check the cluster status (this command can only be executed on a manager node)

docker node ls

The output information is as follows

ID                            HOSTNAME    STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
622h0w3knqdrvifyvb2thkkr6 *   manager     Ready     Active         Leader           20.10.21
gztfkjkrzrqljrhqtnpgq4b31     worker-01   Ready     Active                          20.10.21
k3r6xodo2cb5tdzks7h48zabm     worker-02   Ready     Active                          20.10.21

Now check the Docker networks: all three hosts have docker_gwbridge and ingress, and the ingress network ID is pupcl8y1k6kk on all three, which confirms that Docker Swarm uses the Docker overlay network for multi-host communication under the hood

docker network ls

(3) Practical example

As shown in the figure, prepare a Spring Boot project that uses Redis to cache the application's visit count. The plan is to create 3 instances of the application and load-balance them with nginx

The Spring Boot project docker-demo provides two REST APIs:

host:8080/hello - increments the visit count

host:8080/visitCount - returns the current total visit count

Project address: [email protected]:jason315/docker-demo.git

Without Docker Stack

Step 1 - Create a custom overlay network

On the manager node, execute the following command

docker network create -d overlay ov-net
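By default, a swarm-scoped overlay network can only be used by swarm services. If you also want to attach standalone containers (started with docker run) to it for debugging, add the --attachable flag at creation time; a variant of the command above

docker network create -d overlay --attachable ov-net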

Step 2 - Create Redis service

On the manager node, execute the following command

docker service create --network ov-net --name redis-demo -p 6379:6379 redis

# After the previous command succeeds, check which node Redis is running on
docker service ps redis-demo

It turns out Redis is running on the worker-02 node, whose IP address is 139.196.219.92. (Since the port is published through Swarm's ingress routing mesh, 6379 is in fact reachable via any node's IP, but the configuration below points at worker-02 directly.)

Modify the application.yml file of the docker-demo project

server:
  port: 8080
spring:
  application:
    name: docker-demo
  redis:
    host: 139.196.219.92
    port: 6379

Step 3 - Build the docker-demo project into an image

# From the docker-demo project root directory
mvn clean package

A jar package will be generated in the target directory: docker-demo-0.0.1-SNAPSHOT.jar

Upload the jar package to the same directory on all 3 hosts, assumed here to be /var/local/dockerdemo

Enter the /var/local/dockerdemo directory and create a Dockerfile

FROM openjdk:8

LABEL name="docker-demo" version="1.0" maintainer="jason315"

COPY docker-demo-0.0.1-SNAPSHOT.jar docker-demo-image.jar
CMD ["java", "-jar", "docker-demo-image.jar"]

Build the jar package into a Docker image on all three hosts

docker build -t docker-demo:1.0 .
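Building the image by hand on every node works for three hosts but does not scale. The usual alternative is to push the image to a registry so that every node can pull it; a sketch with a placeholder registry address

# registry.example.com stands in for your own registry
docker tag docker-demo:1.0 registry.example.com/docker-demo:1.0
docker push registry.example.com/docker-demo:1.0

# --with-registry-auth forwards registry credentials to the agents on the worker nodes
docker service create --with-registry-auth --name docker-demo \
    --network ov-net --replicas 3 registry.example.com/docker-demo:1.0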

Step 4 - Create docker-demo service

docker service create --name docker-demo \
    --network ov-net \
    --replicas 3 \
    docker-demo:1.0
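Once the service is up, the replica count can be changed on the fly and Swarm will start or stop tasks to match

docker service ls                     # confirm 3/3 replicas are running
docker service scale docker-demo=5   # scale out to 5 replicas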

Step 5 - Prepare the nginx configuration

Create an nginx.conf file in the /var/local/dockerdemo directory on each of the three hosts

user nginx;
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout  65;
    server {
        listen 80;
        location / {
            proxy_pass http://balance;
        }
    }
    upstream balance {
        server docker-demo:8080;
    }
    include /etc/nginx/conf.d/*.conf;
}

Here the server name in the upstream block is set to docker-demo, the service created in step 4. Because nginx and docker-demo are on the same overlay network, the service name is resolved automatically by Swarm's built-in DNS (to the service's virtual IP, which load-balances across the three replicas)

Step 6 - Create nginx service

docker service create --name nginx-demo \
    --mount type=bind,src=/var/local/dockerdemo/nginx.conf,dst=/etc/nginx/nginx.conf \
    --network ov-net \
    -p 80:80 \
    nginx

Check which node the nginx service is running on; this time it lands on the manager node, whose IP address is 121.37.169.103
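For reference, the placement check is the same command used for Redis earlier

docker service ps nginx-demo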

Step 7 - Test

Hit the /hello endpoint a number of times, then call the /visitCount endpoint to verify the total; a quick smoke test is sketched below. The redis-demo service is deliberately kept, because it will be reused in the next section
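A minimal smoke test from any machine; the IP below is the manager node from this setup, but thanks to the routing mesh any node's address works

# Generate a couple of visits, then read the counter back
curl http://121.37.169.103/hello
curl http://121.37.169.103/hello
curl http://121.37.169.103/visitCount

Once the counter looks right, stop the other services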

# Run the following commands on the manager node
docker service rm nginx-demo
docker service rm docker-demo

Using Docker Stack

Docker provides a mechanism similar to Docker Compose: a configuration file describes which services the cluster should run, and Docker Stack deploys them

Similarly, create a service.yml file in the /var/local/dockerdemo directory of the management node

version: '3'

services:
  docker-demo:
    image: docker-demo:1.0
    networks:
      - ov-net2
    deploy:
      mode: replicated
      replicas: 3
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
      update_config:
        parallelism: 1
        delay: 10s
  nginx-demo:
    image: nginx
    ports:
      - "80:80"
    networks:
      - ov-net2
    volumes:
      - /var/local/dockerdemo/nginx.conf:/etc/nginx/nginx.conf
    deploy:
      mode: replicated
      replicas: 1

networks:
  ov-net2:
    driver: overlay

When deploying services with Stack, you must specify a stack name (here assumed to be stack-demo)

Each created service is prefixed with the stack name, e.g. stack-demo_docker-demo, so the upstream block of nginx.conf needs to be adjusted accordingly

upstream balance {
    server stack-demo_docker-demo:8080;
}

Next, create a service based on service.yml

docker stack deploy -c service.yml stack-demo
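The deployment can also be viewed and managed at the stack level

docker stack services stack-demo   # the services belonging to this stack
docker stack ps stack-demo         # every task in the stack
# docker stack rm stack-demo       # tears the whole stack down when you are done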

After the deployment completes, the usual docker service commands still work for inspection

# List the services
docker service ls
ID             NAME                     MODE         REPLICAS   IMAGE             PORTS
rgk39ijvtvsq   redis-demo               replicated   1/1        redis:latest      *:6379->6379/tcp
lw737pf0uuv5   stack-demo_docker-demo   replicated   3/3        docker-demo:1.0
em7okvpd7e7a   stack-demo_nginx-demo    replicated   1/1        nginx:latest      *:80->80/tcp

# Check where the nginx service is deployed
docker service ps stack-demo_nginx-demo

The nginx service is again deployed on the manager node, and accessing the application produces the same results as before

Now list the overlay networks on the manager node: you can see the default ingress, the custom ov-net, and the stack-demo_ov-net2 network created by Stack from the configuration file

[root@manager dockerdemo]# docker network ls -f SCOPE=swarm
NETWORK ID     NAME                 DRIVER    SCOPE
pupcl8y1k6kk   ingress              overlay   swarm
lu3k1fu7yjw0   ov-net               overlay   swarm
wg81aqso2ovv   stack-demo_ov-net2   overlay   swarm

This wraps up both the underlying principles and the hands-on practice of Docker multi-host communication. Docker Swarm, though, is only a brief stop along the way; the road ultimately leads to Kubernetes

References

Use overlay networks | Docker Documentation

GitHub's introduction to the overlay network

VXLAN Basic Tutorial: Introduction to the Principles of VXLAN Protocol
