[Cloud native] Docker Overlay network with Consul for cross-host container communication

Table of contents

1. What is an overlay network?
2. Implement the overlay environment


1. What is an overlay network?

In Docker, Overlay is a container network driver that creates a virtual network spanning multiple Docker hosts, so that containers on different hosts can communicate with each other over it.

The Overlay network uses VXLAN (Virtual eXtensible LAN) technology to tunnel traffic between hosts. Containers on each Docker host can join the Overlay network and communicate as if they were on the same host, without needing to know anything about the underlying host network.

When a container sends a network request, the Overlay network driver encapsulates the request in a VXLAN packet and sends it over the underlying physical network to the host where the target container is located. The Overlay network driver on the target host decapsulates the VXLAN packet and passes the request to the target container.
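To see this encapsulation for yourself, the overlay's VXLAN device can be inspected on a host that already has an overlay network. This is a sketch; the namespace name `1-abcdef1234` is a placeholder that varies per network:

```shell
# Docker keeps each overlay's bridge and VXLAN devices in a hidden
# network namespace under /run/docker/netns (names vary per network).
ls /run/docker/netns
# Enter one overlay namespace (replace 1-abcdef1234 with a real name)
# and show its VXLAN device; VXLAN uses UDP port 4789 by default.
nsenter --net=/run/docker/netns/1-abcdef1234 ip -d link show type vxlan
```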

The Overlay network has the following characteristics:

  1. Cross-host communication: Containers can run on different Docker hosts and communicate through the Overlay network.

  2. Automatic routing: The Overlay network driver handles routing between containers automatically, and together with Docker's embedded DNS it lets containers reach each other directly by container name.

  3. Security: VXLAN traffic between hosts can optionally be encrypted and authenticated to protect communication between containers and keep data secure.

  4. Scalability: An Overlay network can span thousands of containers on multiple hosts, and it automatically handles containers being added and removed dynamically.

  5. Flexibility: Overlay network can be used together with other network drivers (such as Bridge, Host, etc.) to meet different network requirements.

        The Overlay network is one of the commonly used network drivers in Docker. It provides the ability to communicate across hosts, making it easy for containers to communicate over the network in a distributed environment.
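As a hedged illustration of the security point above: encryption is opt-in rather than automatic. In Docker's swarm mode, the VXLAN data plane can be encrypted when the network is created (the network name `secure_net` is just an example):

```shell
# Swarm-mode sketch: "--opt encrypted" enables IPSec encryption of
# the VXLAN traffic between the participating hosts.
docker network create -d overlay --opt encrypted secure_net
```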

2. Implement the overlay environment

Deployment environment

Host name    IP
consul       192.168.2.5
node1        192.168.2.6
node2        192.168.2.7

 

1. Download the consul image on the consul host

[root@localhost ~]# docker pull progrium/consul

2. Start the consul container

[root@localhost ~]# docker run -d --restart always -p 8400:8400 -p 8500:8500 -p 8600:53/udp -h node1 progrium/consul -server -bootstrap -ui-dir /ui
3ae189bff461fc03e004598c399b5746085918a84effc0cea8c15306bfa53e3b
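Before moving on, it is worth checking that consul actually came up. A minimal sketch, assuming the consul host is reachable at 192.168.2.5:

```shell
# The leader endpoint returns the address of the elected consul server;
# an empty reply ("") means the cluster has no leader yet.
curl http://192.168.2.5:8500/v1/status/leader
# The web UI started with "-ui-dir /ui" is served on the same port:
#   http://192.168.2.5:8500/ui
```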

3. Stop the firewall on all three hosts

[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# iptables -F
[root@localhost ~]# iptables-save
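Note that `systemctl stop` only lasts until the next reboot, and `iptables-save` merely prints the current rules. For a lab environment you may also want to disable the service permanently; a sketch:

```shell
# Stop firewalld now and keep it from starting at boot (lab use only).
systemctl disable --now firewalld
# Verify that no filter rules remain.
iptables -L -n
```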

4. Configure the Docker daemon on the nodes to use consul as the cluster store

[root@localhost ~]# vi /usr/lib/systemd/system/docker.service 
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -H tcp://0.0.0.0:2376 --cluster-store=consul://192.168.2.5:8500 --cluster-advertise=ens33:2376
[root@localhost ~]# systemctl daemon-reload
[root@localhost ~]# systemctl restart docker
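After restarting, you can confirm that the daemon picked up the flags. On Docker versions that still support the legacy cluster store, `docker info` reports it; the output shown is indicative only:

```shell
# Grep the daemon info for the cluster-store settings.
docker info 2>/dev/null | grep -i cluster
# Typically prints something like:
#   Cluster Store: consul://192.168.2.5:8500
#   Cluster Advertise: 192.168.2.6:2376
```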

Note: the legacy --cluster-store/--cluster-advertise daemon flags were removed from newer Docker releases, so you need to downgrade Docker to a version that still supports them (e.g. 20.10.x) in order to connect to consul:

[root@localhost ~]# yum -y remove docker-ce
[root@localhost ~]# yum -y install docker-ce-20.10.24-3.el7

5. Create the overlay network on one of the nodes

[root@localhost ~]# docker network create -d overlay --subnet 192.168.6.0/24 --gateway 192.168.6.254 ov_net1
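Because the network metadata is stored in Consul rather than on a single host, the new network should be visible from the other node as well. A quick check using standard Docker CLI flags:

```shell
# Run on the node that did NOT create the network:
docker network ls --filter name=ov_net1
# Inspect the subnet/gateway that were configured at creation time.
docker network inspect --format '{{json .IPAM.Config}}' ov_net1
```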

6. Start a container attached to the overlay network on each of the two nodes, and verify the overlay network

# on node1
[root@localhost ~]# docker run -it --name test1 --network ov_net1 busybox
# on node2
[root@localhost ~]# docker run -it --name test2 --network ov_net1 busybox
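Since `docker run -it` drops you into the busybox shell, you can look up the address each container received from a second terminal on the respective node. The exact IPs depend on allocation order, so treat this as illustrative:

```shell
# Print test1's address on the overlay network (run on node1).
docker inspect \
  --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' test1
```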

7. Ping one container from the other across the overlay network

/ # ping 192.168.6.1
PING 192.168.6.1 (192.168.6.1): 56 data bytes
64 bytes from 192.168.6.1: seq=0 ttl=64 time=0.450 ms
64 bytes from 192.168.6.1: seq=1 ttl=64 time=0.272 ms
64 bytes from 192.168.6.1: seq=2 ttl=64 time=0.379 ms
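On user-defined networks, Docker's embedded DNS usually resolves container names as well, so a name-based ping from inside test1 should also work (behavior depends on the Docker version, so treat this as a sketch):

```shell
# Inside the test1 container's shell:
ping -c 2 test2
```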

Origin blog.csdn.net/weixin_53678904/article/details/131727514